Oct 8 19:51:07.910547 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 8 18:24:27 -00 2024 Oct 8 19:51:07.910578 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5 Oct 8 19:51:07.910589 kernel: BIOS-provided physical RAM map: Oct 8 19:51:07.910596 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Oct 8 19:51:07.910602 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Oct 8 19:51:07.910608 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Oct 8 19:51:07.910615 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Oct 8 19:51:07.910622 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Oct 8 19:51:07.910628 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Oct 8 19:51:07.910634 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Oct 8 19:51:07.910646 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Oct 8 19:51:07.910652 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Oct 8 19:51:07.910658 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Oct 8 19:51:07.910665 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Oct 8 19:51:07.910673 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Oct 8 19:51:07.910682 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Oct 8 19:51:07.910691 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Oct 8 19:51:07.910698 
kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Oct 8 19:51:07.910705 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Oct 8 19:51:07.910711 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Oct 8 19:51:07.910718 kernel: NX (Execute Disable) protection: active Oct 8 19:51:07.910725 kernel: APIC: Static calls initialized Oct 8 19:51:07.910731 kernel: efi: EFI v2.7 by EDK II Oct 8 19:51:07.910738 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118 Oct 8 19:51:07.910745 kernel: SMBIOS 2.8 present. Oct 8 19:51:07.910751 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Oct 8 19:51:07.910758 kernel: Hypervisor detected: KVM Oct 8 19:51:07.910767 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 8 19:51:07.910774 kernel: kvm-clock: using sched offset of 5510964578 cycles Oct 8 19:51:07.910781 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 8 19:51:07.910789 kernel: tsc: Detected 2794.748 MHz processor Oct 8 19:51:07.910796 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 8 19:51:07.910803 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 8 19:51:07.910810 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Oct 8 19:51:07.910817 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Oct 8 19:51:07.910824 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 8 19:51:07.910833 kernel: Using GB pages for direct mapping Oct 8 19:51:07.910840 kernel: Secure boot disabled Oct 8 19:51:07.910847 kernel: ACPI: Early table checksum verification disabled Oct 8 19:51:07.910854 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Oct 8 19:51:07.910867 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Oct 8 19:51:07.910875 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 
BOCHS BXPC 00000001 BXPC 00000001) Oct 8 19:51:07.910882 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 19:51:07.910892 kernel: ACPI: FACS 0x000000009CBDD000 000040 Oct 8 19:51:07.910899 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 19:51:07.910906 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 19:51:07.910913 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 19:51:07.910921 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 19:51:07.910928 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Oct 8 19:51:07.910935 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Oct 8 19:51:07.910945 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] Oct 8 19:51:07.910953 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Oct 8 19:51:07.910960 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Oct 8 19:51:07.910967 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Oct 8 19:51:07.910974 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Oct 8 19:51:07.910981 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Oct 8 19:51:07.910988 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Oct 8 19:51:07.910997 kernel: No NUMA configuration found Oct 8 19:51:07.911005 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Oct 8 19:51:07.911036 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Oct 8 19:51:07.911043 kernel: Zone ranges: Oct 8 19:51:07.911050 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 8 19:51:07.911057 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Oct 8 19:51:07.911064 kernel: Normal empty Oct 8 19:51:07.911072 
kernel: Movable zone start for each node Oct 8 19:51:07.911079 kernel: Early memory node ranges Oct 8 19:51:07.911086 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Oct 8 19:51:07.911093 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Oct 8 19:51:07.911100 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Oct 8 19:51:07.911109 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Oct 8 19:51:07.911117 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Oct 8 19:51:07.911124 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Oct 8 19:51:07.911131 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Oct 8 19:51:07.911140 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 8 19:51:07.911148 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Oct 8 19:51:07.911155 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Oct 8 19:51:07.911162 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 8 19:51:07.911169 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Oct 8 19:51:07.911179 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Oct 8 19:51:07.911186 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Oct 8 19:51:07.911193 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 8 19:51:07.911201 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 8 19:51:07.911208 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 8 19:51:07.911215 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 8 19:51:07.911222 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 8 19:51:07.911229 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 8 19:51:07.911236 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 8 19:51:07.911246 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 8 19:51:07.911253 kernel: ACPI: Using ACPI 
(MADT) for SMP configuration information Oct 8 19:51:07.911260 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 8 19:51:07.911267 kernel: TSC deadline timer available Oct 8 19:51:07.911275 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Oct 8 19:51:07.911282 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Oct 8 19:51:07.911289 kernel: kvm-guest: KVM setup pv remote TLB flush Oct 8 19:51:07.911296 kernel: kvm-guest: setup PV sched yield Oct 8 19:51:07.911303 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Oct 8 19:51:07.911313 kernel: Booting paravirtualized kernel on KVM Oct 8 19:51:07.911320 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 8 19:51:07.911327 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Oct 8 19:51:07.911335 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u524288 Oct 8 19:51:07.911342 kernel: pcpu-alloc: s196904 r8192 d32472 u524288 alloc=1*2097152 Oct 8 19:51:07.911349 kernel: pcpu-alloc: [0] 0 1 2 3 Oct 8 19:51:07.911356 kernel: kvm-guest: PV spinlocks enabled Oct 8 19:51:07.911363 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 8 19:51:07.911373 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5 Oct 8 19:51:07.911384 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Oct 8 19:51:07.911391 kernel: random: crng init done Oct 8 19:51:07.911398 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 8 19:51:07.911405 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 8 19:51:07.911412 kernel: Fallback order for Node 0: 0 Oct 8 19:51:07.911420 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Oct 8 19:51:07.911427 kernel: Policy zone: DMA32 Oct 8 19:51:07.911435 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 8 19:51:07.911445 kernel: Memory: 2395616K/2567000K available (12288K kernel code, 2305K rwdata, 22716K rodata, 42828K init, 2360K bss, 171124K reserved, 0K cma-reserved) Oct 8 19:51:07.911452 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 8 19:51:07.911460 kernel: ftrace: allocating 37784 entries in 148 pages Oct 8 19:51:07.911467 kernel: ftrace: allocated 148 pages with 3 groups Oct 8 19:51:07.911474 kernel: Dynamic Preempt: voluntary Oct 8 19:51:07.911501 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 8 19:51:07.911513 kernel: rcu: RCU event tracing is enabled. Oct 8 19:51:07.911520 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 8 19:51:07.911528 kernel: Trampoline variant of Tasks RCU enabled. Oct 8 19:51:07.911535 kernel: Rude variant of Tasks RCU enabled. Oct 8 19:51:07.911543 kernel: Tracing variant of Tasks RCU enabled. Oct 8 19:51:07.911550 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 8 19:51:07.911560 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 8 19:51:07.911568 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Oct 8 19:51:07.911575 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Oct 8 19:51:07.911583 kernel: Console: colour dummy device 80x25 Oct 8 19:51:07.911593 kernel: printk: console [ttyS0] enabled Oct 8 19:51:07.911603 kernel: ACPI: Core revision 20230628 Oct 8 19:51:07.911611 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Oct 8 19:51:07.911618 kernel: APIC: Switch to symmetric I/O mode setup Oct 8 19:51:07.911626 kernel: x2apic enabled Oct 8 19:51:07.911633 kernel: APIC: Switched APIC routing to: physical x2apic Oct 8 19:51:07.911641 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Oct 8 19:51:07.911649 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Oct 8 19:51:07.911656 kernel: kvm-guest: setup PV IPIs Oct 8 19:51:07.911664 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 8 19:51:07.911674 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Oct 8 19:51:07.911681 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Oct 8 19:51:07.911689 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Oct 8 19:51:07.911697 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Oct 8 19:51:07.911704 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Oct 8 19:51:07.911712 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 8 19:51:07.911719 kernel: Spectre V2 : Mitigation: Retpolines Oct 8 19:51:07.911727 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Oct 8 19:51:07.911734 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Oct 8 19:51:07.911744 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Oct 8 19:51:07.911752 kernel: RETBleed: Mitigation: untrained return thunk Oct 8 19:51:07.911761 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 8 19:51:07.911769 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Oct 8 19:51:07.911777 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Oct 8 19:51:07.911785 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Oct 8 19:51:07.911792 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Oct 8 19:51:07.911800 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 8 19:51:07.911810 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 8 19:51:07.911817 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 8 19:51:07.911825 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 8 19:51:07.911832 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Oct 8 19:51:07.911840 kernel: Freeing SMP alternatives memory: 32K Oct 8 19:51:07.911847 kernel: pid_max: default: 32768 minimum: 301 Oct 8 19:51:07.911855 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Oct 8 19:51:07.911862 kernel: landlock: Up and running. Oct 8 19:51:07.911870 kernel: SELinux: Initializing. Oct 8 19:51:07.911880 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 8 19:51:07.911887 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 8 19:51:07.911895 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Oct 8 19:51:07.911902 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Oct 8 19:51:07.911910 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Oct 8 19:51:07.911917 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Oct 8 19:51:07.911925 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Oct 8 19:51:07.911932 kernel: ... version: 0 Oct 8 19:51:07.911940 kernel: ... bit width: 48 Oct 8 19:51:07.911950 kernel: ... generic registers: 6 Oct 8 19:51:07.911958 kernel: ... value mask: 0000ffffffffffff Oct 8 19:51:07.911965 kernel: ... max period: 00007fffffffffff Oct 8 19:51:07.911973 kernel: ... fixed-purpose events: 0 Oct 8 19:51:07.911980 kernel: ... event mask: 000000000000003f Oct 8 19:51:07.911987 kernel: signal: max sigframe size: 1776 Oct 8 19:51:07.911995 kernel: rcu: Hierarchical SRCU implementation. Oct 8 19:51:07.912003 kernel: rcu: Max phase no-delay instances is 400. Oct 8 19:51:07.912022 kernel: smp: Bringing up secondary CPUs ... Oct 8 19:51:07.912029 kernel: smpboot: x86: Booting SMP configuration: Oct 8 19:51:07.912040 kernel: .... 
node #0, CPUs: #1 #2 #3 Oct 8 19:51:07.912047 kernel: smp: Brought up 1 node, 4 CPUs Oct 8 19:51:07.912055 kernel: smpboot: Max logical packages: 1 Oct 8 19:51:07.912062 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Oct 8 19:51:07.912070 kernel: devtmpfs: initialized Oct 8 19:51:07.912077 kernel: x86/mm: Memory block size: 128MB Oct 8 19:51:07.912085 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Oct 8 19:51:07.912092 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Oct 8 19:51:07.912100 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Oct 8 19:51:07.912110 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Oct 8 19:51:07.912118 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Oct 8 19:51:07.912126 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 8 19:51:07.912133 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 8 19:51:07.912143 kernel: pinctrl core: initialized pinctrl subsystem Oct 8 19:51:07.912151 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 8 19:51:07.912159 kernel: audit: initializing netlink subsys (disabled) Oct 8 19:51:07.912166 kernel: audit: type=2000 audit(1728417067.107:1): state=initialized audit_enabled=0 res=1 Oct 8 19:51:07.912176 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 8 19:51:07.912184 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 8 19:51:07.912191 kernel: cpuidle: using governor menu Oct 8 19:51:07.912199 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 8 19:51:07.912207 kernel: dca service started, version 1.12.1 Oct 8 19:51:07.912214 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Oct 8 19:51:07.912222 kernel: 
PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Oct 8 19:51:07.912230 kernel: PCI: Using configuration type 1 for base access Oct 8 19:51:07.912237 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Oct 8 19:51:07.912248 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 8 19:51:07.912255 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Oct 8 19:51:07.912263 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 8 19:51:07.912270 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 8 19:51:07.912278 kernel: ACPI: Added _OSI(Module Device) Oct 8 19:51:07.912285 kernel: ACPI: Added _OSI(Processor Device) Oct 8 19:51:07.912293 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 8 19:51:07.912300 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 8 19:51:07.912308 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 8 19:51:07.912318 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Oct 8 19:51:07.912325 kernel: ACPI: Interpreter enabled Oct 8 19:51:07.912333 kernel: ACPI: PM: (supports S0 S3 S5) Oct 8 19:51:07.912340 kernel: ACPI: Using IOAPIC for interrupt routing Oct 8 19:51:07.912348 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 8 19:51:07.912355 kernel: PCI: Using E820 reservations for host bridge windows Oct 8 19:51:07.912363 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Oct 8 19:51:07.912370 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 8 19:51:07.912593 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 8 19:51:07.912734 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Oct 8 19:51:07.912862 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Oct 8 19:51:07.912872 kernel: PCI host bridge to bus 0000:00 Oct 8 19:51:07.913048 
kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 8 19:51:07.913178 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 8 19:51:07.913294 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 8 19:51:07.913414 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Oct 8 19:51:07.913540 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Oct 8 19:51:07.913655 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Oct 8 19:51:07.913769 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 8 19:51:07.913920 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Oct 8 19:51:07.914084 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Oct 8 19:51:07.914215 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Oct 8 19:51:07.914348 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Oct 8 19:51:07.914472 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Oct 8 19:51:07.914612 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Oct 8 19:51:07.914740 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 8 19:51:07.914888 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Oct 8 19:51:07.915032 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Oct 8 19:51:07.915162 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Oct 8 19:51:07.915294 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Oct 8 19:51:07.915465 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Oct 8 19:51:07.915604 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Oct 8 19:51:07.915731 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Oct 8 19:51:07.915856 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Oct 8 19:51:07.915999 kernel: pci 0000:00:04.0: 
[1af4:1000] type 00 class 0x020000 Oct 8 19:51:07.916148 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Oct 8 19:51:07.916274 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Oct 8 19:51:07.916398 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Oct 8 19:51:07.916536 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Oct 8 19:51:07.916678 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Oct 8 19:51:07.916807 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Oct 8 19:51:07.916949 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Oct 8 19:51:07.917154 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Oct 8 19:51:07.917284 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Oct 8 19:51:07.917426 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Oct 8 19:51:07.917561 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Oct 8 19:51:07.917572 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 8 19:51:07.917580 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 8 19:51:07.917588 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 8 19:51:07.917595 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 8 19:51:07.917608 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Oct 8 19:51:07.917615 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Oct 8 19:51:07.917623 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Oct 8 19:51:07.917630 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Oct 8 19:51:07.917638 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Oct 8 19:51:07.917645 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Oct 8 19:51:07.917653 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Oct 8 19:51:07.917661 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Oct 8 19:51:07.917668 
kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Oct 8 19:51:07.917678 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Oct 8 19:51:07.917686 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Oct 8 19:51:07.917694 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Oct 8 19:51:07.917701 kernel: iommu: Default domain type: Translated Oct 8 19:51:07.917709 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 8 19:51:07.917717 kernel: efivars: Registered efivars operations Oct 8 19:51:07.917724 kernel: PCI: Using ACPI for IRQ routing Oct 8 19:51:07.917732 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 8 19:51:07.917739 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Oct 8 19:51:07.917750 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Oct 8 19:51:07.917758 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Oct 8 19:51:07.917765 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Oct 8 19:51:07.917891 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Oct 8 19:51:07.918124 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Oct 8 19:51:07.918259 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 8 19:51:07.918269 kernel: vgaarb: loaded Oct 8 19:51:07.918277 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Oct 8 19:51:07.918290 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Oct 8 19:51:07.918298 kernel: clocksource: Switched to clocksource kvm-clock Oct 8 19:51:07.918306 kernel: VFS: Disk quotas dquot_6.6.0 Oct 8 19:51:07.918313 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 8 19:51:07.918321 kernel: pnp: PnP ACPI init Oct 8 19:51:07.918488 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Oct 8 19:51:07.918509 kernel: pnp: PnP ACPI: found 6 devices Oct 8 19:51:07.918517 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, 
max_idle_ns: 2085701024 ns Oct 8 19:51:07.918525 kernel: NET: Registered PF_INET protocol family Oct 8 19:51:07.918536 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 8 19:51:07.918544 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 8 19:51:07.918552 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 8 19:51:07.918560 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 8 19:51:07.918568 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Oct 8 19:51:07.918576 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 8 19:51:07.918584 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 8 19:51:07.918592 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 8 19:51:07.918602 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 8 19:51:07.918610 kernel: NET: Registered PF_XDP protocol family Oct 8 19:51:07.918739 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Oct 8 19:51:07.918865 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Oct 8 19:51:07.918983 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 8 19:51:07.919116 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 8 19:51:07.919232 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 8 19:51:07.919350 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Oct 8 19:51:07.919470 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Oct 8 19:51:07.919603 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Oct 8 19:51:07.919614 kernel: PCI: CLS 0 bytes, default 64 Oct 8 19:51:07.919622 kernel: Initialise system trusted keyrings Oct 8 19:51:07.919630 kernel: workingset: timestamp_bits=39 
max_order=20 bucket_order=0 Oct 8 19:51:07.919637 kernel: Key type asymmetric registered Oct 8 19:51:07.919645 kernel: Asymmetric key parser 'x509' registered Oct 8 19:51:07.919653 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Oct 8 19:51:07.919661 kernel: io scheduler mq-deadline registered Oct 8 19:51:07.919672 kernel: io scheduler kyber registered Oct 8 19:51:07.919680 kernel: io scheduler bfq registered Oct 8 19:51:07.919688 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 8 19:51:07.919696 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Oct 8 19:51:07.919704 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Oct 8 19:51:07.919712 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Oct 8 19:51:07.919724 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 8 19:51:07.919732 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 8 19:51:07.919740 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 8 19:51:07.919750 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 8 19:51:07.919758 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 8 19:51:07.919905 kernel: rtc_cmos 00:04: RTC can wake from S4 Oct 8 19:51:07.919917 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 8 19:51:07.920051 kernel: rtc_cmos 00:04: registered as rtc0 Oct 8 19:51:07.920172 kernel: rtc_cmos 00:04: setting system clock to 2024-10-08T19:51:07 UTC (1728417067) Oct 8 19:51:07.920291 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Oct 8 19:51:07.920301 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Oct 8 19:51:07.920313 kernel: efifb: probing for efifb Oct 8 19:51:07.920321 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Oct 8 19:51:07.920329 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Oct 8 19:51:07.920337 kernel: efifb: scrolling: redraw Oct 8 
19:51:07.920344 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Oct 8 19:51:07.920352 kernel: Console: switching to colour frame buffer device 100x37 Oct 8 19:51:07.920380 kernel: fb0: EFI VGA frame buffer device Oct 8 19:51:07.920391 kernel: pstore: Using crash dump compression: deflate Oct 8 19:51:07.920399 kernel: pstore: Registered efi_pstore as persistent store backend Oct 8 19:51:07.920410 kernel: NET: Registered PF_INET6 protocol family Oct 8 19:51:07.920418 kernel: Segment Routing with IPv6 Oct 8 19:51:07.920425 kernel: In-situ OAM (IOAM) with IPv6 Oct 8 19:51:07.920433 kernel: NET: Registered PF_PACKET protocol family Oct 8 19:51:07.920441 kernel: Key type dns_resolver registered Oct 8 19:51:07.920449 kernel: IPI shorthand broadcast: enabled Oct 8 19:51:07.920457 kernel: sched_clock: Marking stable (1173003259, 123050801)->(1357099847, -61045787) Oct 8 19:51:07.920465 kernel: registered taskstats version 1 Oct 8 19:51:07.920473 kernel: Loading compiled-in X.509 certificates Oct 8 19:51:07.920484 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 14ce23fc5070d0471461f1dd6e298a5588e7ba8f' Oct 8 19:51:07.920500 kernel: Key type .fscrypt registered Oct 8 19:51:07.920509 kernel: Key type fscrypt-provisioning registered Oct 8 19:51:07.920517 kernel: ima: No TPM chip found, activating TPM-bypass! 
Oct 8 19:51:07.920525 kernel: ima: Allocated hash algorithm: sha1 Oct 8 19:51:07.920533 kernel: ima: No architecture policies found Oct 8 19:51:07.920541 kernel: clk: Disabling unused clocks Oct 8 19:51:07.920549 kernel: Freeing unused kernel image (initmem) memory: 42828K Oct 8 19:51:07.920557 kernel: Write protecting the kernel read-only data: 36864k Oct 8 19:51:07.920568 kernel: Freeing unused kernel image (rodata/data gap) memory: 1860K Oct 8 19:51:07.920576 kernel: Run /init as init process Oct 8 19:51:07.920584 kernel: with arguments: Oct 8 19:51:07.920592 kernel: /init Oct 8 19:51:07.920599 kernel: with environment: Oct 8 19:51:07.920607 kernel: HOME=/ Oct 8 19:51:07.920615 kernel: TERM=linux Oct 8 19:51:07.920623 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 8 19:51:07.920633 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 8 19:51:07.920646 systemd[1]: Detected virtualization kvm. Oct 8 19:51:07.920655 systemd[1]: Detected architecture x86-64. Oct 8 19:51:07.920663 systemd[1]: Running in initrd. Oct 8 19:51:07.920674 systemd[1]: No hostname configured, using default hostname. Oct 8 19:51:07.920689 systemd[1]: Hostname set to . Oct 8 19:51:07.920701 systemd[1]: Initializing machine ID from VM UUID. Oct 8 19:51:07.920712 systemd[1]: Queued start job for default target initrd.target. Oct 8 19:51:07.920725 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 8 19:51:07.920751 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 8 19:51:07.920761 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Oct 8 19:51:07.920773 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 8 19:51:07.920794 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 8 19:51:07.920809 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 8 19:51:07.920830 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 8 19:51:07.920845 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 8 19:51:07.920860 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 19:51:07.920878 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 8 19:51:07.920894 systemd[1]: Reached target paths.target - Path Units.
Oct 8 19:51:07.920915 systemd[1]: Reached target slices.target - Slice Units.
Oct 8 19:51:07.920923 systemd[1]: Reached target swap.target - Swaps.
Oct 8 19:51:07.920932 systemd[1]: Reached target timers.target - Timer Units.
Oct 8 19:51:07.920940 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 8 19:51:07.920949 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 8 19:51:07.920957 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 8 19:51:07.920966 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 8 19:51:07.920974 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 19:51:07.920983 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:51:07.920994 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:51:07.921003 systemd[1]: Reached target sockets.target - Socket Units.
Oct 8 19:51:07.921027 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 8 19:51:07.921039 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 8 19:51:07.921051 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 8 19:51:07.921061 systemd[1]: Starting systemd-fsck-usr.service...
Oct 8 19:51:07.921070 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 8 19:51:07.921078 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 8 19:51:07.921087 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:51:07.921099 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 8 19:51:07.921107 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:51:07.921116 systemd[1]: Finished systemd-fsck-usr.service.
Oct 8 19:51:07.921145 systemd-journald[192]: Collecting audit messages is disabled.
Oct 8 19:51:07.921167 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 8 19:51:07.921176 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:51:07.921185 systemd-journald[192]: Journal started
Oct 8 19:51:07.921205 systemd-journald[192]: Runtime Journal (/run/log/journal/2a14975738174560846f86052e67eb06) is 6.0M, max 48.3M, 42.2M free.
Oct 8 19:51:07.921358 systemd-modules-load[193]: Inserted module 'overlay'
Oct 8 19:51:07.925860 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 19:51:07.928239 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 8 19:51:07.935554 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 8 19:51:07.953053 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 8 19:51:07.955235 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 8 19:51:07.958347 kernel: Bridge firewalling registered
Oct 8 19:51:07.955718 systemd-modules-load[193]: Inserted module 'br_netfilter'
Oct 8 19:51:07.960166 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 8 19:51:07.963431 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:51:07.966516 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:51:07.969702 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:51:07.976223 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 8 19:51:07.979597 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 8 19:51:07.982264 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 8 19:51:07.994116 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:51:07.996668 dracut-cmdline[224]: dracut-dracut-053
Oct 8 19:51:08.000483 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5
Oct 8 19:51:08.009261 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 8 19:51:08.045593 systemd-resolved[239]: Positive Trust Anchors:
Oct 8 19:51:08.045622 systemd-resolved[239]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 8 19:51:08.045655 systemd-resolved[239]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 8 19:51:08.057271 systemd-resolved[239]: Defaulting to hostname 'linux'.
Oct 8 19:51:08.059553 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 8 19:51:08.060174 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 8 19:51:08.100072 kernel: SCSI subsystem initialized
Oct 8 19:51:08.112062 kernel: Loading iSCSI transport class v2.0-870.
Oct 8 19:51:08.127046 kernel: iscsi: registered transport (tcp)
Oct 8 19:51:08.155046 kernel: iscsi: registered transport (qla4xxx)
Oct 8 19:51:08.155122 kernel: QLogic iSCSI HBA Driver
Oct 8 19:51:08.311602 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 8 19:51:08.320189 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 8 19:51:08.347044 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 8 19:51:08.347087 kernel: device-mapper: uevent: version 1.0.3
Oct 8 19:51:08.349032 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Oct 8 19:51:08.393036 kernel: raid6: avx2x4 gen() 20441 MB/s
Oct 8 19:51:08.410030 kernel: raid6: avx2x2 gen() 19795 MB/s
Oct 8 19:51:08.427368 kernel: raid6: avx2x1 gen() 23326 MB/s
Oct 8 19:51:08.427399 kernel: raid6: using algorithm avx2x1 gen() 23326 MB/s
Oct 8 19:51:08.445416 kernel: raid6: .... xor() 10590 MB/s, rmw enabled
Oct 8 19:51:08.445535 kernel: raid6: using avx2x2 recovery algorithm
Oct 8 19:51:08.471072 kernel: xor: automatically using best checksumming function avx
Oct 8 19:51:08.647075 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 8 19:51:08.662298 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 8 19:51:08.673241 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 19:51:08.686607 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Oct 8 19:51:08.691701 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 19:51:08.708144 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 8 19:51:08.722799 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation
Oct 8 19:51:08.757977 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 8 19:51:08.774166 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 8 19:51:08.848559 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 19:51:08.859232 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 8 19:51:08.873570 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 8 19:51:08.875686 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 8 19:51:08.875944 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 19:51:08.881323 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 8 19:51:08.890203 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 8 19:51:08.896095 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Oct 8 19:51:08.896404 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Oct 8 19:51:08.902428 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 8 19:51:08.906864 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 8 19:51:08.906893 kernel: GPT:9289727 != 19775487
Oct 8 19:51:08.906911 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 8 19:51:08.906937 kernel: GPT:9289727 != 19775487
Oct 8 19:51:08.906965 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 8 19:51:08.906992 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 8 19:51:08.912041 kernel: cryptd: max_cpu_qlen set to 1000
Oct 8 19:51:08.929077 kernel: AVX2 version of gcm_enc/dec engaged.
Oct 8 19:51:08.929136 kernel: AES CTR mode by8 optimization enabled
Oct 8 19:51:08.933030 kernel: libata version 3.00 loaded.
Oct 8 19:51:08.935729 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 8 19:51:08.935879 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:51:08.940817 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 19:51:08.948327 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 19:51:08.948933 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:51:08.954028 kernel: BTRFS: device fsid a8680da2-059a-4648-a8e8-f62925ab33ec devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (468)
Oct 8 19:51:08.955196 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:51:08.958576 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (475)
Oct 8 19:51:08.960765 kernel: ahci 0000:00:1f.2: version 3.0
Oct 8 19:51:08.961000 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Oct 8 19:51:08.961061 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Oct 8 19:51:08.962946 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Oct 8 19:51:08.965602 kernel: scsi host0: ahci
Oct 8 19:51:08.965792 kernel: scsi host1: ahci
Oct 8 19:51:08.965353 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:51:08.968236 kernel: scsi host2: ahci
Oct 8 19:51:08.971182 kernel: scsi host3: ahci
Oct 8 19:51:08.973997 kernel: scsi host4: ahci
Oct 8 19:51:08.974202 kernel: scsi host5: ahci
Oct 8 19:51:08.974371 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Oct 8 19:51:08.974384 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Oct 8 19:51:08.975078 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Oct 8 19:51:08.975102 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Oct 8 19:51:08.975118 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Oct 8 19:51:08.975128 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Oct 8 19:51:08.983849 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Oct 8 19:51:08.989794 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:51:08.998309 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Oct 8 19:51:09.008033 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Oct 8 19:51:09.009311 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Oct 8 19:51:09.016541 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 8 19:51:09.033261 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 8 19:51:09.036833 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 19:51:09.043540 disk-uuid[567]: Primary Header is updated.
Oct 8 19:51:09.043540 disk-uuid[567]: Secondary Entries is updated.
Oct 8 19:51:09.043540 disk-uuid[567]: Secondary Header is updated.
Oct 8 19:51:09.047046 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 8 19:51:09.052048 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 8 19:51:09.082782 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:51:09.287275 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Oct 8 19:51:09.287348 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Oct 8 19:51:09.287362 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Oct 8 19:51:09.289030 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Oct 8 19:51:09.289059 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Oct 8 19:51:09.290029 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 8 19:51:09.291027 kernel: ata3.00: applying bridge limits
Oct 8 19:51:09.291039 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Oct 8 19:51:09.292028 kernel: ata3.00: configured for UDMA/100
Oct 8 19:51:09.294038 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Oct 8 19:51:09.341539 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 8 19:51:09.341762 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 8 19:51:09.356238 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Oct 8 19:51:10.069050 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 8 19:51:10.069522 disk-uuid[568]: The operation has completed successfully.
Oct 8 19:51:10.103260 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 8 19:51:10.103393 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 8 19:51:10.135419 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Oct 8 19:51:10.152318 sh[591]: Success
Oct 8 19:51:10.167041 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Oct 8 19:51:10.208906 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Oct 8 19:51:10.224171 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Oct 8 19:51:10.227481 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Oct 8 19:51:10.265787 kernel: BTRFS info (device dm-0): first mount of filesystem a8680da2-059a-4648-a8e8-f62925ab33ec
Oct 8 19:51:10.265879 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Oct 8 19:51:10.265909 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Oct 8 19:51:10.266976 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 8 19:51:10.267847 kernel: BTRFS info (device dm-0): using free space tree
Oct 8 19:51:10.273754 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Oct 8 19:51:10.275735 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 8 19:51:10.296259 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 8 19:51:10.298181 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 8 19:51:10.308360 kernel: BTRFS info (device vda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 8 19:51:10.308393 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 8 19:51:10.308405 kernel: BTRFS info (device vda6): using free space tree
Oct 8 19:51:10.312054 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 8 19:51:10.321910 systemd[1]: mnt-oem.mount: Deactivated successfully.
Oct 8 19:51:10.323418 kernel: BTRFS info (device vda6): last unmount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 8 19:51:10.422609 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 8 19:51:10.439165 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 8 19:51:10.485438 systemd-networkd[769]: lo: Link UP
Oct 8 19:51:10.485451 systemd-networkd[769]: lo: Gained carrier
Oct 8 19:51:10.487447 systemd-networkd[769]: Enumeration completed
Oct 8 19:51:10.487917 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:51:10.487922 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 19:51:10.489727 systemd-networkd[769]: eth0: Link UP
Oct 8 19:51:10.489732 systemd-networkd[769]: eth0: Gained carrier
Oct 8 19:51:10.489739 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:51:10.494066 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 8 19:51:10.498312 systemd[1]: Reached target network.target - Network.
Oct 8 19:51:10.528176 systemd-networkd[769]: eth0: DHCPv4 address 10.0.0.24/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 8 19:51:10.582264 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 8 19:51:10.594251 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 8 19:51:10.659754 ignition[774]: Ignition 2.19.0
Oct 8 19:51:10.659767 ignition[774]: Stage: fetch-offline
Oct 8 19:51:10.659809 ignition[774]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:51:10.659820 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:51:10.659929 ignition[774]: parsed url from cmdline: ""
Oct 8 19:51:10.659934 ignition[774]: no config URL provided
Oct 8 19:51:10.659939 ignition[774]: reading system config file "/usr/lib/ignition/user.ign"
Oct 8 19:51:10.659950 ignition[774]: no config at "/usr/lib/ignition/user.ign"
Oct 8 19:51:10.659982 ignition[774]: op(1): [started] loading QEMU firmware config module
Oct 8 19:51:10.659987 ignition[774]: op(1): executing: "modprobe" "qemu_fw_cfg"
Oct 8 19:51:10.668165 ignition[774]: op(1): [finished] loading QEMU firmware config module
Oct 8 19:51:10.707484 ignition[774]: parsing config with SHA512: c34c8875f9dfa4ba179c92e08f04ba976cc6782600feb73f271a77f139efb5836a0b6ecbc95283a68158dbf1a69c59b51f1ce7776437581c45fed4959d9ad6d1
Oct 8 19:51:10.714556 unknown[774]: fetched base config from "system"
Oct 8 19:51:10.714571 unknown[774]: fetched user config from "qemu"
Oct 8 19:51:10.715118 ignition[774]: fetch-offline: fetch-offline passed
Oct 8 19:51:10.715190 ignition[774]: Ignition finished successfully
Oct 8 19:51:10.717896 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 8 19:51:10.720110 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Oct 8 19:51:10.731323 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 8 19:51:10.765757 ignition[784]: Ignition 2.19.0
Oct 8 19:51:10.765772 ignition[784]: Stage: kargs
Oct 8 19:51:10.766060 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:51:10.766074 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:51:10.771085 ignition[784]: kargs: kargs passed
Oct 8 19:51:10.771850 ignition[784]: Ignition finished successfully
Oct 8 19:51:10.776224 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 8 19:51:10.790242 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 8 19:51:10.823921 ignition[793]: Ignition 2.19.0
Oct 8 19:51:10.823933 ignition[793]: Stage: disks
Oct 8 19:51:10.824128 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:51:10.824140 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:51:10.827436 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 8 19:51:10.824963 ignition[793]: disks: disks passed
Oct 8 19:51:10.829650 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 8 19:51:10.825029 ignition[793]: Ignition finished successfully
Oct 8 19:51:10.831740 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 8 19:51:10.834115 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 8 19:51:10.835205 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 8 19:51:10.836246 systemd[1]: Reached target basic.target - Basic System.
Oct 8 19:51:10.848393 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 8 19:51:10.865652 systemd-fsck[803]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Oct 8 19:51:10.873726 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 8 19:51:10.888257 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 8 19:51:11.016041 kernel: EXT4-fs (vda9): mounted filesystem 1df90f14-3ad0-4280-9b7d-a34f65d70e4d r/w with ordered data mode. Quota mode: none.
Oct 8 19:51:11.016583 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 8 19:51:11.018804 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 8 19:51:11.027185 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 8 19:51:11.029511 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 8 19:51:11.030356 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Oct 8 19:51:11.037123 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (811)
Oct 8 19:51:11.030426 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 8 19:51:11.042963 kernel: BTRFS info (device vda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 8 19:51:11.042996 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 8 19:51:11.043022 kernel: BTRFS info (device vda6): using free space tree
Oct 8 19:51:11.030464 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 8 19:51:11.045143 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 8 19:51:11.047056 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 8 19:51:11.076674 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 8 19:51:11.079495 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 8 19:51:11.120423 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory
Oct 8 19:51:11.124917 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory
Oct 8 19:51:11.130896 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory
Oct 8 19:51:11.136727 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 8 19:51:11.228242 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 8 19:51:11.240092 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 8 19:51:11.246639 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 8 19:51:11.256872 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 8 19:51:11.260315 kernel: BTRFS info (device vda6): last unmount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 8 19:51:11.280904 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 8 19:51:11.288329 ignition[925]: INFO : Ignition 2.19.0
Oct 8 19:51:11.288329 ignition[925]: INFO : Stage: mount
Oct 8 19:51:11.290818 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 19:51:11.290818 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:51:11.290818 ignition[925]: INFO : mount: mount passed
Oct 8 19:51:11.290818 ignition[925]: INFO : Ignition finished successfully
Oct 8 19:51:11.295781 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 8 19:51:11.305261 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 8 19:51:11.316196 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 8 19:51:11.330061 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (937)
Oct 8 19:51:11.330169 kernel: BTRFS info (device vda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 8 19:51:11.332193 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 8 19:51:11.332223 kernel: BTRFS info (device vda6): using free space tree
Oct 8 19:51:11.336039 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 8 19:51:11.338744 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 8 19:51:11.369518 ignition[954]: INFO : Ignition 2.19.0
Oct 8 19:51:11.369518 ignition[954]: INFO : Stage: files
Oct 8 19:51:11.371710 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 19:51:11.371710 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:51:11.371710 ignition[954]: DEBUG : files: compiled without relabeling support, skipping
Oct 8 19:51:11.375430 ignition[954]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 8 19:51:11.375430 ignition[954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 8 19:51:11.380532 ignition[954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 8 19:51:11.382264 ignition[954]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 8 19:51:11.382264 ignition[954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 8 19:51:11.381290 unknown[954]: wrote ssh authorized keys file for user: core
Oct 8 19:51:11.386731 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Oct 8 19:51:11.386731 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Oct 8 19:51:11.426591 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 8 19:51:11.629941 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Oct 8 19:51:11.629941 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Oct 8 19:51:11.634134 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Oct 8 19:51:12.103465 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Oct 8 19:51:12.364570 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Oct 8 19:51:12.364570 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Oct 8 19:51:12.368645 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Oct 8 19:51:12.368645 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 8 19:51:12.368645 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 8 19:51:12.368645 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 8 19:51:12.368645 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 8 19:51:12.368645 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 8 19:51:12.368645 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 8 19:51:12.368645 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 8 19:51:12.368645 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 8 19:51:12.368645 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Oct 8 19:51:12.368645 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Oct 8 19:51:12.368645 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Oct 8 19:51:12.368645 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Oct 8 19:51:12.440293 systemd-networkd[769]: eth0: Gained IPv6LL
Oct 8 19:51:12.639060 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Oct 8 19:51:13.068470 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Oct 8 19:51:13.068470 ignition[954]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Oct 8 19:51:13.072767 ignition[954]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 8 19:51:13.072767 ignition[954]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 8 19:51:13.072767 ignition[954]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Oct 8 19:51:13.072767 ignition[954]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Oct 8 19:51:13.072767 ignition[954]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 8 19:51:13.072767 ignition[954]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 8 19:51:13.072767 ignition[954]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Oct 8 19:51:13.072767 ignition[954]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Oct 8 19:51:13.107673 ignition[954]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 8 19:51:13.112993 ignition[954]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 8 19:51:13.114777 ignition[954]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 8 19:51:13.114777 ignition[954]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Oct 8 19:51:13.114777 ignition[954]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Oct 8 19:51:13.114777 ignition[954]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 8 19:51:13.114777 ignition[954]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 8 19:51:13.114777 ignition[954]: INFO : files: files passed
Oct 8 19:51:13.114777 ignition[954]: INFO : Ignition finished successfully
Oct 8 19:51:13.127146 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 8 19:51:13.134623 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 8 19:51:13.138444 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 8 19:51:13.141975 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 8 19:51:13.143499 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 8 19:51:13.166435 initrd-setup-root-after-ignition[983]: grep: /sysroot/oem/oem-release: No such file or directory
Oct 8 19:51:13.172158 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 19:51:13.172158 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 19:51:13.175742 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 19:51:13.179965 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 8 19:51:13.180655 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 8 19:51:13.189735 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 8 19:51:13.219376 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 8 19:51:13.220671 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 8 19:51:13.223939 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 8 19:51:13.226320 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 8 19:51:13.228662 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 8 19:51:13.231331 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 8 19:51:13.254927 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 8 19:51:13.270413 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 8 19:51:13.293579 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 8 19:51:13.294525 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 19:51:13.296989 systemd[1]: Stopped target timers.target - Timer Units.
Oct 8 19:51:13.300205 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 8 19:51:13.300377 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 8 19:51:13.301371 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 8 19:51:13.301737 systemd[1]: Stopped target basic.target - Basic System.
Oct 8 19:51:13.302089 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 8 19:51:13.308640 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 8 19:51:13.309047 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 8 19:51:13.309477 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 8 19:51:13.309861 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 8 19:51:13.317786 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 8 19:51:13.320505 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 8 19:51:13.320841 systemd[1]: Stopped target swap.target - Swaps.
Oct 8 19:51:13.321404 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 8 19:51:13.321557 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 8 19:51:13.328302 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 8 19:51:13.328929 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 19:51:13.329483 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 8 19:51:13.329683 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 19:51:13.334782 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 8 19:51:13.335073 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 8 19:51:13.340136 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 8 19:51:13.340386 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 8 19:51:13.342588 systemd[1]: Stopped target paths.target - Path Units.
Oct 8 19:51:13.345331 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 8 19:51:13.345530 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 19:51:13.347328 systemd[1]: Stopped target slices.target - Slice Units.
Oct 8 19:51:13.347657 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 8 19:51:13.348073 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 8 19:51:13.348217 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 8 19:51:13.348644 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 8 19:51:13.348764 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 8 19:51:13.355882 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 8 19:51:13.356082 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 8 19:51:13.357793 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 8 19:51:13.357926 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 8 19:51:13.376402 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 8 19:51:13.377905 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 8 19:51:13.381106 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 8 19:51:13.381356 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 19:51:13.384451 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 8 19:51:13.384587 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 8 19:51:13.391466 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 8 19:51:13.391630 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 8 19:51:13.398755 ignition[1009]: INFO : Ignition 2.19.0
Oct 8 19:51:13.398755 ignition[1009]: INFO : Stage: umount
Oct 8 19:51:13.398755 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 19:51:13.398755 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:51:13.398755 ignition[1009]: INFO : umount: umount passed
Oct 8 19:51:13.398755 ignition[1009]: INFO : Ignition finished successfully
Oct 8 19:51:13.396149 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 8 19:51:13.396274 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 8 19:51:13.399655 systemd[1]: Stopped target network.target - Network.
Oct 8 19:51:13.400946 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 8 19:51:13.401088 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 8 19:51:13.403122 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 8 19:51:13.403176 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 8 19:51:13.405644 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 8 19:51:13.405707 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 8 19:51:13.408258 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 8 19:51:13.408314 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 8 19:51:13.411201 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 8 19:51:13.413490 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 8 19:51:13.416161 systemd-networkd[769]: eth0: DHCPv6 lease lost
Oct 8 19:51:13.417428 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 8 19:51:13.418333 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 8 19:51:13.418569 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 8 19:51:13.422679 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 8 19:51:13.422854 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 8 19:51:13.427366 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 8 19:51:13.427454 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 19:51:13.439375 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 8 19:51:13.441459 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 8 19:51:13.441570 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 8 19:51:13.444329 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 8 19:51:13.444431 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:51:13.447118 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 8 19:51:13.447198 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:51:13.449800 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 8 19:51:13.449878 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 8 19:51:13.453034 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 19:51:13.469931 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 8 19:51:13.471249 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 8 19:51:13.474086 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 8 19:51:13.475471 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 19:51:13.479582 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 8 19:51:13.481031 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:51:13.483867 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 8 19:51:13.483939 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:51:13.487625 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 8 19:51:13.488748 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 8 19:51:13.491084 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 8 19:51:13.491160 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 8 19:51:13.494760 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 8 19:51:13.495962 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:51:13.515211 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 8 19:51:13.517903 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 8 19:51:13.517992 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:51:13.520776 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Oct 8 19:51:13.520841 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 8 19:51:13.523775 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 8 19:51:13.523837 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:51:13.526683 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 19:51:13.526752 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:51:13.534776 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 8 19:51:13.534918 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 8 19:51:13.632413 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 8 19:51:13.633483 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 8 19:51:13.635554 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 8 19:51:13.637903 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 8 19:51:13.638939 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 8 19:51:13.653253 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 8 19:51:13.661697 systemd[1]: Switching root.
Oct 8 19:51:13.692966 systemd-journald[192]: Journal stopped
Oct 8 19:51:14.970627 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
Oct 8 19:51:14.970726 kernel: SELinux: policy capability network_peer_controls=1
Oct 8 19:51:14.970740 kernel: SELinux: policy capability open_perms=1
Oct 8 19:51:14.970751 kernel: SELinux: policy capability extended_socket_class=1
Oct 8 19:51:14.970763 kernel: SELinux: policy capability always_check_network=0
Oct 8 19:51:14.970774 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 8 19:51:14.970786 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 8 19:51:14.970801 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 8 19:51:14.970817 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 8 19:51:14.970828 kernel: audit: type=1403 audit(1728417074.197:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 8 19:51:14.970846 systemd[1]: Successfully loaded SELinux policy in 48.842ms.
Oct 8 19:51:14.970866 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.374ms.
Oct 8 19:51:14.970879 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 8 19:51:14.970892 systemd[1]: Detected virtualization kvm.
Oct 8 19:51:14.970906 systemd[1]: Detected architecture x86-64.
Oct 8 19:51:14.970923 systemd[1]: Detected first boot.
Oct 8 19:51:14.970940 systemd[1]: Initializing machine ID from VM UUID.
Oct 8 19:51:14.970953 zram_generator::config[1054]: No configuration found.
Oct 8 19:51:14.970973 systemd[1]: Populated /etc with preset unit settings.
Oct 8 19:51:14.970985 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 8 19:51:14.970997 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 8 19:51:14.971031 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 8 19:51:14.971051 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 8 19:51:14.971069 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 8 19:51:14.971087 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 8 19:51:14.971104 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 8 19:51:14.971121 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 8 19:51:14.971148 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 8 19:51:14.971173 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 8 19:51:14.971196 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 8 19:51:14.971214 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 19:51:14.971232 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 19:51:14.971250 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 8 19:51:14.971267 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 8 19:51:14.971308 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 8 19:51:14.971335 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 8 19:51:14.971353 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 8 19:51:14.971371 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 19:51:14.971389 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 8 19:51:14.971406 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 8 19:51:14.971424 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 8 19:51:14.971441 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 8 19:51:14.971459 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 19:51:14.971487 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 8 19:51:14.971506 systemd[1]: Reached target slices.target - Slice Units.
Oct 8 19:51:14.971523 systemd[1]: Reached target swap.target - Swaps.
Oct 8 19:51:14.971541 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 8 19:51:14.971556 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 8 19:51:14.971572 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 19:51:14.971589 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:51:14.971613 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:51:14.971628 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 8 19:51:14.971644 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 8 19:51:14.971669 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 8 19:51:14.971686 systemd[1]: Mounting media.mount - External Media Directory...
Oct 8 19:51:14.971701 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 19:51:14.971713 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 8 19:51:14.971726 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 8 19:51:14.971738 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 8 19:51:14.971750 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 8 19:51:14.971762 systemd[1]: Reached target machines.target - Containers.
Oct 8 19:51:14.971781 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 8 19:51:14.971794 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:51:14.971806 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 8 19:51:14.971820 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 8 19:51:14.971838 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:51:14.971855 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 8 19:51:14.971871 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:51:14.971887 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 8 19:51:14.971911 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 19:51:14.971926 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 8 19:51:14.971946 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 8 19:51:14.971959 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 8 19:51:14.971971 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 8 19:51:14.971983 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 8 19:51:14.971994 kernel: fuse: init (API version 7.39)
Oct 8 19:51:14.972020 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 8 19:51:14.972035 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 8 19:51:14.972060 kernel: loop: module loaded
Oct 8 19:51:14.972078 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 8 19:51:14.972095 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 8 19:51:14.972113 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 8 19:51:14.972158 systemd-journald[1124]: Collecting audit messages is disabled.
Oct 8 19:51:14.972189 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 8 19:51:14.972206 systemd[1]: Stopped verity-setup.service.
Oct 8 19:51:14.972221 systemd-journald[1124]: Journal started
Oct 8 19:51:14.972257 systemd-journald[1124]: Runtime Journal (/run/log/journal/2a14975738174560846f86052e67eb06) is 6.0M, max 48.3M, 42.2M free.
Oct 8 19:51:14.745834 systemd[1]: Queued start job for default target multi-user.target.
Oct 8 19:51:14.762052 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 8 19:51:14.762605 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 8 19:51:14.980513 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 19:51:14.980597 kernel: ACPI: bus type drm_connector registered
Oct 8 19:51:14.980616 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 8 19:51:14.983196 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 8 19:51:14.984920 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 8 19:51:14.994916 systemd[1]: Mounted media.mount - External Media Directory.
Oct 8 19:51:14.996658 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 8 19:51:14.998241 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 8 19:51:14.999708 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 8 19:51:15.001194 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 8 19:51:15.003171 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:51:15.005072 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 8 19:51:15.005324 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 8 19:51:15.007274 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:51:15.007525 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:51:15.009358 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 8 19:51:15.009655 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 8 19:51:15.011702 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 19:51:15.011927 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 19:51:15.013783 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 8 19:51:15.014030 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 8 19:51:15.016051 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 19:51:15.016267 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 19:51:15.018216 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:51:15.019987 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 8 19:51:15.021738 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 8 19:51:15.038147 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 8 19:51:15.051371 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 8 19:51:15.054944 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 8 19:51:15.057079 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 8 19:51:15.057131 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 8 19:51:15.059804 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Oct 8 19:51:15.067241 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 8 19:51:15.072626 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 8 19:51:15.074184 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:51:15.077313 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 8 19:51:15.083883 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 8 19:51:15.085506 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 19:51:15.087653 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 8 19:51:15.089003 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 19:51:15.099596 systemd-journald[1124]: Time spent on flushing to /var/log/journal/2a14975738174560846f86052e67eb06 is 14.698ms for 993 entries.
Oct 8 19:51:15.099596 systemd-journald[1124]: System Journal (/var/log/journal/2a14975738174560846f86052e67eb06) is 8.0M, max 195.6M, 187.6M free.
Oct 8 19:51:15.342736 systemd-journald[1124]: Received client request to flush runtime journal.
Oct 8 19:51:15.342833 kernel: loop0: detected capacity change from 0 to 205544
Oct 8 19:51:15.342872 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 8 19:51:15.342892 kernel: loop1: detected capacity change from 0 to 140768
Oct 8 19:51:15.098711 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 8 19:51:15.103540 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 8 19:51:15.135734 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 8 19:51:15.139434 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 8 19:51:15.141658 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 19:51:15.147379 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 8 19:51:15.149365 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 8 19:51:15.156302 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Oct 8 19:51:15.172774 udevadm[1175]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Oct 8 19:51:15.190108 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:51:15.246347 systemd-tmpfiles[1170]: ACLs are not supported, ignoring.
Oct 8 19:51:15.246365 systemd-tmpfiles[1170]: ACLs are not supported, ignoring.
Oct 8 19:51:15.256637 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 8 19:51:15.263296 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 8 19:51:15.339834 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 8 19:51:15.341798 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 8 19:51:15.377863 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Oct 8 19:51:15.380150 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 8 19:51:15.382739 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 8 19:51:15.398773 kernel: loop2: detected capacity change from 0 to 142488
Oct 8 19:51:15.401234 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 8 19:51:15.439365 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 8 19:51:15.440098 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Oct 8 19:51:15.488068 kernel: loop3: detected capacity change from 0 to 205544
Oct 8 19:51:15.493328 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Oct 8 19:51:15.493372 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Oct 8 19:51:15.503051 kernel: loop4: detected capacity change from 0 to 140768
Oct 8 19:51:15.503595 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:51:15.522188 kernel: loop5: detected capacity change from 0 to 142488
Oct 8 19:51:15.531076 (sd-merge)[1194]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Oct 8 19:51:15.531725 (sd-merge)[1194]: Merged extensions into '/usr'.
Oct 8 19:51:15.540161 systemd[1]: Reloading requested from client PID 1168 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 8 19:51:15.540179 systemd[1]: Reloading...
Oct 8 19:51:15.640079 zram_generator::config[1222]: No configuration found.
Oct 8 19:51:15.845072 ldconfig[1163]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 8 19:51:15.858093 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:51:15.915568 systemd[1]: Reloading finished in 374 ms.
Oct 8 19:51:15.948596 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 8 19:51:15.950781 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 8 19:51:15.997338 systemd[1]: Starting ensure-sysext.service...
Oct 8 19:51:15.999835 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 8 19:51:16.005853 systemd[1]: Reloading requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)...
Oct 8 19:51:16.005871 systemd[1]: Reloading...
Oct 8 19:51:16.049465 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 8 19:51:16.049979 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 8 19:51:16.051318 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 8 19:51:16.051696 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Oct 8 19:51:16.051785 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Oct 8 19:51:16.059363 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot.
Oct 8 19:51:16.059385 systemd-tmpfiles[1259]: Skipping /boot
Oct 8 19:51:16.137354 zram_generator::config[1295]: No configuration found.
Oct 8 19:51:16.145484 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot.
Oct 8 19:51:16.145701 systemd-tmpfiles[1259]: Skipping /boot
Oct 8 19:51:16.237591 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:51:16.293356 systemd[1]: Reloading finished in 287 ms.
Oct 8 19:51:16.324418 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 8 19:51:16.370360 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 8 19:51:16.525279 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 8 19:51:16.529328 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 8 19:51:16.533753 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 8 19:51:16.537238 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 8 19:51:16.546025 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 19:51:16.546213 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:51:16.559912 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:51:16.566300 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:51:16.568740 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 8 19:51:16.570058 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 8 19:51:16.574244 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 8 19:51:16.575460 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 8 19:51:16.576841 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 8 19:51:16.579548 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 8 19:51:16.581759 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 8 19:51:16.581939 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 8 19:51:16.583828 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 8 19:51:16.584004 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 8 19:51:16.586088 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 8 19:51:16.586288 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 8 19:51:16.586645 augenrules[1348]: No rules Oct 8 19:51:16.588094 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 8 19:51:16.604502 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 8 19:51:16.610095 systemd[1]: Finished ensure-sysext.service. Oct 8 19:51:16.612187 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 8 19:51:16.612508 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 8 19:51:16.618227 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Oct 8 19:51:16.620732 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 8 19:51:16.624564 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 8 19:51:16.631894 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 8 19:51:16.633428 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 8 19:51:16.638644 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 8 19:51:16.642266 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 8 19:51:16.647256 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 8 19:51:16.648636 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 8 19:51:16.649319 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 8 19:51:16.651369 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 8 19:51:16.653555 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 8 19:51:16.653808 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 8 19:51:16.655871 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 8 19:51:16.656143 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 8 19:51:16.657903 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 8 19:51:16.658183 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 8 19:51:16.660523 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 8 19:51:16.660794 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 8 19:51:16.667825 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Oct 8 19:51:16.675433 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 8 19:51:16.675513 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 8 19:51:16.675540 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 8 19:51:16.686379 systemd-udevd[1372]: Using default interface naming scheme 'v255'. Oct 8 19:51:16.698902 systemd-resolved[1334]: Positive Trust Anchors: Oct 8 19:51:16.698928 systemd-resolved[1334]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 8 19:51:16.698966 systemd-resolved[1334]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 8 19:51:16.704228 systemd-resolved[1334]: Defaulting to hostname 'linux'. Oct 8 19:51:16.706501 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 8 19:51:16.707913 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 8 19:51:16.709856 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 8 19:51:16.723280 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Oct 8 19:51:16.752363 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 8 19:51:16.754304 systemd[1]: Reached target time-set.target - System Time Set. Oct 8 19:51:16.793050 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1393) Oct 8 19:51:16.797349 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1393) Oct 8 19:51:16.806714 systemd-networkd[1387]: lo: Link UP Oct 8 19:51:16.807183 systemd-networkd[1387]: lo: Gained carrier Oct 8 19:51:16.809940 systemd-networkd[1387]: Enumeration completed Oct 8 19:51:16.810809 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 19:51:16.811023 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 8 19:51:16.811145 systemd-networkd[1387]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 8 19:51:16.812788 systemd[1]: Reached target network.target - Network. Oct 8 19:51:16.813727 systemd-networkd[1387]: eth0: Link UP Oct 8 19:51:16.813733 systemd-networkd[1387]: eth0: Gained carrier Oct 8 19:51:16.813752 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 19:51:16.825414 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 8 19:51:16.827320 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 19:51:16.866340 systemd-networkd[1387]: eth0: DHCPv4 address 10.0.0.24/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 8 19:51:16.867235 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Oct 8 19:51:16.869108 systemd-timesyncd[1370]: Network configuration changed, trying to establish connection. Oct 8 19:51:17.611463 systemd-timesyncd[1370]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 8 19:51:17.611532 systemd-timesyncd[1370]: Initial clock synchronization to Tue 2024-10-08 19:51:17.611345 UTC. Oct 8 19:51:17.613557 systemd-resolved[1334]: Clock change detected. Flushing caches. Oct 8 19:51:17.617294 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1393) Oct 8 19:51:17.626341 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Oct 8 19:51:17.636347 kernel: ACPI: button: Power Button [PWRF] Oct 8 19:51:17.653141 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Oct 8 19:51:17.653574 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Oct 8 19:51:17.653779 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Oct 8 19:51:17.654026 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Oct 8 19:51:17.666299 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Oct 8 19:51:17.692631 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 8 19:51:17.737156 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 8 19:51:17.799469 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 19:51:17.802281 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 8 19:51:17.810285 kernel: mousedev: PS/2 mouse device common for all mice Oct 8 19:51:17.839726 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 8 19:51:17.840257 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Oct 8 19:51:17.847834 kernel: kvm_amd: TSC scaling supported Oct 8 19:51:17.847897 kernel: kvm_amd: Nested Virtualization enabled Oct 8 19:51:17.847915 kernel: kvm_amd: Nested Paging enabled Oct 8 19:51:17.848361 kernel: kvm_amd: LBR virtualization supported Oct 8 19:51:17.849621 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Oct 8 19:51:17.849667 kernel: kvm_amd: Virtual GIF supported Oct 8 19:51:17.865686 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 19:51:17.877340 kernel: EDAC MC: Ver: 3.0.0 Oct 8 19:51:17.912702 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Oct 8 19:51:17.921537 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Oct 8 19:51:17.926701 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 19:51:17.935034 lvm[1426]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 8 19:51:17.979402 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Oct 8 19:51:17.981100 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 8 19:51:17.982289 systemd[1]: Reached target sysinit.target - System Initialization. Oct 8 19:51:17.983557 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 8 19:51:17.984934 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 8 19:51:17.986609 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 8 19:51:17.988040 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 8 19:51:17.989606 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Oct 8 19:51:17.990965 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 8 19:51:17.990999 systemd[1]: Reached target paths.target - Path Units. Oct 8 19:51:17.991966 systemd[1]: Reached target timers.target - Timer Units. Oct 8 19:51:17.993662 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 8 19:51:17.996757 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 8 19:51:18.007207 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 8 19:51:18.010252 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Oct 8 19:51:18.012060 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 8 19:51:18.013550 systemd[1]: Reached target sockets.target - Socket Units. Oct 8 19:51:18.014569 systemd[1]: Reached target basic.target - Basic System. Oct 8 19:51:18.015569 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 8 19:51:18.015597 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 8 19:51:18.016778 systemd[1]: Starting containerd.service - containerd container runtime... Oct 8 19:51:18.019008 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 8 19:51:18.022411 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 8 19:51:18.028884 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 8 19:51:18.031470 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 8 19:51:18.034360 jq[1435]: false Oct 8 19:51:18.035711 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Oct 8 19:51:18.036605 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 8 19:51:18.040060 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 8 19:51:18.045723 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 8 19:51:18.061378 extend-filesystems[1436]: Found loop3 Oct 8 19:51:18.061378 extend-filesystems[1436]: Found loop4 Oct 8 19:51:18.061378 extend-filesystems[1436]: Found loop5 Oct 8 19:51:18.061378 extend-filesystems[1436]: Found sr0 Oct 8 19:51:18.061378 extend-filesystems[1436]: Found vda Oct 8 19:51:18.061378 extend-filesystems[1436]: Found vda1 Oct 8 19:51:18.061378 extend-filesystems[1436]: Found vda2 Oct 8 19:51:18.061378 extend-filesystems[1436]: Found vda3 Oct 8 19:51:18.061378 extend-filesystems[1436]: Found usr Oct 8 19:51:18.061378 extend-filesystems[1436]: Found vda4 Oct 8 19:51:18.061378 extend-filesystems[1436]: Found vda6 Oct 8 19:51:18.061378 extend-filesystems[1436]: Found vda7 Oct 8 19:51:18.061378 extend-filesystems[1436]: Found vda9 Oct 8 19:51:18.061378 extend-filesystems[1436]: Checking size of /dev/vda9 Oct 8 19:51:18.113914 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Oct 8 19:51:18.051597 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 8 19:51:18.114214 extend-filesystems[1436]: Resized partition /dev/vda9 Oct 8 19:51:18.117003 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1385) Oct 8 19:51:18.076029 dbus-daemon[1434]: [system] SELinux support is enabled Oct 8 19:51:18.070528 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 8 19:51:18.123429 extend-filesystems[1456]: resize2fs 1.47.1 (20-May-2024) Oct 8 19:51:18.073983 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Oct 8 19:51:18.074839 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 8 19:51:18.076005 systemd[1]: Starting update-engine.service - Update Engine... Oct 8 19:51:18.129846 update_engine[1450]: I20241008 19:51:18.096227 1450 main.cc:92] Flatcar Update Engine starting Oct 8 19:51:18.129846 update_engine[1450]: I20241008 19:51:18.103621 1450 update_check_scheduler.cc:74] Next update check in 10m16s Oct 8 19:51:18.080431 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 8 19:51:18.086419 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 8 19:51:18.130524 jq[1451]: true Oct 8 19:51:18.090449 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Oct 8 19:51:18.101171 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 8 19:51:18.101518 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 8 19:51:18.102012 systemd[1]: motdgen.service: Deactivated successfully. Oct 8 19:51:18.102927 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 8 19:51:18.118296 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 8 19:51:18.118633 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Oct 8 19:51:18.143965 (ntainerd)[1462]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 8 19:51:18.152761 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Oct 8 19:51:18.152847 jq[1461]: true Oct 8 19:51:18.177313 extend-filesystems[1456]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 8 19:51:18.177313 extend-filesystems[1456]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 8 19:51:18.177313 extend-filesystems[1456]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Oct 8 19:51:18.185316 extend-filesystems[1436]: Resized filesystem in /dev/vda9 Oct 8 19:51:18.179978 systemd-logind[1443]: Watching system buttons on /dev/input/event1 (Power Button) Oct 8 19:51:18.191119 tar[1459]: linux-amd64/helm Oct 8 19:51:18.180001 systemd-logind[1443]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 8 19:51:18.182660 systemd-logind[1443]: New seat seat0. Oct 8 19:51:18.183720 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 8 19:51:18.184510 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 8 19:51:18.200897 systemd[1]: Started systemd-logind.service - User Login Management. Oct 8 19:51:18.205064 systemd[1]: Started update-engine.service - Update Engine. Oct 8 19:51:18.216041 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 8 19:51:18.216329 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 8 19:51:18.218194 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Oct 8 19:51:18.218371 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 8 19:51:18.232908 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 8 19:51:18.285397 bash[1489]: Updated "/home/core/.ssh/authorized_keys" Oct 8 19:51:18.290381 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 8 19:51:18.295721 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 8 19:51:18.306383 locksmithd[1490]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 8 19:51:18.374407 sshd_keygen[1458]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 8 19:51:18.414547 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 8 19:51:18.481094 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 8 19:51:18.493494 systemd[1]: issuegen.service: Deactivated successfully. Oct 8 19:51:18.493864 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 8 19:51:18.502676 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 8 19:51:18.599212 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 8 19:51:18.678763 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 8 19:51:18.684533 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 8 19:51:18.686599 systemd[1]: Reached target getty.target - Login Prompts. Oct 8 19:51:18.764955 systemd-networkd[1387]: eth0: Gained IPv6LL Oct 8 19:51:18.770530 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 8 19:51:18.773983 containerd[1462]: time="2024-10-08T19:51:18.773877511Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Oct 8 19:51:18.776505 systemd[1]: Reached target network-online.target - Network is Online. 
Oct 8 19:51:18.784541 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 8 19:51:18.790079 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:51:18.795434 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 8 19:51:18.805329 containerd[1462]: time="2024-10-08T19:51:18.805216262Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 8 19:51:18.840788 containerd[1462]: time="2024-10-08T19:51:18.840706279Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:51:18.840788 containerd[1462]: time="2024-10-08T19:51:18.840768596Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 8 19:51:18.840788 containerd[1462]: time="2024-10-08T19:51:18.840798372Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 8 19:51:18.841051 containerd[1462]: time="2024-10-08T19:51:18.841020438Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Oct 8 19:51:18.841051 containerd[1462]: time="2024-10-08T19:51:18.841048631Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Oct 8 19:51:18.841172 containerd[1462]: time="2024-10-08T19:51:18.841144982Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:51:18.841172 containerd[1462]: time="2024-10-08T19:51:18.841166823Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Oct 8 19:51:18.841450 containerd[1462]: time="2024-10-08T19:51:18.841418986Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:51:18.841450 containerd[1462]: time="2024-10-08T19:51:18.841441187Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 8 19:51:18.841539 containerd[1462]: time="2024-10-08T19:51:18.841458159Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:51:18.841539 containerd[1462]: time="2024-10-08T19:51:18.841479599Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 8 19:51:18.841643 containerd[1462]: time="2024-10-08T19:51:18.841609182Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 8 19:51:18.841944 containerd[1462]: time="2024-10-08T19:51:18.841914966Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 8 19:51:18.842752 containerd[1462]: time="2024-10-08T19:51:18.842048767Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:51:18.842752 containerd[1462]: time="2024-10-08T19:51:18.842068934Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Oct 8 19:51:18.842752 containerd[1462]: time="2024-10-08T19:51:18.842181956Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 8 19:51:18.842752 containerd[1462]: time="2024-10-08T19:51:18.842250284Z" level=info msg="metadata content store policy set" policy=shared Oct 8 19:51:18.856086 containerd[1462]: time="2024-10-08T19:51:18.855734652Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 8 19:51:18.856086 containerd[1462]: time="2024-10-08T19:51:18.855819462Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 8 19:51:18.856086 containerd[1462]: time="2024-10-08T19:51:18.855838267Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Oct 8 19:51:18.856086 containerd[1462]: time="2024-10-08T19:51:18.855857052Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Oct 8 19:51:18.856086 containerd[1462]: time="2024-10-08T19:51:18.855875817Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 8 19:51:18.856086 containerd[1462]: time="2024-10-08T19:51:18.856051757Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 8 19:51:18.857621 containerd[1462]: time="2024-10-08T19:51:18.856830918Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 8 19:51:18.857621 containerd[1462]: time="2024-10-08T19:51:18.857037506Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Oct 8 19:51:18.857621 containerd[1462]: time="2024-10-08T19:51:18.857054127Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Oct 8 19:51:18.857621 containerd[1462]: time="2024-10-08T19:51:18.857067662Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Oct 8 19:51:18.857621 containerd[1462]: time="2024-10-08T19:51:18.857081939Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 8 19:51:18.857621 containerd[1462]: time="2024-10-08T19:51:18.857101396Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 8 19:51:18.857621 containerd[1462]: time="2024-10-08T19:51:18.857118838Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 8 19:51:18.857621 containerd[1462]: time="2024-10-08T19:51:18.857133185Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 8 19:51:18.857621 containerd[1462]: time="2024-10-08T19:51:18.857148133Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 8 19:51:18.857621 containerd[1462]: time="2024-10-08T19:51:18.857164294Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 8 19:51:18.857621 containerd[1462]: time="2024-10-08T19:51:18.857176797Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 8 19:51:18.857621 containerd[1462]: time="2024-10-08T19:51:18.857191204Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 8 19:51:18.857621 containerd[1462]: time="2024-10-08T19:51:18.857220890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Oct 8 19:51:18.857621 containerd[1462]: time="2024-10-08T19:51:18.857236409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 8 19:51:18.858007 containerd[1462]: time="2024-10-08T19:51:18.857253721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 8 19:51:18.858007 containerd[1462]: time="2024-10-08T19:51:18.857293245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 8 19:51:18.858007 containerd[1462]: time="2024-10-08T19:51:18.857308033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 8 19:51:18.858007 containerd[1462]: time="2024-10-08T19:51:18.857324123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 8 19:51:18.858007 containerd[1462]: time="2024-10-08T19:51:18.857345914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 8 19:51:18.858007 containerd[1462]: time="2024-10-08T19:51:18.857365591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 8 19:51:18.858007 containerd[1462]: time="2024-10-08T19:51:18.857380930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Oct 8 19:51:18.858007 containerd[1462]: time="2024-10-08T19:51:18.857401539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Oct 8 19:51:18.858007 containerd[1462]: time="2024-10-08T19:51:18.857413912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 8 19:51:18.858007 containerd[1462]: time="2024-10-08T19:51:18.857425534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Oct 8 19:51:18.858007 containerd[1462]: time="2024-10-08T19:51:18.857439790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 8 19:51:18.858007 containerd[1462]: time="2024-10-08T19:51:18.857454638Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Oct 8 19:51:18.858007 containerd[1462]: time="2024-10-08T19:51:18.857490806Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Oct 8 19:51:18.858007 containerd[1462]: time="2024-10-08T19:51:18.857502227Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 8 19:51:18.858007 containerd[1462]: time="2024-10-08T19:51:18.857516464Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 8 19:51:18.862097 containerd[1462]: time="2024-10-08T19:51:18.860383801Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 8 19:51:18.862097 containerd[1462]: time="2024-10-08T19:51:18.860446709Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 8 19:51:18.862097 containerd[1462]: time="2024-10-08T19:51:18.860468640Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 8 19:51:18.862097 containerd[1462]: time="2024-10-08T19:51:18.860482627Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 8 19:51:18.862097 containerd[1462]: time="2024-10-08T19:51:18.860500630Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Oct 8 19:51:18.862097 containerd[1462]: time="2024-10-08T19:51:18.860516540Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Oct 8 19:51:18.862097 containerd[1462]: time="2024-10-08T19:51:18.860548971Z" level=info msg="NRI interface is disabled by configuration." Oct 8 19:51:18.862097 containerd[1462]: time="2024-10-08T19:51:18.860561004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Oct 8 19:51:18.862388 containerd[1462]: time="2024-10-08T19:51:18.860944142Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 8 19:51:18.862388 containerd[1462]: time="2024-10-08T19:51:18.861035594Z" level=info msg="Connect containerd service" Oct 8 19:51:18.862388 containerd[1462]: time="2024-10-08T19:51:18.861119872Z" level=info msg="using legacy CRI server" Oct 8 19:51:18.862388 containerd[1462]: time="2024-10-08T19:51:18.861129289Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 8 19:51:18.864328 containerd[1462]: time="2024-10-08T19:51:18.864147289Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 8 19:51:18.866941 containerd[1462]: time="2024-10-08T19:51:18.865814045Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to 
load cni config" Oct 8 19:51:18.866941 containerd[1462]: time="2024-10-08T19:51:18.866501073Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 8 19:51:18.866941 containerd[1462]: time="2024-10-08T19:51:18.866587696Z" level=info msg="Start subscribing containerd event" Oct 8 19:51:18.866941 containerd[1462]: time="2024-10-08T19:51:18.866684698Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 8 19:51:18.866941 containerd[1462]: time="2024-10-08T19:51:18.866688996Z" level=info msg="Start recovering state" Oct 8 19:51:18.866941 containerd[1462]: time="2024-10-08T19:51:18.866868923Z" level=info msg="Start event monitor" Oct 8 19:51:18.866941 containerd[1462]: time="2024-10-08T19:51:18.866895834Z" level=info msg="Start snapshots syncer" Oct 8 19:51:18.866941 containerd[1462]: time="2024-10-08T19:51:18.866914148Z" level=info msg="Start cni network conf syncer for default" Oct 8 19:51:18.866811 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 8 19:51:18.867423 containerd[1462]: time="2024-10-08T19:51:18.866953041Z" level=info msg="Start streaming server" Oct 8 19:51:18.867423 containerd[1462]: time="2024-10-08T19:51:18.867058799Z" level=info msg="containerd successfully booted in 0.096389s" Oct 8 19:51:18.869455 systemd[1]: Started containerd.service - containerd container runtime. Oct 8 19:51:18.871781 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 8 19:51:18.872054 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 8 19:51:18.876248 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 8 19:51:18.946090 tar[1459]: linux-amd64/LICENSE Oct 8 19:51:18.946090 tar[1459]: linux-amd64/README.md Oct 8 19:51:18.961313 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 8 19:51:20.231433 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 8 19:51:20.233451 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 8 19:51:20.235459 systemd[1]: Startup finished in 1.310s (kernel) + 6.473s (initrd) + 5.344s (userspace) = 13.128s. Oct 8 19:51:20.239038 (kubelet)[1548]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:51:20.946850 kubelet[1548]: E1008 19:51:20.946779 1548 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:51:20.950900 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:51:20.951164 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:51:20.951601 systemd[1]: kubelet.service: Consumed 1.987s CPU time. Oct 8 19:51:21.385225 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 8 19:51:21.386888 systemd[1]: Started sshd@0-10.0.0.24:22-10.0.0.1:34856.service - OpenSSH per-connection server daemon (10.0.0.1:34856). Oct 8 19:51:21.431240 sshd[1561]: Accepted publickey for core from 10.0.0.1 port 34856 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 19:51:21.433791 sshd[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:51:21.445513 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 8 19:51:21.455678 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 8 19:51:21.457890 systemd-logind[1443]: New session 1 of user core. Oct 8 19:51:21.469992 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 8 19:51:21.473148 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Oct 8 19:51:21.484140 (systemd)[1565]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:51:21.620312 systemd[1565]: Queued start job for default target default.target. Oct 8 19:51:21.631944 systemd[1565]: Created slice app.slice - User Application Slice. Oct 8 19:51:21.631978 systemd[1565]: Reached target paths.target - Paths. Oct 8 19:51:21.631996 systemd[1565]: Reached target timers.target - Timers. Oct 8 19:51:21.633905 systemd[1565]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 8 19:51:21.649417 systemd[1565]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 8 19:51:21.649617 systemd[1565]: Reached target sockets.target - Sockets. Oct 8 19:51:21.649646 systemd[1565]: Reached target basic.target - Basic System. Oct 8 19:51:21.649720 systemd[1565]: Reached target default.target - Main User Target. Oct 8 19:51:21.649787 systemd[1565]: Startup finished in 156ms. Oct 8 19:51:21.650131 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 8 19:51:21.666581 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 8 19:51:21.730515 systemd[1]: Started sshd@1-10.0.0.24:22-10.0.0.1:34860.service - OpenSSH per-connection server daemon (10.0.0.1:34860). Oct 8 19:51:21.780441 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 34860 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 19:51:21.782473 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:51:21.789640 systemd-logind[1443]: New session 2 of user core. Oct 8 19:51:21.806572 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 8 19:51:21.865844 sshd[1576]: pam_unix(sshd:session): session closed for user core Oct 8 19:51:21.880678 systemd[1]: sshd@1-10.0.0.24:22-10.0.0.1:34860.service: Deactivated successfully. Oct 8 19:51:21.882530 systemd[1]: session-2.scope: Deactivated successfully. 
Oct 8 19:51:21.883987 systemd-logind[1443]: Session 2 logged out. Waiting for processes to exit. Oct 8 19:51:21.897750 systemd[1]: Started sshd@2-10.0.0.24:22-10.0.0.1:34872.service - OpenSSH per-connection server daemon (10.0.0.1:34872). Oct 8 19:51:21.898791 systemd-logind[1443]: Removed session 2. Oct 8 19:51:21.924898 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 34872 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 19:51:21.926902 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:51:21.932523 systemd-logind[1443]: New session 3 of user core. Oct 8 19:51:21.945537 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 8 19:51:22.000015 sshd[1583]: pam_unix(sshd:session): session closed for user core Oct 8 19:51:22.013225 systemd[1]: sshd@2-10.0.0.24:22-10.0.0.1:34872.service: Deactivated successfully. Oct 8 19:51:22.016712 systemd[1]: session-3.scope: Deactivated successfully. Oct 8 19:51:22.019418 systemd-logind[1443]: Session 3 logged out. Waiting for processes to exit. Oct 8 19:51:22.030782 systemd[1]: Started sshd@3-10.0.0.24:22-10.0.0.1:34886.service - OpenSSH per-connection server daemon (10.0.0.1:34886). Oct 8 19:51:22.032241 systemd-logind[1443]: Removed session 3. Oct 8 19:51:22.063207 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 34886 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 19:51:22.064972 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:51:22.069646 systemd-logind[1443]: New session 4 of user core. Oct 8 19:51:22.079558 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 8 19:51:22.141332 sshd[1591]: pam_unix(sshd:session): session closed for user core Oct 8 19:51:22.154045 systemd[1]: sshd@3-10.0.0.24:22-10.0.0.1:34886.service: Deactivated successfully. Oct 8 19:51:22.156705 systemd[1]: session-4.scope: Deactivated successfully. 
Oct 8 19:51:22.159856 systemd-logind[1443]: Session 4 logged out. Waiting for processes to exit. Oct 8 19:51:22.169639 systemd[1]: Started sshd@4-10.0.0.24:22-10.0.0.1:34900.service - OpenSSH per-connection server daemon (10.0.0.1:34900). Oct 8 19:51:22.171131 systemd-logind[1443]: Removed session 4. Oct 8 19:51:22.203066 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 34900 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 19:51:22.205030 sshd[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:51:22.210773 systemd-logind[1443]: New session 5 of user core. Oct 8 19:51:22.219601 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 8 19:51:22.288872 sudo[1601]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 8 19:51:22.289342 sudo[1601]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 19:51:22.307884 sudo[1601]: pam_unix(sudo:session): session closed for user root Oct 8 19:51:22.310806 sshd[1598]: pam_unix(sshd:session): session closed for user core Oct 8 19:51:22.324662 systemd[1]: sshd@4-10.0.0.24:22-10.0.0.1:34900.service: Deactivated successfully. Oct 8 19:51:22.327163 systemd[1]: session-5.scope: Deactivated successfully. Oct 8 19:51:22.329215 systemd-logind[1443]: Session 5 logged out. Waiting for processes to exit. Oct 8 19:51:22.339927 systemd[1]: Started sshd@5-10.0.0.24:22-10.0.0.1:34910.service - OpenSSH per-connection server daemon (10.0.0.1:34910). Oct 8 19:51:22.341217 systemd-logind[1443]: Removed session 5. Oct 8 19:51:22.373518 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 34910 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 19:51:22.375555 sshd[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:51:22.380246 systemd-logind[1443]: New session 6 of user core. 
Oct 8 19:51:22.390443 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 8 19:51:22.448519 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 8 19:51:22.448903 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 19:51:22.453379 sudo[1610]: pam_unix(sudo:session): session closed for user root Oct 8 19:51:22.461010 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 8 19:51:22.461390 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 19:51:22.480729 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Oct 8 19:51:22.483391 auditctl[1613]: No rules Oct 8 19:51:22.484903 systemd[1]: audit-rules.service: Deactivated successfully. Oct 8 19:51:22.485197 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Oct 8 19:51:22.487450 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 8 19:51:22.528378 augenrules[1631]: No rules Oct 8 19:51:22.530880 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 8 19:51:22.532339 sudo[1609]: pam_unix(sudo:session): session closed for user root Oct 8 19:51:22.534789 sshd[1606]: pam_unix(sshd:session): session closed for user core Oct 8 19:51:22.547288 systemd[1]: sshd@5-10.0.0.24:22-10.0.0.1:34910.service: Deactivated successfully. Oct 8 19:51:22.549868 systemd[1]: session-6.scope: Deactivated successfully. Oct 8 19:51:22.552611 systemd-logind[1443]: Session 6 logged out. Waiting for processes to exit. Oct 8 19:51:22.562804 systemd[1]: Started sshd@6-10.0.0.24:22-10.0.0.1:34922.service - OpenSSH per-connection server daemon (10.0.0.1:34922). Oct 8 19:51:22.564352 systemd-logind[1443]: Removed session 6. 
Oct 8 19:51:22.596413 sshd[1639]: Accepted publickey for core from 10.0.0.1 port 34922 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 19:51:22.598832 sshd[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:51:22.605860 systemd-logind[1443]: New session 7 of user core. Oct 8 19:51:22.621614 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 8 19:51:22.679963 sudo[1642]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 8 19:51:22.680446 sudo[1642]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 19:51:23.273980 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 8 19:51:23.274194 (dockerd)[1660]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 8 19:51:24.112355 dockerd[1660]: time="2024-10-08T19:51:24.112208487Z" level=info msg="Starting up" Oct 8 19:51:26.118772 dockerd[1660]: time="2024-10-08T19:51:26.118661222Z" level=info msg="Loading containers: start." Oct 8 19:51:26.363308 kernel: Initializing XFRM netlink socket Oct 8 19:51:26.462344 systemd-networkd[1387]: docker0: Link UP Oct 8 19:51:26.648031 dockerd[1660]: time="2024-10-08T19:51:26.647965888Z" level=info msg="Loading containers: done." Oct 8 19:51:26.669403 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1641581502-merged.mount: Deactivated successfully. 
Oct 8 19:51:26.712440 dockerd[1660]: time="2024-10-08T19:51:26.712335003Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 8 19:51:26.712725 dockerd[1660]: time="2024-10-08T19:51:26.712587616Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Oct 8 19:51:26.712830 dockerd[1660]: time="2024-10-08T19:51:26.712790757Z" level=info msg="Daemon has completed initialization" Oct 8 19:51:26.776594 dockerd[1660]: time="2024-10-08T19:51:26.776470700Z" level=info msg="API listen on /run/docker.sock" Oct 8 19:51:26.776875 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 8 19:51:27.427592 containerd[1462]: time="2024-10-08T19:51:27.427519559Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.0\"" Oct 8 19:51:28.313298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3437269199.mount: Deactivated successfully. 
Oct 8 19:51:31.093424 containerd[1462]: time="2024-10-08T19:51:31.093337045Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:51:31.094420 containerd[1462]: time="2024-10-08T19:51:31.094306934Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.0: active requests=0, bytes read=28066621" Oct 8 19:51:31.095628 containerd[1462]: time="2024-10-08T19:51:31.095593337Z" level=info msg="ImageCreate event name:\"sha256:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:51:31.099726 containerd[1462]: time="2024-10-08T19:51:31.099686723Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:51:31.101249 containerd[1462]: time="2024-10-08T19:51:31.101204950Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.0\" with image id \"sha256:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.0\", repo digest \"registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf\", size \"28063421\" in 3.673608537s" Oct 8 19:51:31.101311 containerd[1462]: time="2024-10-08T19:51:31.101289940Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.0\" returns image reference \"sha256:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3\"" Oct 8 19:51:31.103996 containerd[1462]: time="2024-10-08T19:51:31.103969825Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.0\"" Oct 8 19:51:31.201621 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Oct 8 19:51:31.214603 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:51:31.465690 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:51:31.474866 (kubelet)[1869]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:51:31.623467 kubelet[1869]: E1008 19:51:31.623407 1869 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:51:31.631148 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:51:31.631470 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:51:34.218444 containerd[1462]: time="2024-10-08T19:51:34.218367433Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:51:34.284809 containerd[1462]: time="2024-10-08T19:51:34.284691895Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.0: active requests=0, bytes read=24690922" Oct 8 19:51:34.367653 containerd[1462]: time="2024-10-08T19:51:34.367575671Z" level=info msg="ImageCreate event name:\"sha256:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:51:34.413217 containerd[1462]: time="2024-10-08T19:51:34.413139871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:51:34.414716 containerd[1462]: time="2024-10-08T19:51:34.414640024Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.0\" with image id \"sha256:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.0\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d\", size \"26240868\" in 3.310630815s" Oct 8 19:51:34.414799 containerd[1462]: time="2024-10-08T19:51:34.414720475Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.0\" returns image reference \"sha256:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1\"" Oct 8 19:51:34.415484 containerd[1462]: time="2024-10-08T19:51:34.415428422Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.0\"" Oct 8 19:51:37.137795 containerd[1462]: time="2024-10-08T19:51:37.137672261Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:51:37.194059 containerd[1462]: time="2024-10-08T19:51:37.193927430Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.0: active requests=0, bytes read=18646758" Oct 8 19:51:37.258905 containerd[1462]: time="2024-10-08T19:51:37.258821089Z" level=info msg="ImageCreate event name:\"sha256:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:51:37.321861 containerd[1462]: time="2024-10-08T19:51:37.321775230Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:51:37.323094 containerd[1462]: time="2024-10-08T19:51:37.323060160Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.0\" with image id \"sha256:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94\", repo tag 
\"registry.k8s.io/kube-scheduler:v1.31.0\", repo digest \"registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808\", size \"20196722\" in 2.907588185s" Oct 8 19:51:37.323182 containerd[1462]: time="2024-10-08T19:51:37.323100756Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.0\" returns image reference \"sha256:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94\"" Oct 8 19:51:37.323752 containerd[1462]: time="2024-10-08T19:51:37.323693968Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.0\"" Oct 8 19:51:41.278886 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3936800246.mount: Deactivated successfully. Oct 8 19:51:41.855237 containerd[1462]: time="2024-10-08T19:51:41.855146079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:51:41.856001 containerd[1462]: time="2024-10-08T19:51:41.855951469Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.0: active requests=0, bytes read=30208881" Oct 8 19:51:41.857221 containerd[1462]: time="2024-10-08T19:51:41.857183630Z" level=info msg="ImageCreate event name:\"sha256:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:51:41.859751 containerd[1462]: time="2024-10-08T19:51:41.859721579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:51:41.860661 containerd[1462]: time="2024-10-08T19:51:41.860512172Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.0\" with image id \"sha256:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494\", repo tag \"registry.k8s.io/kube-proxy:v1.31.0\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe\", size \"30207900\" in 4.536775173s" Oct 8 19:51:41.860705 containerd[1462]: time="2024-10-08T19:51:41.860666461Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.0\" returns image reference \"sha256:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494\"" Oct 8 19:51:41.861195 containerd[1462]: time="2024-10-08T19:51:41.861171589Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Oct 8 19:51:41.881667 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 8 19:51:41.895614 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:51:42.070868 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:51:42.076150 (kubelet)[1902]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:51:42.668415 kubelet[1902]: E1008 19:51:42.668245 1902 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:51:42.673188 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:51:42.673433 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:51:44.465435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1141527433.mount: Deactivated successfully. 
Oct 8 19:51:46.761977 containerd[1462]: time="2024-10-08T19:51:46.761878503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:51:46.880305 containerd[1462]: time="2024-10-08T19:51:46.880171505Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Oct 8 19:51:46.977922 containerd[1462]: time="2024-10-08T19:51:46.977829759Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:51:47.114647 containerd[1462]: time="2024-10-08T19:51:47.114429302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:51:47.116196 containerd[1462]: time="2024-10-08T19:51:47.116130122Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 5.254922065s" Oct 8 19:51:47.116300 containerd[1462]: time="2024-10-08T19:51:47.116200103Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Oct 8 19:51:47.116967 containerd[1462]: time="2024-10-08T19:51:47.116937886Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 8 19:51:49.470576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2480588501.mount: Deactivated successfully. 
Oct 8 19:51:49.944166 containerd[1462]: time="2024-10-08T19:51:49.944082612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:51:50.011445 containerd[1462]: time="2024-10-08T19:51:50.011301199Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Oct 8 19:51:50.069692 containerd[1462]: time="2024-10-08T19:51:50.069574316Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:51:50.174115 containerd[1462]: time="2024-10-08T19:51:50.174030035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:51:50.175131 containerd[1462]: time="2024-10-08T19:51:50.175070749Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 3.058089771s" Oct 8 19:51:50.175189 containerd[1462]: time="2024-10-08T19:51:50.175136646Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Oct 8 19:51:50.175808 containerd[1462]: time="2024-10-08T19:51:50.175753725Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Oct 8 19:51:52.085116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2166766530.mount: Deactivated successfully. Oct 8 19:51:52.751761 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Oct 8 19:51:52.765607 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:51:52.925588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:51:52.930240 (kubelet)[1979]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:51:53.008127 kubelet[1979]: E1008 19:51:53.007944 1979 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:51:53.012762 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:51:53.013001 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:52:01.531723 containerd[1462]: time="2024-10-08T19:52:01.531613411Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:52:01.540404 containerd[1462]: time="2024-10-08T19:52:01.540257895Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56241740" Oct 8 19:52:01.545511 containerd[1462]: time="2024-10-08T19:52:01.545331975Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:52:01.558362 containerd[1462]: time="2024-10-08T19:52:01.558244670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:52:01.560656 containerd[1462]: time="2024-10-08T19:52:01.560526275Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" 
with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 11.384736851s" Oct 8 19:52:01.560656 containerd[1462]: time="2024-10-08T19:52:01.560629610Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Oct 8 19:52:03.125527 update_engine[1450]: I20241008 19:52:03.125377 1450 update_attempter.cc:509] Updating boot flags... Oct 8 19:52:03.157422 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Oct 8 19:52:03.168512 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:52:03.201304 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2060) Oct 8 19:52:03.251321 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2061) Oct 8 19:52:03.315327 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2061) Oct 8 19:52:03.418884 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 8 19:52:03.426607 (kubelet)[2074]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:52:03.526353 kubelet[2074]: E1008 19:52:03.526228 2074 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:52:03.531008 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:52:03.531281 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:52:04.267496 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:52:04.283620 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:52:04.317103 systemd[1]: Reloading requested from client PID 2090 ('systemctl') (unit session-7.scope)... Oct 8 19:52:04.317125 systemd[1]: Reloading... Oct 8 19:52:04.416112 zram_generator::config[2133]: No configuration found. Oct 8 19:52:07.071050 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 19:52:07.148896 systemd[1]: Reloading finished in 2831 ms. Oct 8 19:52:07.205584 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 8 19:52:07.205699 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 8 19:52:07.205983 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:52:07.207917 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:52:07.841699 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 8 19:52:07.855946 (kubelet)[2177]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 8 19:52:07.961616 kubelet[2177]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 19:52:07.961616 kubelet[2177]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 8 19:52:07.961616 kubelet[2177]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 19:52:07.962122 kubelet[2177]: I1008 19:52:07.961680 2177 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 8 19:52:08.215106 kubelet[2177]: I1008 19:52:08.214951 2177 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Oct 8 19:52:08.215106 kubelet[2177]: I1008 19:52:08.214997 2177 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 8 19:52:08.215300 kubelet[2177]: I1008 19:52:08.215281 2177 server.go:929] "Client rotation is on, will bootstrap in background" Oct 8 19:52:08.272722 kubelet[2177]: I1008 19:52:08.272637 2177 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 19:52:08.287503 kubelet[2177]: E1008 19:52:08.287434 2177 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://10.0.0.24:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError" Oct 8 19:52:08.302238 kubelet[2177]: E1008 19:52:08.302171 2177 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 8 19:52:08.302238 kubelet[2177]: I1008 19:52:08.302212 2177 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 8 19:52:08.328042 kubelet[2177]: I1008 19:52:08.327985 2177 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 8 19:52:08.328205 kubelet[2177]: I1008 19:52:08.328158 2177 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Oct 8 19:52:08.328447 kubelet[2177]: I1008 19:52:08.328395 2177 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 8 19:52:08.328668 kubelet[2177]: I1008 19:52:08.328437 2177 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 8 19:52:08.328800 kubelet[2177]: I1008 19:52:08.328684 2177 topology_manager.go:138] "Creating topology manager with none policy" Oct 8 19:52:08.328800 kubelet[2177]: I1008 19:52:08.328698 2177 container_manager_linux.go:300] "Creating device plugin manager" Oct 8 19:52:08.328880 kubelet[2177]: I1008 19:52:08.328865 2177 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:52:08.330923 kubelet[2177]: I1008 19:52:08.330873 2177 kubelet.go:408] "Attempting to 
sync node with API server" Oct 8 19:52:08.330923 kubelet[2177]: I1008 19:52:08.330902 2177 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 8 19:52:08.330923 kubelet[2177]: I1008 19:52:08.330939 2177 kubelet.go:314] "Adding apiserver pod source" Oct 8 19:52:08.331145 kubelet[2177]: I1008 19:52:08.330959 2177 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 8 19:52:08.344980 kubelet[2177]: W1008 19:52:08.344902 2177 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Oct 8 19:52:08.345046 kubelet[2177]: E1008 19:52:08.344985 2177 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError" Oct 8 19:52:08.346887 kubelet[2177]: W1008 19:52:08.346792 2177 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.24:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Oct 8 19:52:08.346887 kubelet[2177]: E1008 19:52:08.346878 2177 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.24:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError" Oct 8 19:52:08.349212 kubelet[2177]: I1008 19:52:08.349190 2177 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" 
version="v1.7.21" apiVersion="v1" Oct 8 19:52:08.355990 kubelet[2177]: I1008 19:52:08.355955 2177 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 8 19:52:08.356065 kubelet[2177]: W1008 19:52:08.356055 2177 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 8 19:52:08.356830 kubelet[2177]: I1008 19:52:08.356800 2177 server.go:1269] "Started kubelet" Oct 8 19:52:08.357675 kubelet[2177]: I1008 19:52:08.357327 2177 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 8 19:52:08.357833 kubelet[2177]: I1008 19:52:08.357816 2177 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 8 19:52:08.357939 kubelet[2177]: I1008 19:52:08.357905 2177 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 8 19:52:08.358490 kubelet[2177]: I1008 19:52:08.358464 2177 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 8 19:52:08.359000 kubelet[2177]: I1008 19:52:08.358971 2177 server.go:460] "Adding debug handlers to kubelet server" Oct 8 19:52:08.360106 kubelet[2177]: I1008 19:52:08.360083 2177 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 8 19:52:08.363556 kubelet[2177]: E1008 19:52:08.363538 2177 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 8 19:52:08.364537 kubelet[2177]: I1008 19:52:08.363783 2177 volume_manager.go:289] "Starting Kubelet Volume Manager" Oct 8 19:52:08.364537 kubelet[2177]: I1008 19:52:08.363853 2177 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 8 19:52:08.364537 kubelet[2177]: I1008 19:52:08.363902 2177 reconciler.go:26] "Reconciler: start to sync state" Oct 8 19:52:08.364537 kubelet[2177]: W1008 19:52:08.364144 2177 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Oct 8 19:52:08.364537 kubelet[2177]: E1008 19:52:08.364179 2177 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError" Oct 8 19:52:08.364537 kubelet[2177]: E1008 19:52:08.364412 2177 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:52:08.364537 kubelet[2177]: E1008 19:52:08.364463 2177 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.24:6443: connect: connection refused" interval="200ms" Oct 8 19:52:08.364793 kubelet[2177]: I1008 19:52:08.364644 2177 factory.go:221] Registration of the systemd container factory successfully Oct 8 19:52:08.364793 kubelet[2177]: I1008 19:52:08.364723 2177 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix 
/var/run/crio/crio.sock: connect: no such file or directory Oct 8 19:52:08.365452 kubelet[2177]: I1008 19:52:08.365429 2177 factory.go:221] Registration of the containerd container factory successfully Oct 8 19:52:08.434012 kubelet[2177]: I1008 19:52:08.433967 2177 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 8 19:52:08.435376 kubelet[2177]: I1008 19:52:08.435352 2177 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 8 19:52:08.435432 kubelet[2177]: I1008 19:52:08.435391 2177 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 8 19:52:08.435432 kubelet[2177]: I1008 19:52:08.435407 2177 kubelet.go:2321] "Starting kubelet main sync loop" Oct 8 19:52:08.435480 kubelet[2177]: E1008 19:52:08.435448 2177 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 8 19:52:08.451640 kubelet[2177]: W1008 19:52:08.451552 2177 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Oct 8 19:52:08.451746 kubelet[2177]: E1008 19:52:08.451649 2177 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError" Oct 8 19:52:08.452303 kubelet[2177]: I1008 19:52:08.452285 2177 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 8 19:52:08.452303 kubelet[2177]: I1008 19:52:08.452301 2177 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 8 19:52:08.452441 kubelet[2177]: I1008 19:52:08.452329 2177 state_mem.go:36] "Initialized new in-memory 
state store" Oct 8 19:52:08.453297 kubelet[2177]: E1008 19:52:08.451121 2177 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.24:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.24:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17fc92387f67f803 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-08 19:52:08.356771843 +0000 UTC m=+0.496051118,LastTimestamp:2024-10-08 19:52:08.356771843 +0000 UTC m=+0.496051118,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 8 19:52:08.464698 kubelet[2177]: E1008 19:52:08.464677 2177 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:52:08.536062 kubelet[2177]: E1008 19:52:08.535995 2177 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 8 19:52:08.565438 kubelet[2177]: E1008 19:52:08.565365 2177 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:52:08.565812 kubelet[2177]: E1008 19:52:08.565766 2177 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.24:6443: connect: connection refused" interval="400ms" Oct 8 19:52:08.666242 kubelet[2177]: E1008 19:52:08.666130 2177 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:52:08.736549 kubelet[2177]: E1008 19:52:08.736430 2177 kubelet.go:2345] "Skipping pod 
synchronization" err="container runtime status check may not have completed yet" Oct 8 19:52:08.766891 kubelet[2177]: E1008 19:52:08.766794 2177 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:52:08.868114 kubelet[2177]: E1008 19:52:08.867936 2177 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:52:08.967057 kubelet[2177]: E1008 19:52:08.966987 2177 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.24:6443: connect: connection refused" interval="800ms" Oct 8 19:52:08.969133 kubelet[2177]: E1008 19:52:08.969091 2177 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:52:09.069697 kubelet[2177]: E1008 19:52:09.069627 2177 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:52:09.137010 kubelet[2177]: E1008 19:52:09.136817 2177 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 8 19:52:09.170231 kubelet[2177]: E1008 19:52:09.170192 2177 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:52:09.191945 kubelet[2177]: W1008 19:52:09.191869 2177 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Oct 8 19:52:09.191945 kubelet[2177]: E1008 19:52:09.191940 2177 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError" Oct 8 19:52:09.270776 kubelet[2177]: E1008 19:52:09.270696 2177 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:52:09.346528 kubelet[2177]: I1008 19:52:09.346448 2177 policy_none.go:49] "None policy: Start" Oct 8 19:52:09.347570 kubelet[2177]: I1008 19:52:09.347546 2177 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 8 19:52:09.347644 kubelet[2177]: I1008 19:52:09.347581 2177 state_mem.go:35] "Initializing new in-memory state store" Oct 8 19:52:09.359942 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 8 19:52:09.371420 kubelet[2177]: E1008 19:52:09.371338 2177 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:52:09.382178 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 8 19:52:09.387852 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Oct 8 19:52:09.399763 kubelet[2177]: I1008 19:52:09.399721 2177 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 19:52:09.400168 kubelet[2177]: I1008 19:52:09.400031 2177 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 8 19:52:09.400168 kubelet[2177]: I1008 19:52:09.400054 2177 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 8 19:52:09.400387 kubelet[2177]: I1008 19:52:09.400370 2177 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 19:52:09.401495 kubelet[2177]: E1008 19:52:09.401463 2177 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 8 19:52:09.502279 kubelet[2177]: I1008 19:52:09.502164 2177 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Oct 8 19:52:09.502756 kubelet[2177]: E1008 19:52:09.502721 2177 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.24:6443/api/v1/nodes\": dial tcp 10.0.0.24:6443: connect: connection refused" node="localhost" Oct 8 19:52:09.620173 kubelet[2177]: W1008 19:52:09.620062 2177 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.24:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Oct 8 19:52:09.620173 kubelet[2177]: E1008 19:52:09.620164 2177 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.24:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError" Oct 8 19:52:09.677405 kubelet[2177]: W1008 19:52:09.677116 2177 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Oct 8 19:52:09.677405 kubelet[2177]: E1008 19:52:09.677188 2177 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError" Oct 8 19:52:09.705572 kubelet[2177]: I1008 19:52:09.705525 2177 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Oct 8 19:52:09.706082 kubelet[2177]: E1008 19:52:09.706029 2177 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.24:6443/api/v1/nodes\": dial tcp 10.0.0.24:6443: connect: connection refused" node="localhost" Oct 8 19:52:09.768229 kubelet[2177]: E1008 19:52:09.768143 2177 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.24:6443: connect: connection refused" interval="1.6s" Oct 8 19:52:09.830009 kubelet[2177]: W1008 19:52:09.829759 2177 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Oct 8 19:52:09.830009 kubelet[2177]: E1008 19:52:09.829871 2177 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection 
refused" logger="UnhandledError" Oct 8 19:52:09.950341 systemd[1]: Created slice kubepods-burstable-pod950d1e64274d7725b71235a996e8735b.slice - libcontainer container kubepods-burstable-pod950d1e64274d7725b71235a996e8735b.slice. Oct 8 19:52:09.973117 kubelet[2177]: I1008 19:52:09.973056 2177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/950d1e64274d7725b71235a996e8735b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"950d1e64274d7725b71235a996e8735b\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:52:09.973117 kubelet[2177]: I1008 19:52:09.973101 2177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:52:09.973585 kubelet[2177]: I1008 19:52:09.973128 2177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:52:09.973585 kubelet[2177]: I1008 19:52:09.973153 2177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:52:09.973585 kubelet[2177]: I1008 19:52:09.973180 2177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:52:09.973585 kubelet[2177]: I1008 19:52:09.973259 2177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/950d1e64274d7725b71235a996e8735b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"950d1e64274d7725b71235a996e8735b\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:52:09.973585 kubelet[2177]: I1008 19:52:09.973303 2177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/950d1e64274d7725b71235a996e8735b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"950d1e64274d7725b71235a996e8735b\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:52:09.973366 systemd[1]: Created slice kubepods-burstable-pod344660bab292c4b91cf719f133c08ba2.slice - libcontainer container kubepods-burstable-pod344660bab292c4b91cf719f133c08ba2.slice. 
Oct 8 19:52:09.973785 kubelet[2177]: I1008 19:52:09.973328 2177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:52:09.973785 kubelet[2177]: I1008 19:52:09.973362 2177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1510be5a54dc8eef4f27b06886c891dc-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"1510be5a54dc8eef4f27b06886c891dc\") " pod="kube-system/kube-scheduler-localhost" Oct 8 19:52:09.990481 systemd[1]: Created slice kubepods-burstable-pod1510be5a54dc8eef4f27b06886c891dc.slice - libcontainer container kubepods-burstable-pod1510be5a54dc8eef4f27b06886c891dc.slice. 
Oct 8 19:52:10.108361 kubelet[2177]: I1008 19:52:10.108306 2177 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Oct 8 19:52:10.108759 kubelet[2177]: E1008 19:52:10.108715 2177 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.24:6443/api/v1/nodes\": dial tcp 10.0.0.24:6443: connect: connection refused" node="localhost" Oct 8 19:52:10.270085 kubelet[2177]: E1008 19:52:10.270013 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:10.271066 containerd[1462]: time="2024-10-08T19:52:10.270988136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:950d1e64274d7725b71235a996e8735b,Namespace:kube-system,Attempt:0,}" Oct 8 19:52:10.286743 kubelet[2177]: E1008 19:52:10.286667 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:10.287444 containerd[1462]: time="2024-10-08T19:52:10.287370975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:344660bab292c4b91cf719f133c08ba2,Namespace:kube-system,Attempt:0,}" Oct 8 19:52:10.294002 kubelet[2177]: E1008 19:52:10.293929 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:10.294772 containerd[1462]: time="2024-10-08T19:52:10.294721090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:1510be5a54dc8eef4f27b06886c891dc,Namespace:kube-system,Attempt:0,}" Oct 8 19:52:10.355141 kubelet[2177]: E1008 19:52:10.355081 2177 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a 
signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.24:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError"
Oct 8 19:52:10.910599 kubelet[2177]: I1008 19:52:10.910528 2177 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Oct 8 19:52:10.910876 kubelet[2177]: E1008 19:52:10.910851 2177 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.24:6443/api/v1/nodes\": dial tcp 10.0.0.24:6443: connect: connection refused" node="localhost"
Oct 8 19:52:11.158762 kubelet[2177]: W1008 19:52:11.158697 2177 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused
Oct 8 19:52:11.158762 kubelet[2177]: E1008 19:52:11.158757 2177 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError"
Oct 8 19:52:11.351239 kubelet[2177]: W1008 19:52:11.351160 2177 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.24:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused
Oct 8 19:52:11.351239 kubelet[2177]: E1008 19:52:11.351214 2177 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.24:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError"
Oct 8 19:52:11.368526 kubelet[2177]: E1008 19:52:11.368489 2177 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.24:6443: connect: connection refused" interval="3.2s"
Oct 8 19:52:11.505425 kubelet[2177]: W1008 19:52:11.505373 2177 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused
Oct 8 19:52:11.505425 kubelet[2177]: E1008 19:52:11.505422 2177 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError"
Oct 8 19:52:11.990473 kubelet[2177]: W1008 19:52:11.990400 2177 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused
Oct 8 19:52:11.990473 kubelet[2177]: E1008 19:52:11.990453 2177 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError"
Oct 8 19:52:12.512564 kubelet[2177]: I1008 19:52:12.512517 2177 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Oct 8 19:52:12.513091 kubelet[2177]: E1008 19:52:12.512988 2177 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.24:6443/api/v1/nodes\": dial tcp 10.0.0.24:6443: connect: connection refused" node="localhost"
Oct 8 19:52:13.710133 kubelet[2177]: E1008 19:52:13.709993 2177 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.24:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.24:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17fc92387f67f803 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-08 19:52:08.356771843 +0000 UTC m=+0.496051118,LastTimestamp:2024-10-08 19:52:08.356771843 +0000 UTC m=+0.496051118,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Oct 8 19:52:14.423338 kubelet[2177]: E1008 19:52:14.423248 2177 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.24:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError"
Oct 8 19:52:14.569472 kubelet[2177]: E1008 19:52:14.569393 2177 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.24:6443: connect: connection refused" interval="6.4s"
Oct 8 19:52:14.797258 kubelet[2177]: W1008 19:52:14.797183 2177 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused
Oct 8 19:52:14.797258 kubelet[2177]: E1008 19:52:14.797241 2177 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError"
Oct 8 19:52:15.691872 kubelet[2177]: W1008 19:52:15.691800 2177 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.24:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused
Oct 8 19:52:15.691872 kubelet[2177]: E1008 19:52:15.691868 2177 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.24:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError"
Oct 8 19:52:15.714617 kubelet[2177]: I1008 19:52:15.714571 2177 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Oct 8 19:52:15.715004 kubelet[2177]: E1008 19:52:15.714961 2177 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.24:6443/api/v1/nodes\": dial tcp 10.0.0.24:6443: connect: connection refused" node="localhost"
Oct 8 19:52:16.656839 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3425391040.mount: Deactivated successfully.
Oct 8 19:52:16.940134 containerd[1462]: time="2024-10-08T19:52:16.939910928Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 8 19:52:17.026510 kubelet[2177]: W1008 19:52:17.026454 2177 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused
Oct 8 19:52:17.026510 kubelet[2177]: E1008 19:52:17.026514 2177 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError"
Oct 8 19:52:17.044438 containerd[1462]: time="2024-10-08T19:52:17.044312773Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Oct 8 19:52:17.180691 containerd[1462]: time="2024-10-08T19:52:17.180628207Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 8 19:52:17.287408 containerd[1462]: time="2024-10-08T19:52:17.287327891Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 8 19:52:17.324488 kubelet[2177]: W1008 19:52:17.324410 2177 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused
Oct 8 19:52:17.324488 kubelet[2177]: E1008 19:52:17.324464 2177 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError"
Oct 8 19:52:17.417754 containerd[1462]: time="2024-10-08T19:52:17.417670021Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 8 19:52:17.454975 containerd[1462]: time="2024-10-08T19:52:17.454844742Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Oct 8 19:52:17.529159 containerd[1462]: time="2024-10-08T19:52:17.529059502Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Oct 8 19:52:17.614533 containerd[1462]: time="2024-10-08T19:52:17.614223180Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 8 19:52:17.616014 containerd[1462]: time="2024-10-08T19:52:17.615927170Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 7.321097516s"
Oct 8 19:52:17.617916 containerd[1462]: time="2024-10-08T19:52:17.617863940Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 7.346773471s"
Oct 8 19:52:17.723914 containerd[1462]: time="2024-10-08T19:52:17.723833018Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 7.436372644s"
Oct 8 19:52:19.016363 containerd[1462]: time="2024-10-08T19:52:19.016215864Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:52:19.016363 containerd[1462]: time="2024-10-08T19:52:19.016294883Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:52:19.016363 containerd[1462]: time="2024-10-08T19:52:19.016322254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:52:19.017334 containerd[1462]: time="2024-10-08T19:52:19.016453992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:52:19.018940 containerd[1462]: time="2024-10-08T19:52:19.018860244Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:52:19.019306 containerd[1462]: time="2024-10-08T19:52:19.019125734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:52:19.019306 containerd[1462]: time="2024-10-08T19:52:19.019147424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:52:19.019306 containerd[1462]: time="2024-10-08T19:52:19.019242634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:52:19.057098 containerd[1462]: time="2024-10-08T19:52:19.056308781Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:52:19.057098 containerd[1462]: time="2024-10-08T19:52:19.057021102Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:52:19.057098 containerd[1462]: time="2024-10-08T19:52:19.057092376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:52:19.057379 containerd[1462]: time="2024-10-08T19:52:19.057242590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:52:19.075584 systemd[1]: Started cri-containerd-6229400f15efc43d6b72d53213c8e4d8c982c881eb97d2dd3514eef49caa0da1.scope - libcontainer container 6229400f15efc43d6b72d53213c8e4d8c982c881eb97d2dd3514eef49caa0da1.
Oct 8 19:52:19.079751 systemd[1]: Started cri-containerd-23d9d392f713f8422f125f90b2749151832a191624ac36030ecb669c2817cf46.scope - libcontainer container 23d9d392f713f8422f125f90b2749151832a191624ac36030ecb669c2817cf46.
Oct 8 19:52:19.092976 systemd[1]: Started cri-containerd-57b82266cae3ad2dde276308d3d4f9b4f169ee7ccfea89a542f596eae1b9d2e1.scope - libcontainer container 57b82266cae3ad2dde276308d3d4f9b4f169ee7ccfea89a542f596eae1b9d2e1.
Oct 8 19:52:19.168855 containerd[1462]: time="2024-10-08T19:52:19.168803354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:1510be5a54dc8eef4f27b06886c891dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"23d9d392f713f8422f125f90b2749151832a191624ac36030ecb669c2817cf46\""
Oct 8 19:52:19.171937 kubelet[2177]: E1008 19:52:19.171895 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:52:19.177098 containerd[1462]: time="2024-10-08T19:52:19.176448691Z" level=info msg="CreateContainer within sandbox \"23d9d392f713f8422f125f90b2749151832a191624ac36030ecb669c2817cf46\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Oct 8 19:52:19.178427 containerd[1462]: time="2024-10-08T19:52:19.178392732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:950d1e64274d7725b71235a996e8735b,Namespace:kube-system,Attempt:0,} returns sandbox id \"6229400f15efc43d6b72d53213c8e4d8c982c881eb97d2dd3514eef49caa0da1\""
Oct 8 19:52:19.179159 kubelet[2177]: E1008 19:52:19.179132 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:52:19.179693 containerd[1462]: time="2024-10-08T19:52:19.179665028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:344660bab292c4b91cf719f133c08ba2,Namespace:kube-system,Attempt:0,} returns sandbox id \"57b82266cae3ad2dde276308d3d4f9b4f169ee7ccfea89a542f596eae1b9d2e1\""
Oct 8 19:52:19.180465 kubelet[2177]: E1008 19:52:19.180421 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:52:19.181019 containerd[1462]: time="2024-10-08T19:52:19.180981356Z" level=info msg="CreateContainer within sandbox \"6229400f15efc43d6b72d53213c8e4d8c982c881eb97d2dd3514eef49caa0da1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Oct 8 19:52:19.182055 containerd[1462]: time="2024-10-08T19:52:19.182022236Z" level=info msg="CreateContainer within sandbox \"57b82266cae3ad2dde276308d3d4f9b4f169ee7ccfea89a542f596eae1b9d2e1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Oct 8 19:52:19.401938 kubelet[2177]: E1008 19:52:19.401746 2177 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Oct 8 19:52:19.885590 containerd[1462]: time="2024-10-08T19:52:19.885500645Z" level=info msg="CreateContainer within sandbox \"23d9d392f713f8422f125f90b2749151832a191624ac36030ecb669c2817cf46\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6b979588d8598e0516eaa7d6d6a5916f9723bab9f80e3bc34c166b4dfd14309b\""
Oct 8 19:52:19.886530 containerd[1462]: time="2024-10-08T19:52:19.886494076Z" level=info msg="StartContainer for \"6b979588d8598e0516eaa7d6d6a5916f9723bab9f80e3bc34c166b4dfd14309b\""
Oct 8 19:52:19.895880 containerd[1462]: time="2024-10-08T19:52:19.895620712Z" level=info msg="CreateContainer within sandbox \"6229400f15efc43d6b72d53213c8e4d8c982c881eb97d2dd3514eef49caa0da1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a3df6676d44139671097b6cb7e877788946e63fc31d7ab3fb0bd9c3bf526717f\""
Oct 8 19:52:19.896401 containerd[1462]: time="2024-10-08T19:52:19.896313908Z" level=info msg="StartContainer for \"a3df6676d44139671097b6cb7e877788946e63fc31d7ab3fb0bd9c3bf526717f\""
Oct 8 19:52:19.898528 containerd[1462]: time="2024-10-08T19:52:19.898472442Z" level=info msg="CreateContainer within sandbox \"57b82266cae3ad2dde276308d3d4f9b4f169ee7ccfea89a542f596eae1b9d2e1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"578e42791a35fc91c7ad39b0381b791271a5faac8758b4f6a62de12030a1bd14\""
Oct 8 19:52:19.899224 containerd[1462]: time="2024-10-08T19:52:19.899160908Z" level=info msg="StartContainer for \"578e42791a35fc91c7ad39b0381b791271a5faac8758b4f6a62de12030a1bd14\""
Oct 8 19:52:19.927727 systemd[1]: Started cri-containerd-6b979588d8598e0516eaa7d6d6a5916f9723bab9f80e3bc34c166b4dfd14309b.scope - libcontainer container 6b979588d8598e0516eaa7d6d6a5916f9723bab9f80e3bc34c166b4dfd14309b.
Oct 8 19:52:19.935656 systemd[1]: Started cri-containerd-a3df6676d44139671097b6cb7e877788946e63fc31d7ab3fb0bd9c3bf526717f.scope - libcontainer container a3df6676d44139671097b6cb7e877788946e63fc31d7ab3fb0bd9c3bf526717f.
Oct 8 19:52:19.997510 systemd[1]: Started cri-containerd-578e42791a35fc91c7ad39b0381b791271a5faac8758b4f6a62de12030a1bd14.scope - libcontainer container 578e42791a35fc91c7ad39b0381b791271a5faac8758b4f6a62de12030a1bd14.
Oct 8 19:52:20.343170 containerd[1462]: time="2024-10-08T19:52:20.343106517Z" level=info msg="StartContainer for \"578e42791a35fc91c7ad39b0381b791271a5faac8758b4f6a62de12030a1bd14\" returns successfully"
Oct 8 19:52:20.344870 containerd[1462]: time="2024-10-08T19:52:20.344330281Z" level=info msg="StartContainer for \"6b979588d8598e0516eaa7d6d6a5916f9723bab9f80e3bc34c166b4dfd14309b\" returns successfully"
Oct 8 19:52:20.344870 containerd[1462]: time="2024-10-08T19:52:20.344444406Z" level=info msg="StartContainer for \"a3df6676d44139671097b6cb7e877788946e63fc31d7ab3fb0bd9c3bf526717f\" returns successfully"
Oct 8 19:52:20.476167 kubelet[2177]: E1008 19:52:20.475831 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:52:20.480767 kubelet[2177]: E1008 19:52:20.480397 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:52:20.481726 kubelet[2177]: E1008 19:52:20.481649 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:52:21.274828 kubelet[2177]: E1008 19:52:21.274782 2177 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Oct 8 19:52:21.529105 kubelet[2177]: E1008 19:52:21.528964 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:52:21.529105 kubelet[2177]: E1008 19:52:21.529047 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:52:21.529105 kubelet[2177]: E1008 19:52:21.529058 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:52:22.117325 kubelet[2177]: I1008 19:52:22.117283 2177 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Oct 8 19:52:22.529825 kubelet[2177]: E1008 19:52:22.529787 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:52:23.528044 kubelet[2177]: I1008 19:52:23.527984 2177 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Oct 8 19:52:23.528044 kubelet[2177]: E1008 19:52:23.528019 2177 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Oct 8 19:52:24.352235 kubelet[2177]: I1008 19:52:24.352164 2177 apiserver.go:52] "Watching apiserver"
Oct 8 19:52:24.364905 kubelet[2177]: I1008 19:52:24.364849 2177 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Oct 8 19:52:24.476408 kubelet[2177]: E1008 19:52:24.476258 2177 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.17fc92387f67f803 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-08 19:52:08.356771843 +0000 UTC m=+0.496051118,LastTimestamp:2024-10-08 19:52:08.356771843 +0000 UTC m=+0.496051118,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Oct 8 19:52:24.872179 kubelet[2177]: E1008 19:52:24.871795 2177 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.17fc92387fcf18fe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-08 19:52:08.363530494 +0000 UTC m=+0.502809769,LastTimestamp:2024-10-08 19:52:08.363530494 +0000 UTC m=+0.502809769,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Oct 8 19:52:25.474115 kubelet[2177]: E1008 19:52:25.473961 2177 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.17fc9238850a912c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-08 19:52:08.451313964 +0000 UTC m=+0.590593229,LastTimestamp:2024-10-08 19:52:08.451313964 +0000 UTC m=+0.590593229,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Oct 8 19:52:25.745483 kubelet[2177]: E1008 19:52:25.745325 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:52:26.537368 kubelet[2177]: E1008 19:52:26.537231 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:52:28.390295 kubelet[2177]: E1008 19:52:28.390195 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:52:28.540160 kubelet[2177]: E1008 19:52:28.540116 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:52:29.273644 kubelet[2177]: I1008 19:52:29.273550 2177 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.273526285 podStartE2EDuration="4.273526285s" podCreationTimestamp="2024-10-08 19:52:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:52:29.273509043 +0000 UTC m=+21.412788318" watchObservedRunningTime="2024-10-08 19:52:29.273526285 +0000 UTC m=+21.412805560"
Oct 8 19:52:29.273844 kubelet[2177]: I1008 19:52:29.273703 2177 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.273698078 podStartE2EDuration="1.273698078s" podCreationTimestamp="2024-10-08 19:52:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:52:28.939709886 +0000 UTC m=+21.078989161" watchObservedRunningTime="2024-10-08 19:52:29.273698078 +0000 UTC m=+21.412977353"
Oct 8 19:52:30.471864 kubelet[2177]: E1008 19:52:30.471799 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:52:30.543930 kubelet[2177]: E1008 19:52:30.543889 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:52:35.366762 systemd[1]: Reloading requested from client PID 2462 ('systemctl') (unit session-7.scope)...
Oct 8 19:52:35.366781 systemd[1]: Reloading...
Oct 8 19:52:35.458304 zram_generator::config[2501]: No configuration found.
Oct 8 19:52:35.487170 kubelet[2177]: E1008 19:52:35.487112 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:52:35.499413 kubelet[2177]: I1008 19:52:35.499213 2177 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=5.499194398 podStartE2EDuration="5.499194398s" podCreationTimestamp="2024-10-08 19:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:52:30.921052536 +0000 UTC m=+23.060331811" watchObservedRunningTime="2024-10-08 19:52:35.499194398 +0000 UTC m=+27.638473673"
Oct 8 19:52:35.551253 kubelet[2177]: E1008 19:52:35.551197 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:52:35.585089 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:52:35.693516 systemd[1]: Reloading finished in 326 ms.
Oct 8 19:52:35.750855 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:52:35.768854 systemd[1]: kubelet.service: Deactivated successfully.
Oct 8 19:52:35.769213 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:52:35.769297 systemd[1]: kubelet.service: Consumed 1.455s CPU time, 122.1M memory peak, 0B memory swap peak.
Oct 8 19:52:35.776555 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:52:35.949252 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:52:35.966162 (kubelet)[2546]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 8 19:52:36.018718 kubelet[2546]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 8 19:52:36.018718 kubelet[2546]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Oct 8 19:52:36.018718 kubelet[2546]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 8 19:52:36.019162 kubelet[2546]: I1008 19:52:36.018799 2546 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 8 19:52:36.025187 kubelet[2546]: I1008 19:52:36.025148 2546 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Oct 8 19:52:36.025187 kubelet[2546]: I1008 19:52:36.025179 2546 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 8 19:52:36.025615 kubelet[2546]: I1008 19:52:36.025587 2546 server.go:929] "Client rotation is on, will bootstrap in background"
Oct 8 19:52:36.027629 kubelet[2546]: I1008 19:52:36.027599 2546 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Oct 8 19:52:36.030872 kubelet[2546]: I1008 19:52:36.030827 2546 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 8 19:52:36.034323 kubelet[2546]: E1008 19:52:36.034295 2546 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Oct 8 19:52:36.034323 kubelet[2546]: I1008 19:52:36.034322 2546 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Oct 8 19:52:36.040876 kubelet[2546]: I1008 19:52:36.040856 2546 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 8 19:52:36.041027 kubelet[2546]: I1008 19:52:36.041001 2546 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Oct 8 19:52:36.041230 kubelet[2546]: I1008 19:52:36.041172 2546 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 8 19:52:36.041444 kubelet[2546]: I1008 19:52:36.041206 2546 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 8 19:52:36.041444 kubelet[2546]: I1008 19:52:36.041436 2546 topology_manager.go:138] "Creating topology manager with none policy"
Oct 8 19:52:36.041569 kubelet[2546]: I1008 19:52:36.041448 2546 container_manager_linux.go:300] "Creating device plugin manager"
Oct 8 19:52:36.041569 kubelet[2546]: I1008 19:52:36.041489 2546 state_mem.go:36] "Initialized new in-memory state store"
Oct 8 19:52:36.041635 kubelet[2546]: I1008 19:52:36.041617 2546 kubelet.go:408] "Attempting to sync node with API server"
Oct 8 19:52:36.041635 kubelet[2546]: I1008 19:52:36.041633 2546 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 8 19:52:36.041688 kubelet[2546]: I1008 19:52:36.041668 2546 kubelet.go:314] "Adding apiserver pod source"
Oct 8 19:52:36.041688 kubelet[2546]: I1008 19:52:36.041685 2546 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 8 19:52:36.042776 kubelet[2546]: I1008 19:52:36.042744 2546 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Oct 8 19:52:36.043242 kubelet[2546]: I1008 19:52:36.043205 2546 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 8 19:52:36.043728 kubelet[2546]: I1008 19:52:36.043705 2546 server.go:1269] "Started kubelet"
Oct 8 19:52:36.044969 kubelet[2546]: I1008 19:52:36.044207 2546 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Oct 8 19:52:36.044969 kubelet[2546]: I1008 19:52:36.044302 2546 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 8 19:52:36.044969 kubelet[2546]: I1008 19:52:36.044631 2546 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 8 19:52:36.047825 kubelet[2546]: I1008 19:52:36.047796 2546 server.go:460] "Adding debug handlers to kubelet server"
Oct 8 19:52:36.049880 kubelet[2546]: I1008 19:52:36.049865 2546 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 8 19:52:36.051676 kubelet[2546]: I1008 19:52:36.051628 2546 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Oct 8 19:52:36.057430 kubelet[2546]: I1008 19:52:36.057383 2546 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Oct 8 19:52:36.057771 kubelet[2546]: I1008 19:52:36.057745 2546 volume_manager.go:289] "Starting Kubelet Volume Manager"
Oct 8 19:52:36.059450 kubelet[2546]: E1008 19:52:36.058229 2546 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 8 19:52:36.059450 kubelet[2546]: I1008 19:52:36.058594 2546 reconciler.go:26] "Reconciler: start to sync state"
Oct 8 19:52:36.060769 kubelet[2546]: I1008 19:52:36.060430 2546 factory.go:221] Registration of the systemd container factory successfully
Oct 8 19:52:36.060769 kubelet[2546]: I1008 19:52:36.060534 2546 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 8 19:52:36.063195 kubelet[2546]: I1008 19:52:36.062680 2546 factory.go:221] Registration of the containerd container factory successfully
Oct 8 19:52:36.068385 kubelet[2546]: I1008 19:52:36.068341 2546 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 8 19:52:36.070530 kubelet[2546]: I1008 19:52:36.069980 2546 kubelet_network_linux.go:50] "Initialized iptables rules."
protocol="IPv6" Oct 8 19:52:36.070530 kubelet[2546]: I1008 19:52:36.070020 2546 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 8 19:52:36.070530 kubelet[2546]: I1008 19:52:36.070056 2546 kubelet.go:2321] "Starting kubelet main sync loop" Oct 8 19:52:36.070530 kubelet[2546]: E1008 19:52:36.070105 2546 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 8 19:52:36.111133 kubelet[2546]: I1008 19:52:36.111069 2546 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 8 19:52:36.111133 kubelet[2546]: I1008 19:52:36.111094 2546 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 8 19:52:36.111133 kubelet[2546]: I1008 19:52:36.111117 2546 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:52:36.111363 kubelet[2546]: I1008 19:52:36.111296 2546 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 8 19:52:36.111363 kubelet[2546]: I1008 19:52:36.111311 2546 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 8 19:52:36.111363 kubelet[2546]: I1008 19:52:36.111334 2546 policy_none.go:49] "None policy: Start" Oct 8 19:52:36.112004 kubelet[2546]: I1008 19:52:36.111975 2546 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 8 19:52:36.112075 kubelet[2546]: I1008 19:52:36.112008 2546 state_mem.go:35] "Initializing new in-memory state store" Oct 8 19:52:36.112162 kubelet[2546]: I1008 19:52:36.112142 2546 state_mem.go:75] "Updated machine memory state" Oct 8 19:52:36.117433 kubelet[2546]: I1008 19:52:36.117307 2546 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 19:52:36.117560 kubelet[2546]: I1008 19:52:36.117526 2546 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 8 19:52:36.117629 kubelet[2546]: I1008 19:52:36.117559 2546 container_log_manager.go:189] "Initializing container 
log rotate workers" workers=1 monitorPeriod="10s" Oct 8 19:52:36.117841 kubelet[2546]: I1008 19:52:36.117816 2546 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 19:52:36.224102 kubelet[2546]: I1008 19:52:36.223950 2546 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Oct 8 19:52:36.260734 kubelet[2546]: I1008 19:52:36.260683 2546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:52:36.260734 kubelet[2546]: I1008 19:52:36.260727 2546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:52:36.260881 kubelet[2546]: I1008 19:52:36.260754 2546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1510be5a54dc8eef4f27b06886c891dc-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"1510be5a54dc8eef4f27b06886c891dc\") " pod="kube-system/kube-scheduler-localhost" Oct 8 19:52:36.260881 kubelet[2546]: I1008 19:52:36.260774 2546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/950d1e64274d7725b71235a996e8735b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"950d1e64274d7725b71235a996e8735b\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:52:36.260881 kubelet[2546]: I1008 19:52:36.260794 2546 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/950d1e64274d7725b71235a996e8735b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"950d1e64274d7725b71235a996e8735b\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:52:36.260881 kubelet[2546]: I1008 19:52:36.260814 2546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/950d1e64274d7725b71235a996e8735b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"950d1e64274d7725b71235a996e8735b\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:52:36.260881 kubelet[2546]: I1008 19:52:36.260836 2546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:52:36.260999 kubelet[2546]: I1008 19:52:36.260858 2546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:52:36.260999 kubelet[2546]: I1008 19:52:36.260878 2546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:52:36.388903 kubelet[2546]: E1008 19:52:36.388853 
2546 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Oct 8 19:52:36.389189 kubelet[2546]: E1008 19:52:36.389133 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:36.413911 kubelet[2546]: E1008 19:52:36.413846 2546 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 8 19:52:36.414072 kubelet[2546]: E1008 19:52:36.413938 2546 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 8 19:52:36.414072 kubelet[2546]: E1008 19:52:36.414038 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:36.414230 kubelet[2546]: E1008 19:52:36.414198 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:36.415602 sudo[2580]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Oct 8 19:52:36.416388 kubelet[2546]: I1008 19:52:36.416341 2546 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Oct 8 19:52:36.416701 sudo[2580]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Oct 8 19:52:36.416968 kubelet[2546]: I1008 19:52:36.416942 2546 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Oct 8 19:52:36.894113 sudo[2580]: pam_unix(sudo:session): session closed for user root Oct 8 19:52:37.042072 kubelet[2546]: I1008 19:52:37.042025 2546 apiserver.go:52] 
"Watching apiserver" Oct 8 19:52:37.058299 kubelet[2546]: I1008 19:52:37.058242 2546 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 8 19:52:37.088197 kubelet[2546]: E1008 19:52:37.088152 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:37.088962 kubelet[2546]: E1008 19:52:37.088415 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:37.088962 kubelet[2546]: E1008 19:52:37.088918 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:38.090023 kubelet[2546]: E1008 19:52:38.089966 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:38.483414 sudo[1642]: pam_unix(sudo:session): session closed for user root Oct 8 19:52:38.486259 sshd[1639]: pam_unix(sshd:session): session closed for user core Oct 8 19:52:38.490761 systemd[1]: sshd@6-10.0.0.24:22-10.0.0.1:34922.service: Deactivated successfully. Oct 8 19:52:38.493083 systemd[1]: session-7.scope: Deactivated successfully. Oct 8 19:52:38.493367 systemd[1]: session-7.scope: Consumed 5.600s CPU time, 158.8M memory peak, 0B memory swap peak. Oct 8 19:52:38.493804 systemd-logind[1443]: Session 7 logged out. Waiting for processes to exit. Oct 8 19:52:38.494793 systemd-logind[1443]: Removed session 7. 
Oct 8 19:52:40.495655 kubelet[2546]: E1008 19:52:40.495610 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:40.500122 kubelet[2546]: I1008 19:52:40.500085 2546 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 8 19:52:40.500429 containerd[1462]: time="2024-10-08T19:52:40.500388755Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 8 19:52:40.500847 kubelet[2546]: I1008 19:52:40.500594 2546 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 8 19:52:41.094428 kubelet[2546]: E1008 19:52:41.094351 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:41.443064 kubelet[2546]: E1008 19:52:41.442878 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:41.747323 systemd[1]: Created slice kubepods-burstable-pod8d0164cf_8490_40f3_9f63_c56d4d161565.slice - libcontainer container kubepods-burstable-pod8d0164cf_8490_40f3_9f63_c56d4d161565.slice. Oct 8 19:52:41.751684 systemd[1]: Created slice kubepods-besteffort-pod544e84c9_6bde_451d_a15b_ede760afcf38.slice - libcontainer container kubepods-besteffort-pod544e84c9_6bde_451d_a15b_ede760afcf38.slice. Oct 8 19:52:41.796780 systemd[1]: Created slice kubepods-besteffort-poda8cd8253_da9c_4cca_b85f_0457e4cc678e.slice - libcontainer container kubepods-besteffort-poda8cd8253_da9c_4cca_b85f_0457e4cc678e.slice. 
Oct 8 19:52:41.843124 kubelet[2546]: I1008 19:52:41.843054 2546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8d0164cf-8490-40f3-9f63-c56d4d161565-host-proc-sys-kernel\") pod \"cilium-2gfrd\" (UID: \"8d0164cf-8490-40f3-9f63-c56d4d161565\") " pod="kube-system/cilium-2gfrd" Oct 8 19:52:41.843124 kubelet[2546]: I1008 19:52:41.843092 2546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8d0164cf-8490-40f3-9f63-c56d4d161565-etc-cni-netd\") pod \"cilium-2gfrd\" (UID: \"8d0164cf-8490-40f3-9f63-c56d4d161565\") " pod="kube-system/cilium-2gfrd" Oct 8 19:52:41.843124 kubelet[2546]: I1008 19:52:41.843108 2546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8d0164cf-8490-40f3-9f63-c56d4d161565-clustermesh-secrets\") pod \"cilium-2gfrd\" (UID: \"8d0164cf-8490-40f3-9f63-c56d4d161565\") " pod="kube-system/cilium-2gfrd" Oct 8 19:52:41.843124 kubelet[2546]: I1008 19:52:41.843127 2546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/544e84c9-6bde-451d-a15b-ede760afcf38-lib-modules\") pod \"kube-proxy-wd4hr\" (UID: \"544e84c9-6bde-451d-a15b-ede760afcf38\") " pod="kube-system/kube-proxy-wd4hr" Oct 8 19:52:41.843718 kubelet[2546]: I1008 19:52:41.843163 2546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8d0164cf-8490-40f3-9f63-c56d4d161565-bpf-maps\") pod \"cilium-2gfrd\" (UID: \"8d0164cf-8490-40f3-9f63-c56d4d161565\") " pod="kube-system/cilium-2gfrd" Oct 8 19:52:41.843718 kubelet[2546]: I1008 19:52:41.843180 2546 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8d0164cf-8490-40f3-9f63-c56d4d161565-host-proc-sys-net\") pod \"cilium-2gfrd\" (UID: \"8d0164cf-8490-40f3-9f63-c56d4d161565\") " pod="kube-system/cilium-2gfrd" Oct 8 19:52:41.843718 kubelet[2546]: I1008 19:52:41.843194 2546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndcr2\" (UniqueName: \"kubernetes.io/projected/544e84c9-6bde-451d-a15b-ede760afcf38-kube-api-access-ndcr2\") pod \"kube-proxy-wd4hr\" (UID: \"544e84c9-6bde-451d-a15b-ede760afcf38\") " pod="kube-system/kube-proxy-wd4hr" Oct 8 19:52:41.843718 kubelet[2546]: I1008 19:52:41.843210 2546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8d0164cf-8490-40f3-9f63-c56d4d161565-cilium-cgroup\") pod \"cilium-2gfrd\" (UID: \"8d0164cf-8490-40f3-9f63-c56d4d161565\") " pod="kube-system/cilium-2gfrd" Oct 8 19:52:41.843718 kubelet[2546]: I1008 19:52:41.843225 2546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8d0164cf-8490-40f3-9f63-c56d4d161565-lib-modules\") pod \"cilium-2gfrd\" (UID: \"8d0164cf-8490-40f3-9f63-c56d4d161565\") " pod="kube-system/cilium-2gfrd" Oct 8 19:52:41.843718 kubelet[2546]: I1008 19:52:41.843239 2546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/544e84c9-6bde-451d-a15b-ede760afcf38-kube-proxy\") pod \"kube-proxy-wd4hr\" (UID: \"544e84c9-6bde-451d-a15b-ede760afcf38\") " pod="kube-system/kube-proxy-wd4hr" Oct 8 19:52:41.843856 kubelet[2546]: I1008 19:52:41.843252 2546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/8d0164cf-8490-40f3-9f63-c56d4d161565-cilium-run\") pod \"cilium-2gfrd\" (UID: \"8d0164cf-8490-40f3-9f63-c56d4d161565\") " pod="kube-system/cilium-2gfrd" Oct 8 19:52:41.843856 kubelet[2546]: I1008 19:52:41.843285 2546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt5jf\" (UniqueName: \"kubernetes.io/projected/8d0164cf-8490-40f3-9f63-c56d4d161565-kube-api-access-kt5jf\") pod \"cilium-2gfrd\" (UID: \"8d0164cf-8490-40f3-9f63-c56d4d161565\") " pod="kube-system/cilium-2gfrd" Oct 8 19:52:41.843856 kubelet[2546]: I1008 19:52:41.843319 2546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8d0164cf-8490-40f3-9f63-c56d4d161565-cni-path\") pod \"cilium-2gfrd\" (UID: \"8d0164cf-8490-40f3-9f63-c56d4d161565\") " pod="kube-system/cilium-2gfrd" Oct 8 19:52:41.843856 kubelet[2546]: I1008 19:52:41.843336 2546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8d0164cf-8490-40f3-9f63-c56d4d161565-xtables-lock\") pod \"cilium-2gfrd\" (UID: \"8d0164cf-8490-40f3-9f63-c56d4d161565\") " pod="kube-system/cilium-2gfrd" Oct 8 19:52:41.843856 kubelet[2546]: I1008 19:52:41.843363 2546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/544e84c9-6bde-451d-a15b-ede760afcf38-xtables-lock\") pod \"kube-proxy-wd4hr\" (UID: \"544e84c9-6bde-451d-a15b-ede760afcf38\") " pod="kube-system/kube-proxy-wd4hr" Oct 8 19:52:41.843856 kubelet[2546]: I1008 19:52:41.843378 2546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8d0164cf-8490-40f3-9f63-c56d4d161565-cilium-config-path\") pod \"cilium-2gfrd\" (UID: 
\"8d0164cf-8490-40f3-9f63-c56d4d161565\") " pod="kube-system/cilium-2gfrd" Oct 8 19:52:41.844012 kubelet[2546]: I1008 19:52:41.843391 2546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8d0164cf-8490-40f3-9f63-c56d4d161565-hubble-tls\") pod \"cilium-2gfrd\" (UID: \"8d0164cf-8490-40f3-9f63-c56d4d161565\") " pod="kube-system/cilium-2gfrd" Oct 8 19:52:41.844012 kubelet[2546]: I1008 19:52:41.843405 2546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8d0164cf-8490-40f3-9f63-c56d4d161565-hostproc\") pod \"cilium-2gfrd\" (UID: \"8d0164cf-8490-40f3-9f63-c56d4d161565\") " pod="kube-system/cilium-2gfrd" Oct 8 19:52:41.944072 kubelet[2546]: I1008 19:52:41.943947 2546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a8cd8253-da9c-4cca-b85f-0457e4cc678e-cilium-config-path\") pod \"cilium-operator-5d85765b45-pbtgp\" (UID: \"a8cd8253-da9c-4cca-b85f-0457e4cc678e\") " pod="kube-system/cilium-operator-5d85765b45-pbtgp" Oct 8 19:52:41.944072 kubelet[2546]: I1008 19:52:41.944012 2546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjs2g\" (UniqueName: \"kubernetes.io/projected/a8cd8253-da9c-4cca-b85f-0457e4cc678e-kube-api-access-gjs2g\") pod \"cilium-operator-5d85765b45-pbtgp\" (UID: \"a8cd8253-da9c-4cca-b85f-0457e4cc678e\") " pod="kube-system/cilium-operator-5d85765b45-pbtgp" Oct 8 19:52:42.052775 kubelet[2546]: E1008 19:52:42.052225 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:42.053313 containerd[1462]: time="2024-10-08T19:52:42.053236747Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:cilium-2gfrd,Uid:8d0164cf-8490-40f3-9f63-c56d4d161565,Namespace:kube-system,Attempt:0,}" Oct 8 19:52:42.060510 kubelet[2546]: E1008 19:52:42.060461 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:42.061438 containerd[1462]: time="2024-10-08T19:52:42.061215514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wd4hr,Uid:544e84c9-6bde-451d-a15b-ede760afcf38,Namespace:kube-system,Attempt:0,}" Oct 8 19:52:42.098138 containerd[1462]: time="2024-10-08T19:52:42.096020145Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:52:42.098138 containerd[1462]: time="2024-10-08T19:52:42.096163084Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:52:42.098450 kubelet[2546]: E1008 19:52:42.096727 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:42.099436 containerd[1462]: time="2024-10-08T19:52:42.098921361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:52:42.101153 containerd[1462]: time="2024-10-08T19:52:42.099933000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:52:42.101237 kubelet[2546]: E1008 19:52:42.100487 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:42.104157 containerd[1462]: time="2024-10-08T19:52:42.103655688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-pbtgp,Uid:a8cd8253-da9c-4cca-b85f-0457e4cc678e,Namespace:kube-system,Attempt:0,}" Oct 8 19:52:42.116563 containerd[1462]: time="2024-10-08T19:52:42.115771050Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:52:42.116563 containerd[1462]: time="2024-10-08T19:52:42.115976195Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:52:42.116563 containerd[1462]: time="2024-10-08T19:52:42.116037660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:52:42.116563 containerd[1462]: time="2024-10-08T19:52:42.116313929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:52:42.132629 systemd[1]: Started cri-containerd-68a44df5d4c3537d64922eda3f0c8c7b9e6eb2499cd79f551524c3f08f2bf62c.scope - libcontainer container 68a44df5d4c3537d64922eda3f0c8c7b9e6eb2499cd79f551524c3f08f2bf62c. Oct 8 19:52:42.139107 systemd[1]: Started cri-containerd-7877d7fa1152aa7d4bcfe063a0203d0d8402548718866271691de286ba1b9422.scope - libcontainer container 7877d7fa1152aa7d4bcfe063a0203d0d8402548718866271691de286ba1b9422. Oct 8 19:52:42.169634 containerd[1462]: time="2024-10-08T19:52:42.169345535Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:52:42.169634 containerd[1462]: time="2024-10-08T19:52:42.169447967Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:52:42.169634 containerd[1462]: time="2024-10-08T19:52:42.169499273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:52:42.170317 containerd[1462]: time="2024-10-08T19:52:42.169702554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:52:42.188088 containerd[1462]: time="2024-10-08T19:52:42.188021371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2gfrd,Uid:8d0164cf-8490-40f3-9f63-c56d4d161565,Namespace:kube-system,Attempt:0,} returns sandbox id \"68a44df5d4c3537d64922eda3f0c8c7b9e6eb2499cd79f551524c3f08f2bf62c\"" Oct 8 19:52:42.192199 kubelet[2546]: E1008 19:52:42.190712 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:42.195413 containerd[1462]: time="2024-10-08T19:52:42.194309434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wd4hr,Uid:544e84c9-6bde-451d-a15b-ede760afcf38,Namespace:kube-system,Attempt:0,} returns sandbox id \"7877d7fa1152aa7d4bcfe063a0203d0d8402548718866271691de286ba1b9422\"" Oct 8 19:52:42.196025 containerd[1462]: time="2024-10-08T19:52:42.195984589Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 8 19:52:42.196847 kubelet[2546]: E1008 19:52:42.196820 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 
19:52:42.197590 systemd[1]: Started cri-containerd-7609d18180aa4ed194977ef55738a5e2c3ae1018c976baf3eeb7145fca7fb20b.scope - libcontainer container 7609d18180aa4ed194977ef55738a5e2c3ae1018c976baf3eeb7145fca7fb20b. Oct 8 19:52:42.200849 containerd[1462]: time="2024-10-08T19:52:42.200759110Z" level=info msg="CreateContainer within sandbox \"7877d7fa1152aa7d4bcfe063a0203d0d8402548718866271691de286ba1b9422\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 8 19:52:42.228818 containerd[1462]: time="2024-10-08T19:52:42.228709637Z" level=info msg="CreateContainer within sandbox \"7877d7fa1152aa7d4bcfe063a0203d0d8402548718866271691de286ba1b9422\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5577743b92844e3af9c488c3be8343ce899ca1c472dc81031c5415d3f9028097\"" Oct 8 19:52:42.229620 containerd[1462]: time="2024-10-08T19:52:42.229590220Z" level=info msg="StartContainer for \"5577743b92844e3af9c488c3be8343ce899ca1c472dc81031c5415d3f9028097\"" Oct 8 19:52:42.256413 containerd[1462]: time="2024-10-08T19:52:42.256234043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-pbtgp,Uid:a8cd8253-da9c-4cca-b85f-0457e4cc678e,Namespace:kube-system,Attempt:0,} returns sandbox id \"7609d18180aa4ed194977ef55738a5e2c3ae1018c976baf3eeb7145fca7fb20b\"" Oct 8 19:52:42.257031 kubelet[2546]: E1008 19:52:42.256995 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:42.277640 systemd[1]: Started cri-containerd-5577743b92844e3af9c488c3be8343ce899ca1c472dc81031c5415d3f9028097.scope - libcontainer container 5577743b92844e3af9c488c3be8343ce899ca1c472dc81031c5415d3f9028097. 
Oct 8 19:52:42.320405 containerd[1462]: time="2024-10-08T19:52:42.319231626Z" level=info msg="StartContainer for \"5577743b92844e3af9c488c3be8343ce899ca1c472dc81031c5415d3f9028097\" returns successfully" Oct 8 19:52:42.612314 kubelet[2546]: E1008 19:52:42.612135 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:43.101073 kubelet[2546]: E1008 19:52:43.101018 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:43.102480 kubelet[2546]: E1008 19:52:43.101957 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:43.334360 kubelet[2546]: I1008 19:52:43.334204 2546 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wd4hr" podStartSLOduration=3.334151608 podStartE2EDuration="3.334151608s" podCreationTimestamp="2024-10-08 19:52:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:52:43.333639737 +0000 UTC m=+7.354231942" watchObservedRunningTime="2024-10-08 19:52:43.334151608 +0000 UTC m=+7.354743803" Oct 8 19:52:44.103192 kubelet[2546]: E1008 19:52:44.103148 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:50.107085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3004117245.mount: Deactivated successfully. 
Oct 8 19:52:54.687865 containerd[1462]: time="2024-10-08T19:52:54.687752191Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:52:54.688944 containerd[1462]: time="2024-10-08T19:52:54.688866930Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735331" Oct 8 19:52:54.690766 containerd[1462]: time="2024-10-08T19:52:54.690666700Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:52:54.692482 containerd[1462]: time="2024-10-08T19:52:54.692427436Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.496388866s" Oct 8 19:52:54.692482 containerd[1462]: time="2024-10-08T19:52:54.692477991Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Oct 8 19:52:54.698803 containerd[1462]: time="2024-10-08T19:52:54.698754874Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 8 19:52:54.716645 containerd[1462]: time="2024-10-08T19:52:54.716590715Z" level=info msg="CreateContainer within sandbox \"68a44df5d4c3537d64922eda3f0c8c7b9e6eb2499cd79f551524c3f08f2bf62c\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 8 19:52:54.737703 containerd[1462]: time="2024-10-08T19:52:54.737626634Z" level=info msg="CreateContainer within sandbox \"68a44df5d4c3537d64922eda3f0c8c7b9e6eb2499cd79f551524c3f08f2bf62c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"48ecd504608a641b0eccf3b2d8b1255382f64e402bed60b9f4fcf14bb35b6bed\"" Oct 8 19:52:54.741209 containerd[1462]: time="2024-10-08T19:52:54.741115315Z" level=info msg="StartContainer for \"48ecd504608a641b0eccf3b2d8b1255382f64e402bed60b9f4fcf14bb35b6bed\"" Oct 8 19:52:54.776449 systemd[1]: Started cri-containerd-48ecd504608a641b0eccf3b2d8b1255382f64e402bed60b9f4fcf14bb35b6bed.scope - libcontainer container 48ecd504608a641b0eccf3b2d8b1255382f64e402bed60b9f4fcf14bb35b6bed. Oct 8 19:52:54.813781 containerd[1462]: time="2024-10-08T19:52:54.813606283Z" level=info msg="StartContainer for \"48ecd504608a641b0eccf3b2d8b1255382f64e402bed60b9f4fcf14bb35b6bed\" returns successfully" Oct 8 19:52:54.828020 systemd[1]: cri-containerd-48ecd504608a641b0eccf3b2d8b1255382f64e402bed60b9f4fcf14bb35b6bed.scope: Deactivated successfully. Oct 8 19:52:55.159121 kubelet[2546]: E1008 19:52:55.159068 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:55.731694 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-48ecd504608a641b0eccf3b2d8b1255382f64e402bed60b9f4fcf14bb35b6bed-rootfs.mount: Deactivated successfully. 
Oct 8 19:52:56.160101 kubelet[2546]: E1008 19:52:56.160013 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:56.173847 containerd[1462]: time="2024-10-08T19:52:56.173759923Z" level=info msg="shim disconnected" id=48ecd504608a641b0eccf3b2d8b1255382f64e402bed60b9f4fcf14bb35b6bed namespace=k8s.io Oct 8 19:52:56.173847 containerd[1462]: time="2024-10-08T19:52:56.173838496Z" level=warning msg="cleaning up after shim disconnected" id=48ecd504608a641b0eccf3b2d8b1255382f64e402bed60b9f4fcf14bb35b6bed namespace=k8s.io Oct 8 19:52:56.173847 containerd[1462]: time="2024-10-08T19:52:56.173853465Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:52:57.163173 kubelet[2546]: E1008 19:52:57.163131 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:57.166409 containerd[1462]: time="2024-10-08T19:52:57.166340521Z" level=info msg="CreateContainer within sandbox \"68a44df5d4c3537d64922eda3f0c8c7b9e6eb2499cd79f551524c3f08f2bf62c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 8 19:52:57.432026 containerd[1462]: time="2024-10-08T19:52:57.431556849Z" level=info msg="CreateContainer within sandbox \"68a44df5d4c3537d64922eda3f0c8c7b9e6eb2499cd79f551524c3f08f2bf62c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4385dcbd76849a04d3fc12f60a7bfd1f8237bd0335b3cb015c96bb80a8fb1077\"" Oct 8 19:52:57.434260 containerd[1462]: time="2024-10-08T19:52:57.434123040Z" level=info msg="StartContainer for \"4385dcbd76849a04d3fc12f60a7bfd1f8237bd0335b3cb015c96bb80a8fb1077\"" Oct 8 19:52:57.474496 systemd[1]: Started cri-containerd-4385dcbd76849a04d3fc12f60a7bfd1f8237bd0335b3cb015c96bb80a8fb1077.scope - libcontainer container 
4385dcbd76849a04d3fc12f60a7bfd1f8237bd0335b3cb015c96bb80a8fb1077. Oct 8 19:52:57.571546 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 8 19:52:57.572342 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 8 19:52:57.572529 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Oct 8 19:52:57.576118 containerd[1462]: time="2024-10-08T19:52:57.576038150Z" level=info msg="StartContainer for \"4385dcbd76849a04d3fc12f60a7bfd1f8237bd0335b3cb015c96bb80a8fb1077\" returns successfully" Oct 8 19:52:57.579812 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 8 19:52:57.580159 systemd[1]: cri-containerd-4385dcbd76849a04d3fc12f60a7bfd1f8237bd0335b3cb015c96bb80a8fb1077.scope: Deactivated successfully. Oct 8 19:52:57.620495 containerd[1462]: time="2024-10-08T19:52:57.620413128Z" level=info msg="shim disconnected" id=4385dcbd76849a04d3fc12f60a7bfd1f8237bd0335b3cb015c96bb80a8fb1077 namespace=k8s.io Oct 8 19:52:57.620495 containerd[1462]: time="2024-10-08T19:52:57.620491921Z" level=warning msg="cleaning up after shim disconnected" id=4385dcbd76849a04d3fc12f60a7bfd1f8237bd0335b3cb015c96bb80a8fb1077 namespace=k8s.io Oct 8 19:52:57.620495 containerd[1462]: time="2024-10-08T19:52:57.620503374Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:52:57.622042 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Oct 8 19:52:57.956489 containerd[1462]: time="2024-10-08T19:52:57.956407124Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:52:57.958955 containerd[1462]: time="2024-10-08T19:52:57.958842852Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907193" Oct 8 19:52:57.960769 containerd[1462]: time="2024-10-08T19:52:57.960707554Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:52:57.962238 containerd[1462]: time="2024-10-08T19:52:57.962169906Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.263363014s" Oct 8 19:52:57.962238 containerd[1462]: time="2024-10-08T19:52:57.962229993Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Oct 8 19:52:57.964681 containerd[1462]: time="2024-10-08T19:52:57.964650621Z" level=info msg="CreateContainer within sandbox \"7609d18180aa4ed194977ef55738a5e2c3ae1018c976baf3eeb7145fca7fb20b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 8 19:52:57.986050 containerd[1462]: time="2024-10-08T19:52:57.985957359Z" level=info msg="CreateContainer within sandbox 
\"7609d18180aa4ed194977ef55738a5e2c3ae1018c976baf3eeb7145fca7fb20b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"62f59201fe2f3fccd198cc8388d1fabdee303d151a215009ed98dd2b4babc406\"" Oct 8 19:52:57.986873 containerd[1462]: time="2024-10-08T19:52:57.986824809Z" level=info msg="StartContainer for \"62f59201fe2f3fccd198cc8388d1fabdee303d151a215009ed98dd2b4babc406\"" Oct 8 19:52:58.021468 systemd[1]: Started cri-containerd-62f59201fe2f3fccd198cc8388d1fabdee303d151a215009ed98dd2b4babc406.scope - libcontainer container 62f59201fe2f3fccd198cc8388d1fabdee303d151a215009ed98dd2b4babc406. Oct 8 19:52:58.057064 containerd[1462]: time="2024-10-08T19:52:58.056997570Z" level=info msg="StartContainer for \"62f59201fe2f3fccd198cc8388d1fabdee303d151a215009ed98dd2b4babc406\" returns successfully" Oct 8 19:52:58.168442 kubelet[2546]: E1008 19:52:58.168239 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:58.175025 containerd[1462]: time="2024-10-08T19:52:58.174215397Z" level=info msg="CreateContainer within sandbox \"68a44df5d4c3537d64922eda3f0c8c7b9e6eb2499cd79f551524c3f08f2bf62c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 8 19:52:58.177794 kubelet[2546]: E1008 19:52:58.177619 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:58.211021 containerd[1462]: time="2024-10-08T19:52:58.210859679Z" level=info msg="CreateContainer within sandbox \"68a44df5d4c3537d64922eda3f0c8c7b9e6eb2499cd79f551524c3f08f2bf62c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d462f466e25eb0f31a943796d26b4ee67de27e8d8dbaec61d78f338f0c343d99\"" Oct 8 19:52:58.212077 containerd[1462]: time="2024-10-08T19:52:58.212022670Z" level=info msg="StartContainer 
for \"d462f466e25eb0f31a943796d26b4ee67de27e8d8dbaec61d78f338f0c343d99\"" Oct 8 19:52:58.270497 systemd[1]: Started cri-containerd-d462f466e25eb0f31a943796d26b4ee67de27e8d8dbaec61d78f338f0c343d99.scope - libcontainer container d462f466e25eb0f31a943796d26b4ee67de27e8d8dbaec61d78f338f0c343d99. Oct 8 19:52:58.312373 systemd[1]: cri-containerd-d462f466e25eb0f31a943796d26b4ee67de27e8d8dbaec61d78f338f0c343d99.scope: Deactivated successfully. Oct 8 19:52:58.424436 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4385dcbd76849a04d3fc12f60a7bfd1f8237bd0335b3cb015c96bb80a8fb1077-rootfs.mount: Deactivated successfully. Oct 8 19:52:58.635624 containerd[1462]: time="2024-10-08T19:52:58.635491764Z" level=info msg="StartContainer for \"d462f466e25eb0f31a943796d26b4ee67de27e8d8dbaec61d78f338f0c343d99\" returns successfully" Oct 8 19:52:58.698689 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d462f466e25eb0f31a943796d26b4ee67de27e8d8dbaec61d78f338f0c343d99-rootfs.mount: Deactivated successfully. 
Oct 8 19:52:58.883007 containerd[1462]: time="2024-10-08T19:52:58.882923627Z" level=info msg="shim disconnected" id=d462f466e25eb0f31a943796d26b4ee67de27e8d8dbaec61d78f338f0c343d99 namespace=k8s.io Oct 8 19:52:58.883007 containerd[1462]: time="2024-10-08T19:52:58.882995476Z" level=warning msg="cleaning up after shim disconnected" id=d462f466e25eb0f31a943796d26b4ee67de27e8d8dbaec61d78f338f0c343d99 namespace=k8s.io Oct 8 19:52:58.883007 containerd[1462]: time="2024-10-08T19:52:58.883007459Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:52:58.905091 containerd[1462]: time="2024-10-08T19:52:58.901891674Z" level=warning msg="cleanup warnings time=\"2024-10-08T19:52:58Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Oct 8 19:52:59.183175 kubelet[2546]: E1008 19:52:59.182431 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:59.185499 containerd[1462]: time="2024-10-08T19:52:59.185449534Z" level=info msg="CreateContainer within sandbox \"68a44df5d4c3537d64922eda3f0c8c7b9e6eb2499cd79f551524c3f08f2bf62c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Oct 8 19:52:59.265398 kubelet[2546]: I1008 19:52:59.265230 2546 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-pbtgp" podStartSLOduration=2.560811756 podStartE2EDuration="18.265199002s" podCreationTimestamp="2024-10-08 19:52:41 +0000 UTC" firstStartedPulling="2024-10-08 19:52:42.258946264 +0000 UTC m=+6.279538469" 
lastFinishedPulling="2024-10-08 19:52:57.96333352 +0000 UTC m=+21.983925715" observedRunningTime="2024-10-08 19:52:58.662169721 +0000 UTC m=+22.682761916" watchObservedRunningTime="2024-10-08 19:52:59.265199002 +0000 UTC m=+23.285791217" Oct 8 19:52:59.422185 containerd[1462]: time="2024-10-08T19:52:59.422100575Z" level=info msg="CreateContainer within sandbox \"68a44df5d4c3537d64922eda3f0c8c7b9e6eb2499cd79f551524c3f08f2bf62c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"53af3ff083272c30b90ae5941ba2c66dd1d56d34f86cd1f99b28df7ce3174f31\"" Oct 8 19:52:59.422930 containerd[1462]: time="2024-10-08T19:52:59.422788004Z" level=info msg="StartContainer for \"53af3ff083272c30b90ae5941ba2c66dd1d56d34f86cd1f99b28df7ce3174f31\"" Oct 8 19:52:59.485440 systemd[1]: Started cri-containerd-53af3ff083272c30b90ae5941ba2c66dd1d56d34f86cd1f99b28df7ce3174f31.scope - libcontainer container 53af3ff083272c30b90ae5941ba2c66dd1d56d34f86cd1f99b28df7ce3174f31. Oct 8 19:52:59.511064 systemd[1]: cri-containerd-53af3ff083272c30b90ae5941ba2c66dd1d56d34f86cd1f99b28df7ce3174f31.scope: Deactivated successfully. Oct 8 19:52:59.584090 containerd[1462]: time="2024-10-08T19:52:59.583997818Z" level=info msg="StartContainer for \"53af3ff083272c30b90ae5941ba2c66dd1d56d34f86cd1f99b28df7ce3174f31\" returns successfully" Oct 8 19:52:59.605850 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-53af3ff083272c30b90ae5941ba2c66dd1d56d34f86cd1f99b28df7ce3174f31-rootfs.mount: Deactivated successfully. 
Oct 8 19:52:59.878594 containerd[1462]: time="2024-10-08T19:52:59.878510726Z" level=info msg="shim disconnected" id=53af3ff083272c30b90ae5941ba2c66dd1d56d34f86cd1f99b28df7ce3174f31 namespace=k8s.io Oct 8 19:52:59.878594 containerd[1462]: time="2024-10-08T19:52:59.878575492Z" level=warning msg="cleaning up after shim disconnected" id=53af3ff083272c30b90ae5941ba2c66dd1d56d34f86cd1f99b28df7ce3174f31 namespace=k8s.io Oct 8 19:52:59.878594 containerd[1462]: time="2024-10-08T19:52:59.878588116Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:53:00.183682 kubelet[2546]: E1008 19:53:00.183512 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:00.186491 containerd[1462]: time="2024-10-08T19:53:00.186437697Z" level=info msg="CreateContainer within sandbox \"68a44df5d4c3537d64922eda3f0c8c7b9e6eb2499cd79f551524c3f08f2bf62c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Oct 8 19:53:00.246123 containerd[1462]: time="2024-10-08T19:53:00.246027541Z" level=info msg="CreateContainer within sandbox \"68a44df5d4c3537d64922eda3f0c8c7b9e6eb2499cd79f551524c3f08f2bf62c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1733b3125e457f4f7d5b45949ac4d604864b3f36e2b9ef033643a0ab4c4abae4\"" Oct 8 19:53:00.246974 containerd[1462]: time="2024-10-08T19:53:00.246925346Z" level=info msg="StartContainer for \"1733b3125e457f4f7d5b45949ac4d604864b3f36e2b9ef033643a0ab4c4abae4\"" Oct 8 19:53:00.282475 systemd[1]: Started cri-containerd-1733b3125e457f4f7d5b45949ac4d604864b3f36e2b9ef033643a0ab4c4abae4.scope - libcontainer container 1733b3125e457f4f7d5b45949ac4d604864b3f36e2b9ef033643a0ab4c4abae4. 
Oct 8 19:53:00.322645 containerd[1462]: time="2024-10-08T19:53:00.322585316Z" level=info msg="StartContainer for \"1733b3125e457f4f7d5b45949ac4d604864b3f36e2b9ef033643a0ab4c4abae4\" returns successfully" Oct 8 19:53:00.466247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2151910025.mount: Deactivated successfully. Oct 8 19:53:00.480497 kubelet[2546]: I1008 19:53:00.480448 2546 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Oct 8 19:53:00.606576 kubelet[2546]: W1008 19:53:00.606533 2546 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Oct 8 19:53:00.606777 kubelet[2546]: E1008 19:53:00.606580 2546 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Oct 8 19:53:00.612035 systemd[1]: Created slice kubepods-burstable-pod481308d0_70d6_4062_856e_65edabbcdc76.slice - libcontainer container kubepods-burstable-pod481308d0_70d6_4062_856e_65edabbcdc76.slice. Oct 8 19:53:00.617586 systemd[1]: Created slice kubepods-burstable-pod31bb8690_5a9a_4595_8b32_b6500fa00dd4.slice - libcontainer container kubepods-burstable-pod31bb8690_5a9a_4595_8b32_b6500fa00dd4.slice. 
Oct 8 19:53:00.741637 kubelet[2546]: I1008 19:53:00.741457 2546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwqgw\" (UniqueName: \"kubernetes.io/projected/481308d0-70d6-4062-856e-65edabbcdc76-kube-api-access-dwqgw\") pod \"coredns-6f6b679f8f-dqmtr\" (UID: \"481308d0-70d6-4062-856e-65edabbcdc76\") " pod="kube-system/coredns-6f6b679f8f-dqmtr" Oct 8 19:53:00.741637 kubelet[2546]: I1008 19:53:00.741518 2546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c474q\" (UniqueName: \"kubernetes.io/projected/31bb8690-5a9a-4595-8b32-b6500fa00dd4-kube-api-access-c474q\") pod \"coredns-6f6b679f8f-j4n65\" (UID: \"31bb8690-5a9a-4595-8b32-b6500fa00dd4\") " pod="kube-system/coredns-6f6b679f8f-j4n65" Oct 8 19:53:00.741637 kubelet[2546]: I1008 19:53:00.741536 2546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31bb8690-5a9a-4595-8b32-b6500fa00dd4-config-volume\") pod \"coredns-6f6b679f8f-j4n65\" (UID: \"31bb8690-5a9a-4595-8b32-b6500fa00dd4\") " pod="kube-system/coredns-6f6b679f8f-j4n65" Oct 8 19:53:00.741637 kubelet[2546]: I1008 19:53:00.741556 2546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/481308d0-70d6-4062-856e-65edabbcdc76-config-volume\") pod \"coredns-6f6b679f8f-dqmtr\" (UID: \"481308d0-70d6-4062-856e-65edabbcdc76\") " pod="kube-system/coredns-6f6b679f8f-dqmtr" Oct 8 19:53:01.188746 kubelet[2546]: E1008 19:53:01.188687 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:01.333418 kubelet[2546]: I1008 19:53:01.333315 2546 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/cilium-2gfrd" podStartSLOduration=8.829409934 podStartE2EDuration="21.333292725s" podCreationTimestamp="2024-10-08 19:52:40 +0000 UTC" firstStartedPulling="2024-10-08 19:52:42.194614327 +0000 UTC m=+6.215206522" lastFinishedPulling="2024-10-08 19:52:54.698497118 +0000 UTC m=+18.719089313" observedRunningTime="2024-10-08 19:53:01.333112206 +0000 UTC m=+25.353704401" watchObservedRunningTime="2024-10-08 19:53:01.333292725 +0000 UTC m=+25.353884940" Oct 8 19:53:01.830835 kubelet[2546]: E1008 19:53:01.830766 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:01.831003 kubelet[2546]: E1008 19:53:01.830907 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:01.837289 containerd[1462]: time="2024-10-08T19:53:01.836957948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-dqmtr,Uid:481308d0-70d6-4062-856e-65edabbcdc76,Namespace:kube-system,Attempt:0,}" Oct 8 19:53:01.847783 containerd[1462]: time="2024-10-08T19:53:01.847720433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-j4n65,Uid:31bb8690-5a9a-4595-8b32-b6500fa00dd4,Namespace:kube-system,Attempt:0,}" Oct 8 19:53:02.190720 kubelet[2546]: E1008 19:53:02.190555 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:02.908393 systemd-networkd[1387]: cilium_host: Link UP Oct 8 19:53:02.908630 systemd-networkd[1387]: cilium_net: Link UP Oct 8 19:53:02.910308 systemd-networkd[1387]: cilium_net: Gained carrier Oct 8 19:53:02.911154 systemd-networkd[1387]: cilium_host: Gained carrier Oct 8 19:53:02.911924 systemd-networkd[1387]: cilium_net: Gained 
IPv6LL Oct 8 19:53:02.912911 systemd-networkd[1387]: cilium_host: Gained IPv6LL Oct 8 19:53:03.035455 systemd-networkd[1387]: cilium_vxlan: Link UP Oct 8 19:53:03.035470 systemd-networkd[1387]: cilium_vxlan: Gained carrier Oct 8 19:53:03.315332 kernel: NET: Registered PF_ALG protocol family Oct 8 19:53:04.112990 systemd-networkd[1387]: lxc_health: Link UP Oct 8 19:53:04.118188 systemd-networkd[1387]: lxc_health: Gained carrier Oct 8 19:53:04.486055 systemd-networkd[1387]: lxc214cfc8d5b22: Link UP Oct 8 19:53:04.494314 kernel: eth0: renamed from tmpcf702 Oct 8 19:53:04.504981 systemd-networkd[1387]: lxc7e408bff1f04: Link UP Oct 8 19:53:04.554684 kernel: eth0: renamed from tmpd5a4b Oct 8 19:53:04.560139 systemd-networkd[1387]: lxc214cfc8d5b22: Gained carrier Oct 8 19:53:04.560781 systemd-networkd[1387]: lxc7e408bff1f04: Gained carrier Oct 8 19:53:04.925572 systemd-networkd[1387]: cilium_vxlan: Gained IPv6LL Oct 8 19:53:06.054895 kubelet[2546]: E1008 19:53:06.054628 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:06.081422 systemd-networkd[1387]: lxc214cfc8d5b22: Gained IPv6LL Oct 8 19:53:06.081791 systemd-networkd[1387]: lxc_health: Gained IPv6LL Oct 8 19:53:06.082038 systemd-networkd[1387]: lxc7e408bff1f04: Gained IPv6LL Oct 8 19:53:06.195182 systemd[1]: Started sshd@7-10.0.0.24:22-10.0.0.1:35430.service - OpenSSH per-connection server daemon (10.0.0.1:35430). 
Oct 8 19:53:06.198216 kubelet[2546]: E1008 19:53:06.198193 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:06.251132 sshd[3761]: Accepted publickey for core from 10.0.0.1 port 35430 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 19:53:06.253869 sshd[3761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:53:06.265360 systemd-logind[1443]: New session 8 of user core. Oct 8 19:53:06.271363 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 8 19:53:06.461109 sshd[3761]: pam_unix(sshd:session): session closed for user core Oct 8 19:53:06.465893 systemd[1]: sshd@7-10.0.0.24:22-10.0.0.1:35430.service: Deactivated successfully. Oct 8 19:53:06.468014 systemd[1]: session-8.scope: Deactivated successfully. Oct 8 19:53:06.468778 systemd-logind[1443]: Session 8 logged out. Waiting for processes to exit. Oct 8 19:53:06.469978 systemd-logind[1443]: Removed session 8. Oct 8 19:53:07.200044 kubelet[2546]: E1008 19:53:07.199990 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:08.445369 containerd[1462]: time="2024-10-08T19:53:08.444752950Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:53:08.445369 containerd[1462]: time="2024-10-08T19:53:08.444850056Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:53:08.445369 containerd[1462]: time="2024-10-08T19:53:08.444860626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:53:08.445369 containerd[1462]: time="2024-10-08T19:53:08.444971069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:53:08.450844 containerd[1462]: time="2024-10-08T19:53:08.450102335Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:53:08.450844 containerd[1462]: time="2024-10-08T19:53:08.450167400Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:53:08.451026 containerd[1462]: time="2024-10-08T19:53:08.450181867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:53:08.451026 containerd[1462]: time="2024-10-08T19:53:08.450285497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:53:08.485531 systemd[1]: Started cri-containerd-cf7027120db4a1432cca99a7089314711855c15273c1312c39c58c163011d531.scope - libcontainer container cf7027120db4a1432cca99a7089314711855c15273c1312c39c58c163011d531. Oct 8 19:53:08.487989 systemd[1]: Started cri-containerd-d5a4bbb6c0a3e9f698f663a21434c8ca88850a04b6bc746c1ff1e9afd66b7ffd.scope - libcontainer container d5a4bbb6c0a3e9f698f663a21434c8ca88850a04b6bc746c1ff1e9afd66b7ffd. 
Oct 8 19:53:08.501873 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 8 19:53:08.503806 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 8 19:53:08.528168 containerd[1462]: time="2024-10-08T19:53:08.528127742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-j4n65,Uid:31bb8690-5a9a-4595-8b32-b6500fa00dd4,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf7027120db4a1432cca99a7089314711855c15273c1312c39c58c163011d531\"" Oct 8 19:53:08.529097 kubelet[2546]: E1008 19:53:08.528907 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:08.532324 containerd[1462]: time="2024-10-08T19:53:08.532231314Z" level=info msg="CreateContainer within sandbox \"cf7027120db4a1432cca99a7089314711855c15273c1312c39c58c163011d531\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 19:53:08.533848 containerd[1462]: time="2024-10-08T19:53:08.533681770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-dqmtr,Uid:481308d0-70d6-4062-856e-65edabbcdc76,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5a4bbb6c0a3e9f698f663a21434c8ca88850a04b6bc746c1ff1e9afd66b7ffd\"" Oct 8 19:53:08.536408 kubelet[2546]: E1008 19:53:08.536369 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:08.541410 containerd[1462]: time="2024-10-08T19:53:08.541292840Z" level=info msg="CreateContainer within sandbox \"d5a4bbb6c0a3e9f698f663a21434c8ca88850a04b6bc746c1ff1e9afd66b7ffd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 19:53:09.066297 containerd[1462]: time="2024-10-08T19:53:09.066196322Z" 
level=info msg="CreateContainer within sandbox \"cf7027120db4a1432cca99a7089314711855c15273c1312c39c58c163011d531\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"34ed562f954ed602d96b5fe772f5c3c845d8a0b58807faee94dabed46c16317c\"" Oct 8 19:53:09.067112 containerd[1462]: time="2024-10-08T19:53:09.067061583Z" level=info msg="StartContainer for \"34ed562f954ed602d96b5fe772f5c3c845d8a0b58807faee94dabed46c16317c\"" Oct 8 19:53:09.067951 containerd[1462]: time="2024-10-08T19:53:09.067874133Z" level=info msg="CreateContainer within sandbox \"d5a4bbb6c0a3e9f698f663a21434c8ca88850a04b6bc746c1ff1e9afd66b7ffd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"86076138df122a0503c26d062a5bc1ab47ec2891e2d6aa6b29a3fe9681e867bd\"" Oct 8 19:53:09.068736 containerd[1462]: time="2024-10-08T19:53:09.068704177Z" level=info msg="StartContainer for \"86076138df122a0503c26d062a5bc1ab47ec2891e2d6aa6b29a3fe9681e867bd\"" Oct 8 19:53:09.104508 systemd[1]: Started cri-containerd-34ed562f954ed602d96b5fe772f5c3c845d8a0b58807faee94dabed46c16317c.scope - libcontainer container 34ed562f954ed602d96b5fe772f5c3c845d8a0b58807faee94dabed46c16317c. Oct 8 19:53:09.106125 systemd[1]: Started cri-containerd-86076138df122a0503c26d062a5bc1ab47ec2891e2d6aa6b29a3fe9681e867bd.scope - libcontainer container 86076138df122a0503c26d062a5bc1ab47ec2891e2d6aa6b29a3fe9681e867bd. 
Oct 8 19:53:09.140847 containerd[1462]: time="2024-10-08T19:53:09.140784066Z" level=info msg="StartContainer for \"34ed562f954ed602d96b5fe772f5c3c845d8a0b58807faee94dabed46c16317c\" returns successfully" Oct 8 19:53:09.149727 containerd[1462]: time="2024-10-08T19:53:09.149652821Z" level=info msg="StartContainer for \"86076138df122a0503c26d062a5bc1ab47ec2891e2d6aa6b29a3fe9681e867bd\" returns successfully" Oct 8 19:53:09.206519 kubelet[2546]: E1008 19:53:09.206225 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:09.207950 kubelet[2546]: E1008 19:53:09.207919 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:09.223506 kubelet[2546]: I1008 19:53:09.223379 2546 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-dqmtr" podStartSLOduration=28.223359534 podStartE2EDuration="28.223359534s" podCreationTimestamp="2024-10-08 19:52:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:53:09.222344145 +0000 UTC m=+33.242936350" watchObservedRunningTime="2024-10-08 19:53:09.223359534 +0000 UTC m=+33.243951729" Oct 8 19:53:10.209164 kubelet[2546]: E1008 19:53:10.209102 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:10.210340 kubelet[2546]: E1008 19:53:10.210256 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:10.457517 kubelet[2546]: I1008 19:53:10.457422 2546 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-j4n65" podStartSLOduration=29.457398777 podStartE2EDuration="29.457398777s" podCreationTimestamp="2024-10-08 19:52:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:53:09.239685975 +0000 UTC m=+33.260278170" watchObservedRunningTime="2024-10-08 19:53:10.457398777 +0000 UTC m=+34.477990972" Oct 8 19:53:11.211003 kubelet[2546]: E1008 19:53:11.210924 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:11.212395 kubelet[2546]: E1008 19:53:11.211044 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:11.475873 systemd[1]: Started sshd@8-10.0.0.24:22-10.0.0.1:36626.service - OpenSSH per-connection server daemon (10.0.0.1:36626). Oct 8 19:53:11.512861 sshd[3945]: Accepted publickey for core from 10.0.0.1 port 36626 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 19:53:11.514604 sshd[3945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:53:11.519036 systemd-logind[1443]: New session 9 of user core. Oct 8 19:53:11.529513 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 8 19:53:11.662440 sshd[3945]: pam_unix(sshd:session): session closed for user core Oct 8 19:53:11.666239 systemd[1]: sshd@8-10.0.0.24:22-10.0.0.1:36626.service: Deactivated successfully. Oct 8 19:53:11.668494 systemd[1]: session-9.scope: Deactivated successfully. Oct 8 19:53:11.669206 systemd-logind[1443]: Session 9 logged out. Waiting for processes to exit. Oct 8 19:53:11.670609 systemd-logind[1443]: Removed session 9. 
Oct 8 19:53:16.676574 systemd[1]: Started sshd@9-10.0.0.24:22-10.0.0.1:36638.service - OpenSSH per-connection server daemon (10.0.0.1:36638).
Oct 8 19:53:16.713956 sshd[3969]: Accepted publickey for core from 10.0.0.1 port 36638 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:53:16.716746 sshd[3969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:53:16.723099 systemd-logind[1443]: New session 10 of user core.
Oct 8 19:53:16.731493 systemd[1]: Started session-10.scope - Session 10 of User core.
Oct 8 19:53:16.888605 sshd[3969]: pam_unix(sshd:session): session closed for user core
Oct 8 19:53:16.892876 systemd[1]: sshd@9-10.0.0.24:22-10.0.0.1:36638.service: Deactivated successfully.
Oct 8 19:53:16.895072 systemd[1]: session-10.scope: Deactivated successfully.
Oct 8 19:53:16.895866 systemd-logind[1443]: Session 10 logged out. Waiting for processes to exit.
Oct 8 19:53:16.897064 systemd-logind[1443]: Removed session 10.
Oct 8 19:53:21.900753 systemd[1]: Started sshd@10-10.0.0.24:22-10.0.0.1:36114.service - OpenSSH per-connection server daemon (10.0.0.1:36114).
Oct 8 19:53:21.936259 sshd[3985]: Accepted publickey for core from 10.0.0.1 port 36114 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:53:21.938109 sshd[3985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:53:21.942753 systemd-logind[1443]: New session 11 of user core.
Oct 8 19:53:21.955511 systemd[1]: Started session-11.scope - Session 11 of User core.
Oct 8 19:53:22.076088 sshd[3985]: pam_unix(sshd:session): session closed for user core
Oct 8 19:53:22.080466 systemd[1]: sshd@10-10.0.0.24:22-10.0.0.1:36114.service: Deactivated successfully.
Oct 8 19:53:22.083106 systemd[1]: session-11.scope: Deactivated successfully.
Oct 8 19:53:22.083810 systemd-logind[1443]: Session 11 logged out. Waiting for processes to exit.
Oct 8 19:53:22.084812 systemd-logind[1443]: Removed session 11.
Oct 8 19:53:27.090168 systemd[1]: Started sshd@11-10.0.0.24:22-10.0.0.1:36126.service - OpenSSH per-connection server daemon (10.0.0.1:36126).
Oct 8 19:53:27.128680 sshd[4001]: Accepted publickey for core from 10.0.0.1 port 36126 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:53:27.130844 sshd[4001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:53:27.135980 systemd-logind[1443]: New session 12 of user core.
Oct 8 19:53:27.146555 systemd[1]: Started session-12.scope - Session 12 of User core.
Oct 8 19:53:27.336302 sshd[4001]: pam_unix(sshd:session): session closed for user core
Oct 8 19:53:27.349159 systemd[1]: sshd@11-10.0.0.24:22-10.0.0.1:36126.service: Deactivated successfully.
Oct 8 19:53:27.352174 systemd[1]: session-12.scope: Deactivated successfully.
Oct 8 19:53:27.354504 systemd-logind[1443]: Session 12 logged out. Waiting for processes to exit.
Oct 8 19:53:27.363699 systemd[1]: Started sshd@12-10.0.0.24:22-10.0.0.1:36138.service - OpenSSH per-connection server daemon (10.0.0.1:36138).
Oct 8 19:53:27.364893 systemd-logind[1443]: Removed session 12.
Oct 8 19:53:27.397205 sshd[4017]: Accepted publickey for core from 10.0.0.1 port 36138 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:53:27.399370 sshd[4017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:53:27.404162 systemd-logind[1443]: New session 13 of user core.
Oct 8 19:53:27.411443 systemd[1]: Started session-13.scope - Session 13 of User core.
Oct 8 19:53:27.702737 sshd[4017]: pam_unix(sshd:session): session closed for user core
Oct 8 19:53:27.715724 systemd[1]: sshd@12-10.0.0.24:22-10.0.0.1:36138.service: Deactivated successfully.
Oct 8 19:53:27.718219 systemd[1]: session-13.scope: Deactivated successfully.
Oct 8 19:53:27.720856 systemd-logind[1443]: Session 13 logged out. Waiting for processes to exit.
Oct 8 19:53:27.733745 systemd[1]: Started sshd@13-10.0.0.24:22-10.0.0.1:36144.service - OpenSSH per-connection server daemon (10.0.0.1:36144).
Oct 8 19:53:27.734798 systemd-logind[1443]: Removed session 13.
Oct 8 19:53:27.767684 sshd[4029]: Accepted publickey for core from 10.0.0.1 port 36144 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:53:27.769456 sshd[4029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:53:27.774252 systemd-logind[1443]: New session 14 of user core.
Oct 8 19:53:27.785451 systemd[1]: Started session-14.scope - Session 14 of User core.
Oct 8 19:53:27.958475 sshd[4029]: pam_unix(sshd:session): session closed for user core
Oct 8 19:53:27.962807 systemd[1]: sshd@13-10.0.0.24:22-10.0.0.1:36144.service: Deactivated successfully.
Oct 8 19:53:27.964920 systemd[1]: session-14.scope: Deactivated successfully.
Oct 8 19:53:27.965714 systemd-logind[1443]: Session 14 logged out. Waiting for processes to exit.
Oct 8 19:53:27.966681 systemd-logind[1443]: Removed session 14.
Oct 8 19:53:32.973023 systemd[1]: Started sshd@14-10.0.0.24:22-10.0.0.1:49840.service - OpenSSH per-connection server daemon (10.0.0.1:49840).
Oct 8 19:53:33.006275 sshd[4043]: Accepted publickey for core from 10.0.0.1 port 49840 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:53:33.007975 sshd[4043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:53:33.012203 systemd-logind[1443]: New session 15 of user core.
Oct 8 19:53:33.020410 systemd[1]: Started session-15.scope - Session 15 of User core.
Oct 8 19:53:33.140833 sshd[4043]: pam_unix(sshd:session): session closed for user core
Oct 8 19:53:33.145479 systemd[1]: sshd@14-10.0.0.24:22-10.0.0.1:49840.service: Deactivated successfully.
Oct 8 19:53:33.147884 systemd[1]: session-15.scope: Deactivated successfully.
Oct 8 19:53:33.148607 systemd-logind[1443]: Session 15 logged out. Waiting for processes to exit.
Oct 8 19:53:33.149879 systemd-logind[1443]: Removed session 15.
Oct 8 19:53:38.169766 systemd[1]: Started sshd@15-10.0.0.24:22-10.0.0.1:49848.service - OpenSSH per-connection server daemon (10.0.0.1:49848).
Oct 8 19:53:38.211798 sshd[4059]: Accepted publickey for core from 10.0.0.1 port 49848 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:53:38.215625 sshd[4059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:53:38.224543 systemd-logind[1443]: New session 16 of user core.
Oct 8 19:53:38.234640 systemd[1]: Started session-16.scope - Session 16 of User core.
Oct 8 19:53:38.380775 sshd[4059]: pam_unix(sshd:session): session closed for user core
Oct 8 19:53:38.386411 systemd[1]: sshd@15-10.0.0.24:22-10.0.0.1:49848.service: Deactivated successfully.
Oct 8 19:53:38.389569 systemd[1]: session-16.scope: Deactivated successfully.
Oct 8 19:53:38.390358 systemd-logind[1443]: Session 16 logged out. Waiting for processes to exit.
Oct 8 19:53:38.391647 systemd-logind[1443]: Removed session 16.
Oct 8 19:53:43.400560 systemd[1]: Started sshd@16-10.0.0.24:22-10.0.0.1:51196.service - OpenSSH per-connection server daemon (10.0.0.1:51196).
Oct 8 19:53:43.427322 sshd[4076]: Accepted publickey for core from 10.0.0.1 port 51196 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:53:43.429201 sshd[4076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:53:43.433303 systemd-logind[1443]: New session 17 of user core.
Oct 8 19:53:43.442413 systemd[1]: Started session-17.scope - Session 17 of User core.
Oct 8 19:53:43.604608 sshd[4076]: pam_unix(sshd:session): session closed for user core
Oct 8 19:53:43.617088 systemd[1]: sshd@16-10.0.0.24:22-10.0.0.1:51196.service: Deactivated successfully.
Oct 8 19:53:43.618874 systemd[1]: session-17.scope: Deactivated successfully.
Oct 8 19:53:43.620672 systemd-logind[1443]: Session 17 logged out. Waiting for processes to exit.
Oct 8 19:53:43.622142 systemd[1]: Started sshd@17-10.0.0.24:22-10.0.0.1:51212.service - OpenSSH per-connection server daemon (10.0.0.1:51212).
Oct 8 19:53:43.623021 systemd-logind[1443]: Removed session 17.
Oct 8 19:53:43.655263 sshd[4090]: Accepted publickey for core from 10.0.0.1 port 51212 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:53:43.657118 sshd[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:53:43.661198 systemd-logind[1443]: New session 18 of user core.
Oct 8 19:53:43.671413 systemd[1]: Started session-18.scope - Session 18 of User core.
Oct 8 19:53:44.308750 sshd[4090]: pam_unix(sshd:session): session closed for user core
Oct 8 19:53:44.320170 systemd[1]: sshd@17-10.0.0.24:22-10.0.0.1:51212.service: Deactivated successfully.
Oct 8 19:53:44.322129 systemd[1]: session-18.scope: Deactivated successfully.
Oct 8 19:53:44.323690 systemd-logind[1443]: Session 18 logged out. Waiting for processes to exit.
Oct 8 19:53:44.325194 systemd[1]: Started sshd@18-10.0.0.24:22-10.0.0.1:51226.service - OpenSSH per-connection server daemon (10.0.0.1:51226).
Oct 8 19:53:44.326528 systemd-logind[1443]: Removed session 18.
Oct 8 19:53:44.374195 sshd[4103]: Accepted publickey for core from 10.0.0.1 port 51226 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:53:44.375832 sshd[4103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:53:44.380052 systemd-logind[1443]: New session 19 of user core.
Oct 8 19:53:44.389423 systemd[1]: Started session-19.scope - Session 19 of User core.
Oct 8 19:53:45.071510 kubelet[2546]: E1008 19:53:45.071460 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:53:46.032986 sshd[4103]: pam_unix(sshd:session): session closed for user core
Oct 8 19:53:46.043100 systemd[1]: sshd@18-10.0.0.24:22-10.0.0.1:51226.service: Deactivated successfully.
Oct 8 19:53:46.045214 systemd[1]: session-19.scope: Deactivated successfully.
Oct 8 19:53:46.047191 systemd-logind[1443]: Session 19 logged out. Waiting for processes to exit.
Oct 8 19:53:46.052697 systemd[1]: Started sshd@19-10.0.0.24:22-10.0.0.1:51232.service - OpenSSH per-connection server daemon (10.0.0.1:51232).
Oct 8 19:53:46.054521 systemd-logind[1443]: Removed session 19.
Oct 8 19:53:46.085560 sshd[4136]: Accepted publickey for core from 10.0.0.1 port 51232 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:53:46.087552 sshd[4136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:53:46.091855 systemd-logind[1443]: New session 20 of user core.
Oct 8 19:53:46.101394 systemd[1]: Started session-20.scope - Session 20 of User core.
Oct 8 19:53:46.357561 sshd[4136]: pam_unix(sshd:session): session closed for user core
Oct 8 19:53:46.368232 systemd[1]: sshd@19-10.0.0.24:22-10.0.0.1:51232.service: Deactivated successfully.
Oct 8 19:53:46.370907 systemd[1]: session-20.scope: Deactivated successfully.
Oct 8 19:53:46.373773 systemd-logind[1443]: Session 20 logged out. Waiting for processes to exit.
Oct 8 19:53:46.383671 systemd[1]: Started sshd@20-10.0.0.24:22-10.0.0.1:51248.service - OpenSSH per-connection server daemon (10.0.0.1:51248).
Oct 8 19:53:46.384955 systemd-logind[1443]: Removed session 20.
Oct 8 19:53:46.413839 sshd[4148]: Accepted publickey for core from 10.0.0.1 port 51248 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:53:46.415840 sshd[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:53:46.420746 systemd-logind[1443]: New session 21 of user core.
Oct 8 19:53:46.432428 systemd[1]: Started session-21.scope - Session 21 of User core.
Oct 8 19:53:46.548350 sshd[4148]: pam_unix(sshd:session): session closed for user core
Oct 8 19:53:46.552245 systemd[1]: sshd@20-10.0.0.24:22-10.0.0.1:51248.service: Deactivated successfully.
Oct 8 19:53:46.554486 systemd[1]: session-21.scope: Deactivated successfully.
Oct 8 19:53:46.555207 systemd-logind[1443]: Session 21 logged out. Waiting for processes to exit.
Oct 8 19:53:46.556233 systemd-logind[1443]: Removed session 21.
Oct 8 19:53:50.071232 kubelet[2546]: E1008 19:53:50.071174 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:53:51.565915 systemd[1]: Started sshd@21-10.0.0.24:22-10.0.0.1:39944.service - OpenSSH per-connection server daemon (10.0.0.1:39944).
Oct 8 19:53:51.601690 sshd[4163]: Accepted publickey for core from 10.0.0.1 port 39944 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:53:51.603767 sshd[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:53:51.608428 systemd-logind[1443]: New session 22 of user core.
Oct 8 19:53:51.623516 systemd[1]: Started session-22.scope - Session 22 of User core.
Oct 8 19:53:51.742101 sshd[4163]: pam_unix(sshd:session): session closed for user core
Oct 8 19:53:51.745809 systemd[1]: sshd@21-10.0.0.24:22-10.0.0.1:39944.service: Deactivated successfully.
Oct 8 19:53:51.748601 systemd[1]: session-22.scope: Deactivated successfully.
Oct 8 19:53:51.750733 systemd-logind[1443]: Session 22 logged out. Waiting for processes to exit.
Oct 8 19:53:51.752059 systemd-logind[1443]: Removed session 22.
Oct 8 19:53:55.071093 kubelet[2546]: E1008 19:53:55.071023 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:53:56.755970 systemd[1]: Started sshd@22-10.0.0.24:22-10.0.0.1:39960.service - OpenSSH per-connection server daemon (10.0.0.1:39960).
Oct 8 19:53:56.791174 sshd[4180]: Accepted publickey for core from 10.0.0.1 port 39960 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:53:56.792858 sshd[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:53:56.797026 systemd-logind[1443]: New session 23 of user core.
Oct 8 19:53:56.803417 systemd[1]: Started session-23.scope - Session 23 of User core.
Oct 8 19:53:56.914878 sshd[4180]: pam_unix(sshd:session): session closed for user core
Oct 8 19:53:56.919842 systemd[1]: sshd@22-10.0.0.24:22-10.0.0.1:39960.service: Deactivated successfully.
Oct 8 19:53:56.922183 systemd[1]: session-23.scope: Deactivated successfully.
Oct 8 19:53:56.922847 systemd-logind[1443]: Session 23 logged out. Waiting for processes to exit.
Oct 8 19:53:56.923984 systemd-logind[1443]: Removed session 23.
Oct 8 19:54:01.926615 systemd[1]: Started sshd@23-10.0.0.24:22-10.0.0.1:54448.service - OpenSSH per-connection server daemon (10.0.0.1:54448).
Oct 8 19:54:01.959923 sshd[4195]: Accepted publickey for core from 10.0.0.1 port 54448 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:54:01.961880 sshd[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:54:01.966492 systemd-logind[1443]: New session 24 of user core.
Oct 8 19:54:01.983454 systemd[1]: Started session-24.scope - Session 24 of User core.
Oct 8 19:54:02.110288 sshd[4195]: pam_unix(sshd:session): session closed for user core
Oct 8 19:54:02.114787 systemd[1]: sshd@23-10.0.0.24:22-10.0.0.1:54448.service: Deactivated successfully.
Oct 8 19:54:02.117293 systemd[1]: session-24.scope: Deactivated successfully.
Oct 8 19:54:02.118040 systemd-logind[1443]: Session 24 logged out. Waiting for processes to exit.
Oct 8 19:54:02.119092 systemd-logind[1443]: Removed session 24.
Oct 8 19:54:03.071317 kubelet[2546]: E1008 19:54:03.071210 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:54:07.123425 systemd[1]: Started sshd@24-10.0.0.24:22-10.0.0.1:54460.service - OpenSSH per-connection server daemon (10.0.0.1:54460).
Oct 8 19:54:07.159783 sshd[4210]: Accepted publickey for core from 10.0.0.1 port 54460 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:54:07.161768 sshd[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:54:07.166993 systemd-logind[1443]: New session 25 of user core.
Oct 8 19:54:07.174610 systemd[1]: Started session-25.scope - Session 25 of User core.
Oct 8 19:54:07.331760 sshd[4210]: pam_unix(sshd:session): session closed for user core
Oct 8 19:54:07.335842 systemd[1]: sshd@24-10.0.0.24:22-10.0.0.1:54460.service: Deactivated successfully.
Oct 8 19:54:07.337954 systemd[1]: session-25.scope: Deactivated successfully.
Oct 8 19:54:07.338797 systemd-logind[1443]: Session 25 logged out. Waiting for processes to exit.
Oct 8 19:54:07.339778 systemd-logind[1443]: Removed session 25.
Oct 8 19:54:12.350107 systemd[1]: Started sshd@25-10.0.0.24:22-10.0.0.1:58486.service - OpenSSH per-connection server daemon (10.0.0.1:58486).
Oct 8 19:54:12.385420 sshd[4224]: Accepted publickey for core from 10.0.0.1 port 58486 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:54:12.387636 sshd[4224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:54:12.391983 systemd-logind[1443]: New session 26 of user core.
Oct 8 19:54:12.400416 systemd[1]: Started session-26.scope - Session 26 of User core.
Oct 8 19:54:12.517424 sshd[4224]: pam_unix(sshd:session): session closed for user core
Oct 8 19:54:12.530235 systemd[1]: sshd@25-10.0.0.24:22-10.0.0.1:58486.service: Deactivated successfully.
Oct 8 19:54:12.533112 systemd[1]: session-26.scope: Deactivated successfully.
Oct 8 19:54:12.535340 systemd-logind[1443]: Session 26 logged out. Waiting for processes to exit.
Oct 8 19:54:12.547776 systemd[1]: Started sshd@26-10.0.0.24:22-10.0.0.1:58496.service - OpenSSH per-connection server daemon (10.0.0.1:58496).
Oct 8 19:54:12.548976 systemd-logind[1443]: Removed session 26.
Oct 8 19:54:12.577696 sshd[4240]: Accepted publickey for core from 10.0.0.1 port 58496 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:54:12.579772 sshd[4240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:54:12.584961 systemd-logind[1443]: New session 27 of user core.
Oct 8 19:54:12.595631 systemd[1]: Started session-27.scope - Session 27 of User core.
Oct 8 19:54:14.072685 kubelet[2546]: E1008 19:54:14.072425 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:54:14.510647 containerd[1462]: time="2024-10-08T19:54:14.510563560Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 8 19:54:14.520337 containerd[1462]: time="2024-10-08T19:54:14.520290297Z" level=info msg="StopContainer for \"1733b3125e457f4f7d5b45949ac4d604864b3f36e2b9ef033643a0ab4c4abae4\" with timeout 2 (s)"
Oct 8 19:54:14.520602 containerd[1462]: time="2024-10-08T19:54:14.520562621Z" level=info msg="Stop container \"1733b3125e457f4f7d5b45949ac4d604864b3f36e2b9ef033643a0ab4c4abae4\" with signal terminated"
Oct 8 19:54:14.528690 systemd-networkd[1387]: lxc_health: Link DOWN
Oct 8 19:54:14.528698 systemd-networkd[1387]: lxc_health: Lost carrier
Oct 8 19:54:14.558467 systemd[1]: cri-containerd-1733b3125e457f4f7d5b45949ac4d604864b3f36e2b9ef033643a0ab4c4abae4.scope: Deactivated successfully.
Oct 8 19:54:14.558859 systemd[1]: cri-containerd-1733b3125e457f4f7d5b45949ac4d604864b3f36e2b9ef033643a0ab4c4abae4.scope: Consumed 7.727s CPU time.
Oct 8 19:54:14.560551 containerd[1462]: time="2024-10-08T19:54:14.560099438Z" level=info msg="StopContainer for \"62f59201fe2f3fccd198cc8388d1fabdee303d151a215009ed98dd2b4babc406\" with timeout 30 (s)"
Oct 8 19:54:14.560660 containerd[1462]: time="2024-10-08T19:54:14.560626142Z" level=info msg="Stop container \"62f59201fe2f3fccd198cc8388d1fabdee303d151a215009ed98dd2b4babc406\" with signal terminated"
Oct 8 19:54:14.577084 systemd[1]: cri-containerd-62f59201fe2f3fccd198cc8388d1fabdee303d151a215009ed98dd2b4babc406.scope: Deactivated successfully.
Oct 8 19:54:14.585641 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1733b3125e457f4f7d5b45949ac4d604864b3f36e2b9ef033643a0ab4c4abae4-rootfs.mount: Deactivated successfully.
Oct 8 19:54:14.601957 containerd[1462]: time="2024-10-08T19:54:14.601738223Z" level=info msg="shim disconnected" id=1733b3125e457f4f7d5b45949ac4d604864b3f36e2b9ef033643a0ab4c4abae4 namespace=k8s.io
Oct 8 19:54:14.601957 containerd[1462]: time="2024-10-08T19:54:14.601823103Z" level=warning msg="cleaning up after shim disconnected" id=1733b3125e457f4f7d5b45949ac4d604864b3f36e2b9ef033643a0ab4c4abae4 namespace=k8s.io
Oct 8 19:54:14.601957 containerd[1462]: time="2024-10-08T19:54:14.601836428Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:54:14.605122 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-62f59201fe2f3fccd198cc8388d1fabdee303d151a215009ed98dd2b4babc406-rootfs.mount: Deactivated successfully.
Oct 8 19:54:14.610597 containerd[1462]: time="2024-10-08T19:54:14.610501652Z" level=info msg="shim disconnected" id=62f59201fe2f3fccd198cc8388d1fabdee303d151a215009ed98dd2b4babc406 namespace=k8s.io
Oct 8 19:54:14.610597 containerd[1462]: time="2024-10-08T19:54:14.610577786Z" level=warning msg="cleaning up after shim disconnected" id=62f59201fe2f3fccd198cc8388d1fabdee303d151a215009ed98dd2b4babc406 namespace=k8s.io
Oct 8 19:54:14.610597 containerd[1462]: time="2024-10-08T19:54:14.610590570Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:54:14.623472 containerd[1462]: time="2024-10-08T19:54:14.623346104Z" level=info msg="StopContainer for \"1733b3125e457f4f7d5b45949ac4d604864b3f36e2b9ef033643a0ab4c4abae4\" returns successfully"
Oct 8 19:54:14.628125 containerd[1462]: time="2024-10-08T19:54:14.627889238Z" level=info msg="StopPodSandbox for \"68a44df5d4c3537d64922eda3f0c8c7b9e6eb2499cd79f551524c3f08f2bf62c\""
Oct 8 19:54:14.628125 containerd[1462]: time="2024-10-08T19:54:14.627960813Z" level=info msg="Container to stop \"48ecd504608a641b0eccf3b2d8b1255382f64e402bed60b9f4fcf14bb35b6bed\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 8 19:54:14.628125 containerd[1462]: time="2024-10-08T19:54:14.627980540Z" level=info msg="Container to stop \"53af3ff083272c30b90ae5941ba2c66dd1d56d34f86cd1f99b28df7ce3174f31\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 8 19:54:14.628125 containerd[1462]: time="2024-10-08T19:54:14.627992583Z" level=info msg="Container to stop \"1733b3125e457f4f7d5b45949ac4d604864b3f36e2b9ef033643a0ab4c4abae4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 8 19:54:14.628125 containerd[1462]: time="2024-10-08T19:54:14.628005206Z" level=info msg="Container to stop \"4385dcbd76849a04d3fc12f60a7bfd1f8237bd0335b3cb015c96bb80a8fb1077\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 8 19:54:14.628125 containerd[1462]: time="2024-10-08T19:54:14.628017149Z" level=info msg="Container to stop \"d462f466e25eb0f31a943796d26b4ee67de27e8d8dbaec61d78f338f0c343d99\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 8 19:54:14.630540 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-68a44df5d4c3537d64922eda3f0c8c7b9e6eb2499cd79f551524c3f08f2bf62c-shm.mount: Deactivated successfully.
Oct 8 19:54:14.635200 containerd[1462]: time="2024-10-08T19:54:14.635128852Z" level=info msg="StopContainer for \"62f59201fe2f3fccd198cc8388d1fabdee303d151a215009ed98dd2b4babc406\" returns successfully"
Oct 8 19:54:14.635993 containerd[1462]: time="2024-10-08T19:54:14.635963506Z" level=info msg="StopPodSandbox for \"7609d18180aa4ed194977ef55738a5e2c3ae1018c976baf3eeb7145fca7fb20b\""
Oct 8 19:54:14.637539 systemd[1]: cri-containerd-68a44df5d4c3537d64922eda3f0c8c7b9e6eb2499cd79f551524c3f08f2bf62c.scope: Deactivated successfully.
Oct 8 19:54:14.637781 containerd[1462]: time="2024-10-08T19:54:14.637749207Z" level=info msg="Container to stop \"62f59201fe2f3fccd198cc8388d1fabdee303d151a215009ed98dd2b4babc406\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 8 19:54:14.640825 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7609d18180aa4ed194977ef55738a5e2c3ae1018c976baf3eeb7145fca7fb20b-shm.mount: Deactivated successfully.
Oct 8 19:54:14.654929 systemd[1]: cri-containerd-7609d18180aa4ed194977ef55738a5e2c3ae1018c976baf3eeb7145fca7fb20b.scope: Deactivated successfully.
Oct 8 19:54:14.670506 containerd[1462]: time="2024-10-08T19:54:14.670418574Z" level=info msg="shim disconnected" id=68a44df5d4c3537d64922eda3f0c8c7b9e6eb2499cd79f551524c3f08f2bf62c namespace=k8s.io
Oct 8 19:54:14.670506 containerd[1462]: time="2024-10-08T19:54:14.670484858Z" level=warning msg="cleaning up after shim disconnected" id=68a44df5d4c3537d64922eda3f0c8c7b9e6eb2499cd79f551524c3f08f2bf62c namespace=k8s.io
Oct 8 19:54:14.670506 containerd[1462]: time="2024-10-08T19:54:14.670494066Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:54:14.686745 containerd[1462]: time="2024-10-08T19:54:14.686650438Z" level=info msg="shim disconnected" id=7609d18180aa4ed194977ef55738a5e2c3ae1018c976baf3eeb7145fca7fb20b namespace=k8s.io
Oct 8 19:54:14.686745 containerd[1462]: time="2024-10-08T19:54:14.686752560Z" level=warning msg="cleaning up after shim disconnected" id=7609d18180aa4ed194977ef55738a5e2c3ae1018c976baf3eeb7145fca7fb20b namespace=k8s.io
Oct 8 19:54:14.687057 containerd[1462]: time="2024-10-08T19:54:14.686762409Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:54:14.700696 containerd[1462]: time="2024-10-08T19:54:14.700627857Z" level=info msg="TearDown network for sandbox \"68a44df5d4c3537d64922eda3f0c8c7b9e6eb2499cd79f551524c3f08f2bf62c\" successfully"
Oct 8 19:54:14.700696 containerd[1462]: time="2024-10-08T19:54:14.700675367Z" level=info msg="StopPodSandbox for \"68a44df5d4c3537d64922eda3f0c8c7b9e6eb2499cd79f551524c3f08f2bf62c\" returns successfully"
Oct 8 19:54:14.702648 containerd[1462]: time="2024-10-08T19:54:14.702612132Z" level=info msg="TearDown network for sandbox \"7609d18180aa4ed194977ef55738a5e2c3ae1018c976baf3eeb7145fca7fb20b\" successfully"
Oct 8 19:54:14.702648 containerd[1462]: time="2024-10-08T19:54:14.702637460Z" level=info msg="StopPodSandbox for \"7609d18180aa4ed194977ef55738a5e2c3ae1018c976baf3eeb7145fca7fb20b\" returns successfully"
Oct 8 19:54:14.898840 kubelet[2546]: I1008 19:54:14.898088 2546 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8d0164cf-8490-40f3-9f63-c56d4d161565-host-proc-sys-kernel\") pod \"8d0164cf-8490-40f3-9f63-c56d4d161565\" (UID: \"8d0164cf-8490-40f3-9f63-c56d4d161565\") "
Oct 8 19:54:14.898840 kubelet[2546]: I1008 19:54:14.898149 2546 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8d0164cf-8490-40f3-9f63-c56d4d161565-cilium-config-path\") pod \"8d0164cf-8490-40f3-9f63-c56d4d161565\" (UID: \"8d0164cf-8490-40f3-9f63-c56d4d161565\") "
Oct 8 19:54:14.898840 kubelet[2546]: I1008 19:54:14.898172 2546 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8d0164cf-8490-40f3-9f63-c56d4d161565-cilium-cgroup\") pod \"8d0164cf-8490-40f3-9f63-c56d4d161565\" (UID: \"8d0164cf-8490-40f3-9f63-c56d4d161565\") "
Oct 8 19:54:14.898840 kubelet[2546]: I1008 19:54:14.898206 2546 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8d0164cf-8490-40f3-9f63-c56d4d161565-clustermesh-secrets\") pod \"8d0164cf-8490-40f3-9f63-c56d4d161565\" (UID: \"8d0164cf-8490-40f3-9f63-c56d4d161565\") "
Oct 8 19:54:14.898840 kubelet[2546]: I1008 19:54:14.898227 2546 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8d0164cf-8490-40f3-9f63-c56d4d161565-host-proc-sys-net\") pod \"8d0164cf-8490-40f3-9f63-c56d4d161565\" (UID: \"8d0164cf-8490-40f3-9f63-c56d4d161565\") "
Oct 8 19:54:14.898840 kubelet[2546]: I1008 19:54:14.898245 2546 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8d0164cf-8490-40f3-9f63-c56d4d161565-cilium-run\") pod \"8d0164cf-8490-40f3-9f63-c56d4d161565\" (UID: \"8d0164cf-8490-40f3-9f63-c56d4d161565\") "
Oct 8 19:54:14.899206 kubelet[2546]: I1008 19:54:14.898250 2546 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d0164cf-8490-40f3-9f63-c56d4d161565-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8d0164cf-8490-40f3-9f63-c56d4d161565" (UID: "8d0164cf-8490-40f3-9f63-c56d4d161565"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 8 19:54:14.899206 kubelet[2546]: I1008 19:54:14.898294 2546 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8d0164cf-8490-40f3-9f63-c56d4d161565-hostproc\") pod \"8d0164cf-8490-40f3-9f63-c56d4d161565\" (UID: \"8d0164cf-8490-40f3-9f63-c56d4d161565\") "
Oct 8 19:54:14.899206 kubelet[2546]: I1008 19:54:14.898341 2546 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d0164cf-8490-40f3-9f63-c56d4d161565-hostproc" (OuterVolumeSpecName: "hostproc") pod "8d0164cf-8490-40f3-9f63-c56d4d161565" (UID: "8d0164cf-8490-40f3-9f63-c56d4d161565"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 8 19:54:14.899206 kubelet[2546]: I1008 19:54:14.898370 2546 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjs2g\" (UniqueName: \"kubernetes.io/projected/a8cd8253-da9c-4cca-b85f-0457e4cc678e-kube-api-access-gjs2g\") pod \"a8cd8253-da9c-4cca-b85f-0457e4cc678e\" (UID: \"a8cd8253-da9c-4cca-b85f-0457e4cc678e\") "
Oct 8 19:54:14.899206 kubelet[2546]: I1008 19:54:14.898393 2546 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8d0164cf-8490-40f3-9f63-c56d4d161565-bpf-maps\") pod \"8d0164cf-8490-40f3-9f63-c56d4d161565\" (UID: \"8d0164cf-8490-40f3-9f63-c56d4d161565\") "
Oct 8 19:54:14.899206 kubelet[2546]: I1008 19:54:14.898409 2546 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8d0164cf-8490-40f3-9f63-c56d4d161565-cni-path\") pod \"8d0164cf-8490-40f3-9f63-c56d4d161565\" (UID: \"8d0164cf-8490-40f3-9f63-c56d4d161565\") "
Oct 8 19:54:14.899437 kubelet[2546]: I1008 19:54:14.898422 2546 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8d0164cf-8490-40f3-9f63-c56d4d161565-etc-cni-netd\") pod \"8d0164cf-8490-40f3-9f63-c56d4d161565\" (UID: \"8d0164cf-8490-40f3-9f63-c56d4d161565\") "
Oct 8 19:54:14.899437 kubelet[2546]: I1008 19:54:14.898439 2546 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8d0164cf-8490-40f3-9f63-c56d4d161565-hubble-tls\") pod \"8d0164cf-8490-40f3-9f63-c56d4d161565\" (UID: \"8d0164cf-8490-40f3-9f63-c56d4d161565\") "
Oct 8 19:54:14.899437 kubelet[2546]: I1008 19:54:14.898456 2546 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a8cd8253-da9c-4cca-b85f-0457e4cc678e-cilium-config-path\") pod \"a8cd8253-da9c-4cca-b85f-0457e4cc678e\" (UID: \"a8cd8253-da9c-4cca-b85f-0457e4cc678e\") "
Oct 8 19:54:14.899437 kubelet[2546]: I1008 19:54:14.898470 2546 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kt5jf\" (UniqueName: \"kubernetes.io/projected/8d0164cf-8490-40f3-9f63-c56d4d161565-kube-api-access-kt5jf\") pod \"8d0164cf-8490-40f3-9f63-c56d4d161565\" (UID: \"8d0164cf-8490-40f3-9f63-c56d4d161565\") "
Oct 8 19:54:14.899437 kubelet[2546]: I1008 19:54:14.898483 2546 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8d0164cf-8490-40f3-9f63-c56d4d161565-lib-modules\") pod \"8d0164cf-8490-40f3-9f63-c56d4d161565\" (UID: \"8d0164cf-8490-40f3-9f63-c56d4d161565\") "
Oct 8 19:54:14.899437 kubelet[2546]: I1008 19:54:14.898496 2546 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8d0164cf-8490-40f3-9f63-c56d4d161565-xtables-lock\") pod \"8d0164cf-8490-40f3-9f63-c56d4d161565\" (UID: \"8d0164cf-8490-40f3-9f63-c56d4d161565\") "
Oct 8 19:54:14.899647 kubelet[2546]: I1008 19:54:14.898535 2546 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8d0164cf-8490-40f3-9f63-c56d4d161565-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Oct 8 19:54:14.899647 kubelet[2546]: I1008 19:54:14.898545 2546 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8d0164cf-8490-40f3-9f63-c56d4d161565-hostproc\") on node \"localhost\" DevicePath \"\""
Oct 8 19:54:14.899647 kubelet[2546]: I1008 19:54:14.898577 2546 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d0164cf-8490-40f3-9f63-c56d4d161565-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8d0164cf-8490-40f3-9f63-c56d4d161565" (UID: "8d0164cf-8490-40f3-9f63-c56d4d161565"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 8 19:54:14.900545 kubelet[2546]: I1008 19:54:14.900390 2546 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d0164cf-8490-40f3-9f63-c56d4d161565-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8d0164cf-8490-40f3-9f63-c56d4d161565" (UID: "8d0164cf-8490-40f3-9f63-c56d4d161565"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 8 19:54:14.900545 kubelet[2546]: I1008 19:54:14.900447 2546 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d0164cf-8490-40f3-9f63-c56d4d161565-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8d0164cf-8490-40f3-9f63-c56d4d161565" (UID: "8d0164cf-8490-40f3-9f63-c56d4d161565"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 8 19:54:14.900545 kubelet[2546]: I1008 19:54:14.900467 2546 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d0164cf-8490-40f3-9f63-c56d4d161565-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8d0164cf-8490-40f3-9f63-c56d4d161565" (UID: "8d0164cf-8490-40f3-9f63-c56d4d161565"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 8 19:54:14.903343 kubelet[2546]: I1008 19:54:14.902321 2546 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d0164cf-8490-40f3-9f63-c56d4d161565-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8d0164cf-8490-40f3-9f63-c56d4d161565" (UID: "8d0164cf-8490-40f3-9f63-c56d4d161565"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 8 19:54:14.904239 kubelet[2546]: I1008 19:54:14.904195 2546 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d0164cf-8490-40f3-9f63-c56d4d161565-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8d0164cf-8490-40f3-9f63-c56d4d161565" (UID: "8d0164cf-8490-40f3-9f63-c56d4d161565"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 8 19:54:14.904331 kubelet[2546]: I1008 19:54:14.904299 2546 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d0164cf-8490-40f3-9f63-c56d4d161565-cni-path" (OuterVolumeSpecName: "cni-path") pod "8d0164cf-8490-40f3-9f63-c56d4d161565" (UID: "8d0164cf-8490-40f3-9f63-c56d4d161565"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 8 19:54:14.904331 kubelet[2546]: I1008 19:54:14.904318 2546 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d0164cf-8490-40f3-9f63-c56d4d161565-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8d0164cf-8490-40f3-9f63-c56d4d161565" (UID: "8d0164cf-8490-40f3-9f63-c56d4d161565"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 8 19:54:14.904501 kubelet[2546]: I1008 19:54:14.904336 2546 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d0164cf-8490-40f3-9f63-c56d4d161565-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8d0164cf-8490-40f3-9f63-c56d4d161565" (UID: "8d0164cf-8490-40f3-9f63-c56d4d161565"). InnerVolumeSpecName "lib-modules".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 19:54:14.904501 kubelet[2546]: I1008 19:54:14.904351 2546 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d0164cf-8490-40f3-9f63-c56d4d161565-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8d0164cf-8490-40f3-9f63-c56d4d161565" (UID: "8d0164cf-8490-40f3-9f63-c56d4d161565"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 19:54:14.905960 kubelet[2546]: I1008 19:54:14.905895 2546 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8cd8253-da9c-4cca-b85f-0457e4cc678e-kube-api-access-gjs2g" (OuterVolumeSpecName: "kube-api-access-gjs2g") pod "a8cd8253-da9c-4cca-b85f-0457e4cc678e" (UID: "a8cd8253-da9c-4cca-b85f-0457e4cc678e"). InnerVolumeSpecName "kube-api-access-gjs2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 8 19:54:14.906589 kubelet[2546]: I1008 19:54:14.906551 2546 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d0164cf-8490-40f3-9f63-c56d4d161565-kube-api-access-kt5jf" (OuterVolumeSpecName: "kube-api-access-kt5jf") pod "8d0164cf-8490-40f3-9f63-c56d4d161565" (UID: "8d0164cf-8490-40f3-9f63-c56d4d161565"). InnerVolumeSpecName "kube-api-access-kt5jf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 8 19:54:14.906660 kubelet[2546]: I1008 19:54:14.906600 2546 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d0164cf-8490-40f3-9f63-c56d4d161565-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8d0164cf-8490-40f3-9f63-c56d4d161565" (UID: "8d0164cf-8490-40f3-9f63-c56d4d161565"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 8 19:54:14.907807 kubelet[2546]: I1008 19:54:14.907779 2546 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8cd8253-da9c-4cca-b85f-0457e4cc678e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a8cd8253-da9c-4cca-b85f-0457e4cc678e" (UID: "a8cd8253-da9c-4cca-b85f-0457e4cc678e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 8 19:54:14.999019 kubelet[2546]: I1008 19:54:14.998953 2546 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8d0164cf-8490-40f3-9f63-c56d4d161565-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Oct 8 19:54:14.999019 kubelet[2546]: I1008 19:54:14.999007 2546 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8d0164cf-8490-40f3-9f63-c56d4d161565-cilium-run\") on node \"localhost\" DevicePath \"\"" Oct 8 19:54:14.999019 kubelet[2546]: I1008 19:54:14.999020 2546 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-gjs2g\" (UniqueName: \"kubernetes.io/projected/a8cd8253-da9c-4cca-b85f-0457e4cc678e-kube-api-access-gjs2g\") on node \"localhost\" DevicePath \"\"" Oct 8 19:54:14.999019 kubelet[2546]: I1008 19:54:14.999035 2546 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8d0164cf-8490-40f3-9f63-c56d4d161565-bpf-maps\") on node \"localhost\" DevicePath \"\"" Oct 8 19:54:14.999019 kubelet[2546]: I1008 19:54:14.999055 2546 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8d0164cf-8490-40f3-9f63-c56d4d161565-cni-path\") on node \"localhost\" DevicePath \"\"" Oct 8 19:54:14.999426 kubelet[2546]: I1008 19:54:14.999068 2546 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/8d0164cf-8490-40f3-9f63-c56d4d161565-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Oct 8 19:54:14.999426 kubelet[2546]: I1008 19:54:14.999078 2546 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8d0164cf-8490-40f3-9f63-c56d4d161565-hubble-tls\") on node \"localhost\" DevicePath \"\"" Oct 8 19:54:14.999426 kubelet[2546]: I1008 19:54:14.999090 2546 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a8cd8253-da9c-4cca-b85f-0457e4cc678e-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Oct 8 19:54:14.999426 kubelet[2546]: I1008 19:54:14.999101 2546 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-kt5jf\" (UniqueName: \"kubernetes.io/projected/8d0164cf-8490-40f3-9f63-c56d4d161565-kube-api-access-kt5jf\") on node \"localhost\" DevicePath \"\"" Oct 8 19:54:14.999426 kubelet[2546]: I1008 19:54:14.999112 2546 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8d0164cf-8490-40f3-9f63-c56d4d161565-lib-modules\") on node \"localhost\" DevicePath \"\"" Oct 8 19:54:14.999426 kubelet[2546]: I1008 19:54:14.999123 2546 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8d0164cf-8490-40f3-9f63-c56d4d161565-xtables-lock\") on node \"localhost\" DevicePath \"\"" Oct 8 19:54:14.999426 kubelet[2546]: I1008 19:54:14.999134 2546 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8d0164cf-8490-40f3-9f63-c56d4d161565-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Oct 8 19:54:14.999426 kubelet[2546]: I1008 19:54:14.999141 2546 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8d0164cf-8490-40f3-9f63-c56d4d161565-cilium-cgroup\") on node 
\"localhost\" DevicePath \"\"" Oct 8 19:54:14.999730 kubelet[2546]: I1008 19:54:14.999149 2546 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8d0164cf-8490-40f3-9f63-c56d4d161565-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Oct 8 19:54:15.347424 kubelet[2546]: I1008 19:54:15.347378 2546 scope.go:117] "RemoveContainer" containerID="62f59201fe2f3fccd198cc8388d1fabdee303d151a215009ed98dd2b4babc406" Oct 8 19:54:15.351061 containerd[1462]: time="2024-10-08T19:54:15.351023846Z" level=info msg="RemoveContainer for \"62f59201fe2f3fccd198cc8388d1fabdee303d151a215009ed98dd2b4babc406\"" Oct 8 19:54:15.355767 systemd[1]: Removed slice kubepods-besteffort-poda8cd8253_da9c_4cca_b85f_0457e4cc678e.slice - libcontainer container kubepods-besteffort-poda8cd8253_da9c_4cca_b85f_0457e4cc678e.slice. Oct 8 19:54:15.358754 systemd[1]: Removed slice kubepods-burstable-pod8d0164cf_8490_40f3_9f63_c56d4d161565.slice - libcontainer container kubepods-burstable-pod8d0164cf_8490_40f3_9f63_c56d4d161565.slice. Oct 8 19:54:15.358990 systemd[1]: kubepods-burstable-pod8d0164cf_8490_40f3_9f63_c56d4d161565.slice: Consumed 7.856s CPU time. 
Oct 8 19:54:15.429539 containerd[1462]: time="2024-10-08T19:54:15.429479223Z" level=info msg="RemoveContainer for \"62f59201fe2f3fccd198cc8388d1fabdee303d151a215009ed98dd2b4babc406\" returns successfully" Oct 8 19:54:15.429959 kubelet[2546]: I1008 19:54:15.429910 2546 scope.go:117] "RemoveContainer" containerID="62f59201fe2f3fccd198cc8388d1fabdee303d151a215009ed98dd2b4babc406" Oct 8 19:54:15.434285 containerd[1462]: time="2024-10-08T19:54:15.434197386Z" level=error msg="ContainerStatus for \"62f59201fe2f3fccd198cc8388d1fabdee303d151a215009ed98dd2b4babc406\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"62f59201fe2f3fccd198cc8388d1fabdee303d151a215009ed98dd2b4babc406\": not found" Oct 8 19:54:15.443695 kubelet[2546]: E1008 19:54:15.443646 2546 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"62f59201fe2f3fccd198cc8388d1fabdee303d151a215009ed98dd2b4babc406\": not found" containerID="62f59201fe2f3fccd198cc8388d1fabdee303d151a215009ed98dd2b4babc406" Oct 8 19:54:15.443805 kubelet[2546]: I1008 19:54:15.443697 2546 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"62f59201fe2f3fccd198cc8388d1fabdee303d151a215009ed98dd2b4babc406"} err="failed to get container status \"62f59201fe2f3fccd198cc8388d1fabdee303d151a215009ed98dd2b4babc406\": rpc error: code = NotFound desc = an error occurred when try to find container \"62f59201fe2f3fccd198cc8388d1fabdee303d151a215009ed98dd2b4babc406\": not found" Oct 8 19:54:15.443805 kubelet[2546]: I1008 19:54:15.443786 2546 scope.go:117] "RemoveContainer" containerID="1733b3125e457f4f7d5b45949ac4d604864b3f36e2b9ef033643a0ab4c4abae4" Oct 8 19:54:15.445167 containerd[1462]: time="2024-10-08T19:54:15.445122094Z" level=info msg="RemoveContainer for \"1733b3125e457f4f7d5b45949ac4d604864b3f36e2b9ef033643a0ab4c4abae4\"" Oct 8 19:54:15.484901 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-7609d18180aa4ed194977ef55738a5e2c3ae1018c976baf3eeb7145fca7fb20b-rootfs.mount: Deactivated successfully. Oct 8 19:54:15.485040 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68a44df5d4c3537d64922eda3f0c8c7b9e6eb2499cd79f551524c3f08f2bf62c-rootfs.mount: Deactivated successfully. Oct 8 19:54:15.485147 systemd[1]: var-lib-kubelet-pods-a8cd8253\x2dda9c\x2d4cca\x2db85f\x2d0457e4cc678e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgjs2g.mount: Deactivated successfully. Oct 8 19:54:15.485257 systemd[1]: var-lib-kubelet-pods-8d0164cf\x2d8490\x2d40f3\x2d9f63\x2dc56d4d161565-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkt5jf.mount: Deactivated successfully. Oct 8 19:54:15.485397 systemd[1]: var-lib-kubelet-pods-8d0164cf\x2d8490\x2d40f3\x2d9f63\x2dc56d4d161565-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 8 19:54:15.485519 systemd[1]: var-lib-kubelet-pods-8d0164cf\x2d8490\x2d40f3\x2d9f63\x2dc56d4d161565-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Oct 8 19:54:15.607848 containerd[1462]: time="2024-10-08T19:54:15.607305769Z" level=info msg="RemoveContainer for \"1733b3125e457f4f7d5b45949ac4d604864b3f36e2b9ef033643a0ab4c4abae4\" returns successfully" Oct 8 19:54:15.609313 kubelet[2546]: I1008 19:54:15.607598 2546 scope.go:117] "RemoveContainer" containerID="53af3ff083272c30b90ae5941ba2c66dd1d56d34f86cd1f99b28df7ce3174f31" Oct 8 19:54:15.609637 containerd[1462]: time="2024-10-08T19:54:15.609613263Z" level=info msg="RemoveContainer for \"53af3ff083272c30b90ae5941ba2c66dd1d56d34f86cd1f99b28df7ce3174f31\"" Oct 8 19:54:15.687039 containerd[1462]: time="2024-10-08T19:54:15.686963815Z" level=info msg="RemoveContainer for \"53af3ff083272c30b90ae5941ba2c66dd1d56d34f86cd1f99b28df7ce3174f31\" returns successfully" Oct 8 19:54:15.687343 kubelet[2546]: I1008 19:54:15.687310 2546 scope.go:117] "RemoveContainer" containerID="d462f466e25eb0f31a943796d26b4ee67de27e8d8dbaec61d78f338f0c343d99" Oct 8 19:54:15.688351 containerd[1462]: time="2024-10-08T19:54:15.688329352Z" level=info msg="RemoveContainer for \"d462f466e25eb0f31a943796d26b4ee67de27e8d8dbaec61d78f338f0c343d99\"" Oct 8 19:54:15.719148 containerd[1462]: time="2024-10-08T19:54:15.719032844Z" level=info msg="RemoveContainer for \"d462f466e25eb0f31a943796d26b4ee67de27e8d8dbaec61d78f338f0c343d99\" returns successfully" Oct 8 19:54:15.719385 kubelet[2546]: I1008 19:54:15.719359 2546 scope.go:117] "RemoveContainer" containerID="4385dcbd76849a04d3fc12f60a7bfd1f8237bd0335b3cb015c96bb80a8fb1077" Oct 8 19:54:15.720460 containerd[1462]: time="2024-10-08T19:54:15.720422216Z" level=info msg="RemoveContainer for \"4385dcbd76849a04d3fc12f60a7bfd1f8237bd0335b3cb015c96bb80a8fb1077\"" Oct 8 19:54:15.775064 containerd[1462]: time="2024-10-08T19:54:15.774991026Z" level=info msg="RemoveContainer for \"4385dcbd76849a04d3fc12f60a7bfd1f8237bd0335b3cb015c96bb80a8fb1077\" returns successfully" Oct 8 19:54:15.775383 kubelet[2546]: I1008 19:54:15.775353 2546 scope.go:117] "RemoveContainer" 
containerID="48ecd504608a641b0eccf3b2d8b1255382f64e402bed60b9f4fcf14bb35b6bed" Oct 8 19:54:15.776370 containerd[1462]: time="2024-10-08T19:54:15.776337968Z" level=info msg="RemoveContainer for \"48ecd504608a641b0eccf3b2d8b1255382f64e402bed60b9f4fcf14bb35b6bed\"" Oct 8 19:54:15.970997 containerd[1462]: time="2024-10-08T19:54:15.970847526Z" level=info msg="RemoveContainer for \"48ecd504608a641b0eccf3b2d8b1255382f64e402bed60b9f4fcf14bb35b6bed\" returns successfully" Oct 8 19:54:15.971226 kubelet[2546]: I1008 19:54:15.971190 2546 scope.go:117] "RemoveContainer" containerID="1733b3125e457f4f7d5b45949ac4d604864b3f36e2b9ef033643a0ab4c4abae4" Oct 8 19:54:15.971622 containerd[1462]: time="2024-10-08T19:54:15.971517000Z" level=error msg="ContainerStatus for \"1733b3125e457f4f7d5b45949ac4d604864b3f36e2b9ef033643a0ab4c4abae4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1733b3125e457f4f7d5b45949ac4d604864b3f36e2b9ef033643a0ab4c4abae4\": not found" Oct 8 19:54:15.971794 kubelet[2546]: E1008 19:54:15.971760 2546 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1733b3125e457f4f7d5b45949ac4d604864b3f36e2b9ef033643a0ab4c4abae4\": not found" containerID="1733b3125e457f4f7d5b45949ac4d604864b3f36e2b9ef033643a0ab4c4abae4" Oct 8 19:54:15.971851 kubelet[2546]: I1008 19:54:15.971800 2546 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1733b3125e457f4f7d5b45949ac4d604864b3f36e2b9ef033643a0ab4c4abae4"} err="failed to get container status \"1733b3125e457f4f7d5b45949ac4d604864b3f36e2b9ef033643a0ab4c4abae4\": rpc error: code = NotFound desc = an error occurred when try to find container \"1733b3125e457f4f7d5b45949ac4d604864b3f36e2b9ef033643a0ab4c4abae4\": not found" Oct 8 19:54:15.971851 kubelet[2546]: I1008 19:54:15.971836 2546 scope.go:117] "RemoveContainer" 
containerID="53af3ff083272c30b90ae5941ba2c66dd1d56d34f86cd1f99b28df7ce3174f31" Oct 8 19:54:15.972112 containerd[1462]: time="2024-10-08T19:54:15.972078971Z" level=error msg="ContainerStatus for \"53af3ff083272c30b90ae5941ba2c66dd1d56d34f86cd1f99b28df7ce3174f31\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"53af3ff083272c30b90ae5941ba2c66dd1d56d34f86cd1f99b28df7ce3174f31\": not found" Oct 8 19:54:15.972248 kubelet[2546]: E1008 19:54:15.972213 2546 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"53af3ff083272c30b90ae5941ba2c66dd1d56d34f86cd1f99b28df7ce3174f31\": not found" containerID="53af3ff083272c30b90ae5941ba2c66dd1d56d34f86cd1f99b28df7ce3174f31" Oct 8 19:54:15.972334 kubelet[2546]: I1008 19:54:15.972245 2546 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"53af3ff083272c30b90ae5941ba2c66dd1d56d34f86cd1f99b28df7ce3174f31"} err="failed to get container status \"53af3ff083272c30b90ae5941ba2c66dd1d56d34f86cd1f99b28df7ce3174f31\": rpc error: code = NotFound desc = an error occurred when try to find container \"53af3ff083272c30b90ae5941ba2c66dd1d56d34f86cd1f99b28df7ce3174f31\": not found" Oct 8 19:54:15.972334 kubelet[2546]: I1008 19:54:15.972286 2546 scope.go:117] "RemoveContainer" containerID="d462f466e25eb0f31a943796d26b4ee67de27e8d8dbaec61d78f338f0c343d99" Oct 8 19:54:15.972690 containerd[1462]: time="2024-10-08T19:54:15.972619991Z" level=error msg="ContainerStatus for \"d462f466e25eb0f31a943796d26b4ee67de27e8d8dbaec61d78f338f0c343d99\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d462f466e25eb0f31a943796d26b4ee67de27e8d8dbaec61d78f338f0c343d99\": not found" Oct 8 19:54:15.972832 kubelet[2546]: E1008 19:54:15.972806 2546 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"d462f466e25eb0f31a943796d26b4ee67de27e8d8dbaec61d78f338f0c343d99\": not found" containerID="d462f466e25eb0f31a943796d26b4ee67de27e8d8dbaec61d78f338f0c343d99" Oct 8 19:54:15.972898 kubelet[2546]: I1008 19:54:15.972836 2546 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d462f466e25eb0f31a943796d26b4ee67de27e8d8dbaec61d78f338f0c343d99"} err="failed to get container status \"d462f466e25eb0f31a943796d26b4ee67de27e8d8dbaec61d78f338f0c343d99\": rpc error: code = NotFound desc = an error occurred when try to find container \"d462f466e25eb0f31a943796d26b4ee67de27e8d8dbaec61d78f338f0c343d99\": not found" Oct 8 19:54:15.972898 kubelet[2546]: I1008 19:54:15.972854 2546 scope.go:117] "RemoveContainer" containerID="4385dcbd76849a04d3fc12f60a7bfd1f8237bd0335b3cb015c96bb80a8fb1077" Oct 8 19:54:15.973075 containerd[1462]: time="2024-10-08T19:54:15.973036587Z" level=error msg="ContainerStatus for \"4385dcbd76849a04d3fc12f60a7bfd1f8237bd0335b3cb015c96bb80a8fb1077\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4385dcbd76849a04d3fc12f60a7bfd1f8237bd0335b3cb015c96bb80a8fb1077\": not found" Oct 8 19:54:15.973289 kubelet[2546]: E1008 19:54:15.973231 2546 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4385dcbd76849a04d3fc12f60a7bfd1f8237bd0335b3cb015c96bb80a8fb1077\": not found" containerID="4385dcbd76849a04d3fc12f60a7bfd1f8237bd0335b3cb015c96bb80a8fb1077" Oct 8 19:54:15.973352 kubelet[2546]: I1008 19:54:15.973306 2546 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4385dcbd76849a04d3fc12f60a7bfd1f8237bd0335b3cb015c96bb80a8fb1077"} err="failed to get container status \"4385dcbd76849a04d3fc12f60a7bfd1f8237bd0335b3cb015c96bb80a8fb1077\": rpc error: code = NotFound desc = an error occurred when try 
to find container \"4385dcbd76849a04d3fc12f60a7bfd1f8237bd0335b3cb015c96bb80a8fb1077\": not found" Oct 8 19:54:15.973397 kubelet[2546]: I1008 19:54:15.973354 2546 scope.go:117] "RemoveContainer" containerID="48ecd504608a641b0eccf3b2d8b1255382f64e402bed60b9f4fcf14bb35b6bed" Oct 8 19:54:15.973635 containerd[1462]: time="2024-10-08T19:54:15.973584422Z" level=error msg="ContainerStatus for \"48ecd504608a641b0eccf3b2d8b1255382f64e402bed60b9f4fcf14bb35b6bed\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"48ecd504608a641b0eccf3b2d8b1255382f64e402bed60b9f4fcf14bb35b6bed\": not found" Oct 8 19:54:15.973707 kubelet[2546]: E1008 19:54:15.973679 2546 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"48ecd504608a641b0eccf3b2d8b1255382f64e402bed60b9f4fcf14bb35b6bed\": not found" containerID="48ecd504608a641b0eccf3b2d8b1255382f64e402bed60b9f4fcf14bb35b6bed" Oct 8 19:54:15.973738 kubelet[2546]: I1008 19:54:15.973705 2546 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"48ecd504608a641b0eccf3b2d8b1255382f64e402bed60b9f4fcf14bb35b6bed"} err="failed to get container status \"48ecd504608a641b0eccf3b2d8b1255382f64e402bed60b9f4fcf14bb35b6bed\": rpc error: code = NotFound desc = an error occurred when try to find container \"48ecd504608a641b0eccf3b2d8b1255382f64e402bed60b9f4fcf14bb35b6bed\": not found" Oct 8 19:54:16.074147 kubelet[2546]: I1008 19:54:16.074095 2546 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d0164cf-8490-40f3-9f63-c56d4d161565" path="/var/lib/kubelet/pods/8d0164cf-8490-40f3-9f63-c56d4d161565/volumes" Oct 8 19:54:16.075027 kubelet[2546]: I1008 19:54:16.074998 2546 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8cd8253-da9c-4cca-b85f-0457e4cc678e" path="/var/lib/kubelet/pods/a8cd8253-da9c-4cca-b85f-0457e4cc678e/volumes" Oct 8 
19:54:16.142449 sshd[4240]: pam_unix(sshd:session): session closed for user core Oct 8 19:54:16.150254 systemd[1]: sshd@26-10.0.0.24:22-10.0.0.1:58496.service: Deactivated successfully. Oct 8 19:54:16.152383 systemd[1]: session-27.scope: Deactivated successfully. Oct 8 19:54:16.154231 systemd-logind[1443]: Session 27 logged out. Waiting for processes to exit. Oct 8 19:54:16.155497 kubelet[2546]: E1008 19:54:16.155465 2546 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 8 19:54:16.162559 systemd[1]: Started sshd@27-10.0.0.24:22-10.0.0.1:58502.service - OpenSSH per-connection server daemon (10.0.0.1:58502). Oct 8 19:54:16.163587 systemd-logind[1443]: Removed session 27. Oct 8 19:54:16.193956 sshd[4403]: Accepted publickey for core from 10.0.0.1 port 58502 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 19:54:16.195942 sshd[4403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:54:16.201558 systemd-logind[1443]: New session 28 of user core. Oct 8 19:54:16.211443 systemd[1]: Started session-28.scope - Session 28 of User core. Oct 8 19:54:18.097879 sshd[4403]: pam_unix(sshd:session): session closed for user core Oct 8 19:54:18.112375 systemd[1]: sshd@27-10.0.0.24:22-10.0.0.1:58502.service: Deactivated successfully. Oct 8 19:54:18.115407 systemd[1]: session-28.scope: Deactivated successfully. Oct 8 19:54:18.118286 systemd-logind[1443]: Session 28 logged out. Waiting for processes to exit. Oct 8 19:54:18.132772 systemd[1]: Started sshd@28-10.0.0.24:22-10.0.0.1:58514.service - OpenSSH per-connection server daemon (10.0.0.1:58514). Oct 8 19:54:18.133861 systemd-logind[1443]: Removed session 28. 
Oct 8 19:54:18.161207 sshd[4418]: Accepted publickey for core from 10.0.0.1 port 58514 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 19:54:18.163652 sshd[4418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:54:18.169582 systemd-logind[1443]: New session 29 of user core. Oct 8 19:54:18.181401 systemd[1]: Started session-29.scope - Session 29 of User core. Oct 8 19:54:18.236796 sshd[4418]: pam_unix(sshd:session): session closed for user core Oct 8 19:54:18.250408 systemd[1]: sshd@28-10.0.0.24:22-10.0.0.1:58514.service: Deactivated successfully. Oct 8 19:54:18.253571 systemd[1]: session-29.scope: Deactivated successfully. Oct 8 19:54:18.256021 systemd-logind[1443]: Session 29 logged out. Waiting for processes to exit. Oct 8 19:54:18.263078 systemd[1]: Started sshd@29-10.0.0.24:22-10.0.0.1:58526.service - OpenSSH per-connection server daemon (10.0.0.1:58526). Oct 8 19:54:18.264420 systemd-logind[1443]: Removed session 29. Oct 8 19:54:18.289593 sshd[4426]: Accepted publickey for core from 10.0.0.1 port 58526 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 19:54:18.291136 sshd[4426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:54:18.295553 systemd-logind[1443]: New session 30 of user core. Oct 8 19:54:18.303447 systemd[1]: Started session-30.scope - Session 30 of User core. 
Oct 8 19:54:18.540443 kubelet[2546]: I1008 19:54:18.540381 2546 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-10-08T19:54:18Z","lastTransitionTime":"2024-10-08T19:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Oct 8 19:54:18.813742 kubelet[2546]: E1008 19:54:18.813560 2546 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8d0164cf-8490-40f3-9f63-c56d4d161565" containerName="mount-cgroup" Oct 8 19:54:18.813742 kubelet[2546]: E1008 19:54:18.813596 2546 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8d0164cf-8490-40f3-9f63-c56d4d161565" containerName="apply-sysctl-overwrites" Oct 8 19:54:18.813742 kubelet[2546]: E1008 19:54:18.813603 2546 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a8cd8253-da9c-4cca-b85f-0457e4cc678e" containerName="cilium-operator" Oct 8 19:54:18.813742 kubelet[2546]: E1008 19:54:18.813651 2546 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8d0164cf-8490-40f3-9f63-c56d4d161565" containerName="mount-bpf-fs" Oct 8 19:54:18.813742 kubelet[2546]: E1008 19:54:18.813659 2546 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8d0164cf-8490-40f3-9f63-c56d4d161565" containerName="clean-cilium-state" Oct 8 19:54:18.813742 kubelet[2546]: E1008 19:54:18.813666 2546 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8d0164cf-8490-40f3-9f63-c56d4d161565" containerName="cilium-agent" Oct 8 19:54:18.813742 kubelet[2546]: I1008 19:54:18.813689 2546 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d0164cf-8490-40f3-9f63-c56d4d161565" containerName="cilium-agent" Oct 8 19:54:18.813742 kubelet[2546]: I1008 19:54:18.813697 2546 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="a8cd8253-da9c-4cca-b85f-0457e4cc678e" containerName="cilium-operator" Oct 8 19:54:18.822596 systemd[1]: Created slice kubepods-burstable-pod4ec81ab5_47c9_4a34_8e35_ec56b1636a58.slice - libcontainer container kubepods-burstable-pod4ec81ab5_47c9_4a34_8e35_ec56b1636a58.slice. Oct 8 19:54:18.923565 kubelet[2546]: I1008 19:54:18.923471 2546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4ec81ab5-47c9-4a34-8e35-ec56b1636a58-cni-path\") pod \"cilium-lmjq6\" (UID: \"4ec81ab5-47c9-4a34-8e35-ec56b1636a58\") " pod="kube-system/cilium-lmjq6" Oct 8 19:54:18.923565 kubelet[2546]: I1008 19:54:18.923554 2546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4ec81ab5-47c9-4a34-8e35-ec56b1636a58-lib-modules\") pod \"cilium-lmjq6\" (UID: \"4ec81ab5-47c9-4a34-8e35-ec56b1636a58\") " pod="kube-system/cilium-lmjq6" Oct 8 19:54:18.923565 kubelet[2546]: I1008 19:54:18.923576 2546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4ec81ab5-47c9-4a34-8e35-ec56b1636a58-host-proc-sys-kernel\") pod \"cilium-lmjq6\" (UID: \"4ec81ab5-47c9-4a34-8e35-ec56b1636a58\") " pod="kube-system/cilium-lmjq6" Oct 8 19:54:18.923748 kubelet[2546]: I1008 19:54:18.923646 2546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4ec81ab5-47c9-4a34-8e35-ec56b1636a58-cilium-cgroup\") pod \"cilium-lmjq6\" (UID: \"4ec81ab5-47c9-4a34-8e35-ec56b1636a58\") " pod="kube-system/cilium-lmjq6" Oct 8 19:54:18.923748 kubelet[2546]: I1008 19:54:18.923689 2546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/4ec81ab5-47c9-4a34-8e35-ec56b1636a58-cilium-config-path\") pod \"cilium-lmjq6\" (UID: \"4ec81ab5-47c9-4a34-8e35-ec56b1636a58\") " pod="kube-system/cilium-lmjq6" Oct 8 19:54:18.923748 kubelet[2546]: I1008 19:54:18.923705 2546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4ec81ab5-47c9-4a34-8e35-ec56b1636a58-hostproc\") pod \"cilium-lmjq6\" (UID: \"4ec81ab5-47c9-4a34-8e35-ec56b1636a58\") " pod="kube-system/cilium-lmjq6" Oct 8 19:54:18.923748 kubelet[2546]: I1008 19:54:18.923727 2546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4ec81ab5-47c9-4a34-8e35-ec56b1636a58-hubble-tls\") pod \"cilium-lmjq6\" (UID: \"4ec81ab5-47c9-4a34-8e35-ec56b1636a58\") " pod="kube-system/cilium-lmjq6" Oct 8 19:54:18.923748 kubelet[2546]: I1008 19:54:18.923745 2546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdczd\" (UniqueName: \"kubernetes.io/projected/4ec81ab5-47c9-4a34-8e35-ec56b1636a58-kube-api-access-sdczd\") pod \"cilium-lmjq6\" (UID: \"4ec81ab5-47c9-4a34-8e35-ec56b1636a58\") " pod="kube-system/cilium-lmjq6" Oct 8 19:54:18.923911 kubelet[2546]: I1008 19:54:18.923762 2546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4ec81ab5-47c9-4a34-8e35-ec56b1636a58-etc-cni-netd\") pod \"cilium-lmjq6\" (UID: \"4ec81ab5-47c9-4a34-8e35-ec56b1636a58\") " pod="kube-system/cilium-lmjq6" Oct 8 19:54:18.923911 kubelet[2546]: I1008 19:54:18.923778 2546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4ec81ab5-47c9-4a34-8e35-ec56b1636a58-clustermesh-secrets\") pod \"cilium-lmjq6\" (UID: 
\"4ec81ab5-47c9-4a34-8e35-ec56b1636a58\") " pod="kube-system/cilium-lmjq6" Oct 8 19:54:18.923911 kubelet[2546]: I1008 19:54:18.923793 2546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4ec81ab5-47c9-4a34-8e35-ec56b1636a58-bpf-maps\") pod \"cilium-lmjq6\" (UID: \"4ec81ab5-47c9-4a34-8e35-ec56b1636a58\") " pod="kube-system/cilium-lmjq6" Oct 8 19:54:18.923911 kubelet[2546]: I1008 19:54:18.923807 2546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4ec81ab5-47c9-4a34-8e35-ec56b1636a58-cilium-run\") pod \"cilium-lmjq6\" (UID: \"4ec81ab5-47c9-4a34-8e35-ec56b1636a58\") " pod="kube-system/cilium-lmjq6" Oct 8 19:54:18.923911 kubelet[2546]: I1008 19:54:18.923821 2546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4ec81ab5-47c9-4a34-8e35-ec56b1636a58-xtables-lock\") pod \"cilium-lmjq6\" (UID: \"4ec81ab5-47c9-4a34-8e35-ec56b1636a58\") " pod="kube-system/cilium-lmjq6" Oct 8 19:54:18.923911 kubelet[2546]: I1008 19:54:18.923838 2546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4ec81ab5-47c9-4a34-8e35-ec56b1636a58-cilium-ipsec-secrets\") pod \"cilium-lmjq6\" (UID: \"4ec81ab5-47c9-4a34-8e35-ec56b1636a58\") " pod="kube-system/cilium-lmjq6" Oct 8 19:54:18.924154 kubelet[2546]: I1008 19:54:18.923853 2546 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4ec81ab5-47c9-4a34-8e35-ec56b1636a58-host-proc-sys-net\") pod \"cilium-lmjq6\" (UID: \"4ec81ab5-47c9-4a34-8e35-ec56b1636a58\") " pod="kube-system/cilium-lmjq6" Oct 8 19:54:20.025817 kubelet[2546]: E1008 19:54:20.025750 2546 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:54:20.026575 containerd[1462]: time="2024-10-08T19:54:20.026530258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lmjq6,Uid:4ec81ab5-47c9-4a34-8e35-ec56b1636a58,Namespace:kube-system,Attempt:0,}" Oct 8 19:54:20.696037 containerd[1462]: time="2024-10-08T19:54:20.695901082Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:54:20.696037 containerd[1462]: time="2024-10-08T19:54:20.695986132Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:54:20.696037 containerd[1462]: time="2024-10-08T19:54:20.696001622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:54:20.696367 containerd[1462]: time="2024-10-08T19:54:20.696094296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:54:20.729579 systemd[1]: Started cri-containerd-9f0e868b19f51cff8ba08d02177457fdb836eaed72725ba20a33db89cdd74a9e.scope - libcontainer container 9f0e868b19f51cff8ba08d02177457fdb836eaed72725ba20a33db89cdd74a9e. 
Oct 8 19:54:20.755577 containerd[1462]: time="2024-10-08T19:54:20.755517316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lmjq6,Uid:4ec81ab5-47c9-4a34-8e35-ec56b1636a58,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f0e868b19f51cff8ba08d02177457fdb836eaed72725ba20a33db89cdd74a9e\"" Oct 8 19:54:20.756377 kubelet[2546]: E1008 19:54:20.756341 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:54:20.758842 containerd[1462]: time="2024-10-08T19:54:20.758801703Z" level=info msg="CreateContainer within sandbox \"9f0e868b19f51cff8ba08d02177457fdb836eaed72725ba20a33db89cdd74a9e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 8 19:54:21.090070 containerd[1462]: time="2024-10-08T19:54:21.089992908Z" level=info msg="CreateContainer within sandbox \"9f0e868b19f51cff8ba08d02177457fdb836eaed72725ba20a33db89cdd74a9e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"992a37fb32f33f070c9d135c27f05ab03882a7d216363b8c35aa3eb691cec090\"" Oct 8 19:54:21.090739 containerd[1462]: time="2024-10-08T19:54:21.090700553Z" level=info msg="StartContainer for \"992a37fb32f33f070c9d135c27f05ab03882a7d216363b8c35aa3eb691cec090\"" Oct 8 19:54:21.122494 systemd[1]: Started cri-containerd-992a37fb32f33f070c9d135c27f05ab03882a7d216363b8c35aa3eb691cec090.scope - libcontainer container 992a37fb32f33f070c9d135c27f05ab03882a7d216363b8c35aa3eb691cec090. Oct 8 19:54:21.156567 kubelet[2546]: E1008 19:54:21.156507 2546 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 8 19:54:21.162172 systemd[1]: cri-containerd-992a37fb32f33f070c9d135c27f05ab03882a7d216363b8c35aa3eb691cec090.scope: Deactivated successfully. 
Oct 8 19:54:21.315768 containerd[1462]: time="2024-10-08T19:54:21.315636552Z" level=info msg="StartContainer for \"992a37fb32f33f070c9d135c27f05ab03882a7d216363b8c35aa3eb691cec090\" returns successfully" Oct 8 19:54:21.366529 kubelet[2546]: E1008 19:54:21.366330 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:54:21.557457 containerd[1462]: time="2024-10-08T19:54:21.557358917Z" level=info msg="shim disconnected" id=992a37fb32f33f070c9d135c27f05ab03882a7d216363b8c35aa3eb691cec090 namespace=k8s.io Oct 8 19:54:21.557457 containerd[1462]: time="2024-10-08T19:54:21.557437025Z" level=warning msg="cleaning up after shim disconnected" id=992a37fb32f33f070c9d135c27f05ab03882a7d216363b8c35aa3eb691cec090 namespace=k8s.io Oct 8 19:54:21.557457 containerd[1462]: time="2024-10-08T19:54:21.557464376Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:54:22.370036 kubelet[2546]: E1008 19:54:22.369997 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:54:22.380798 containerd[1462]: time="2024-10-08T19:54:22.380735597Z" level=info msg="CreateContainer within sandbox \"9f0e868b19f51cff8ba08d02177457fdb836eaed72725ba20a33db89cdd74a9e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 8 19:54:22.403380 containerd[1462]: time="2024-10-08T19:54:22.403218365Z" level=info msg="CreateContainer within sandbox \"9f0e868b19f51cff8ba08d02177457fdb836eaed72725ba20a33db89cdd74a9e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"98c53e85c3f22d77d387978070f916290362c1c16108207b577d3eff89c6f23f\"" Oct 8 19:54:22.404071 containerd[1462]: time="2024-10-08T19:54:22.404036619Z" level=info msg="StartContainer for 
\"98c53e85c3f22d77d387978070f916290362c1c16108207b577d3eff89c6f23f\"" Oct 8 19:54:22.434446 systemd[1]: Started cri-containerd-98c53e85c3f22d77d387978070f916290362c1c16108207b577d3eff89c6f23f.scope - libcontainer container 98c53e85c3f22d77d387978070f916290362c1c16108207b577d3eff89c6f23f. Oct 8 19:54:22.471558 systemd[1]: cri-containerd-98c53e85c3f22d77d387978070f916290362c1c16108207b577d3eff89c6f23f.scope: Deactivated successfully. Oct 8 19:54:22.484513 containerd[1462]: time="2024-10-08T19:54:22.484445720Z" level=info msg="StartContainer for \"98c53e85c3f22d77d387978070f916290362c1c16108207b577d3eff89c6f23f\" returns successfully" Oct 8 19:54:22.518636 containerd[1462]: time="2024-10-08T19:54:22.518558885Z" level=info msg="shim disconnected" id=98c53e85c3f22d77d387978070f916290362c1c16108207b577d3eff89c6f23f namespace=k8s.io Oct 8 19:54:22.518636 containerd[1462]: time="2024-10-08T19:54:22.518628948Z" level=warning msg="cleaning up after shim disconnected" id=98c53e85c3f22d77d387978070f916290362c1c16108207b577d3eff89c6f23f namespace=k8s.io Oct 8 19:54:22.518636 containerd[1462]: time="2024-10-08T19:54:22.518640549Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:54:22.684686 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98c53e85c3f22d77d387978070f916290362c1c16108207b577d3eff89c6f23f-rootfs.mount: Deactivated successfully. 
Oct 8 19:54:23.373971 kubelet[2546]: E1008 19:54:23.373932 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:54:23.375849 containerd[1462]: time="2024-10-08T19:54:23.375785027Z" level=info msg="CreateContainer within sandbox \"9f0e868b19f51cff8ba08d02177457fdb836eaed72725ba20a33db89cdd74a9e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 8 19:54:23.409634 containerd[1462]: time="2024-10-08T19:54:23.409565190Z" level=info msg="CreateContainer within sandbox \"9f0e868b19f51cff8ba08d02177457fdb836eaed72725ba20a33db89cdd74a9e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7ebc223fd7155d397ab10ab88e82f168f67c63129c93b9a2150421bedf692420\"" Oct 8 19:54:23.410360 containerd[1462]: time="2024-10-08T19:54:23.410294596Z" level=info msg="StartContainer for \"7ebc223fd7155d397ab10ab88e82f168f67c63129c93b9a2150421bedf692420\"" Oct 8 19:54:23.453524 systemd[1]: Started cri-containerd-7ebc223fd7155d397ab10ab88e82f168f67c63129c93b9a2150421bedf692420.scope - libcontainer container 7ebc223fd7155d397ab10ab88e82f168f67c63129c93b9a2150421bedf692420. Oct 8 19:54:23.487549 containerd[1462]: time="2024-10-08T19:54:23.487487592Z" level=info msg="StartContainer for \"7ebc223fd7155d397ab10ab88e82f168f67c63129c93b9a2150421bedf692420\" returns successfully" Oct 8 19:54:23.487913 systemd[1]: cri-containerd-7ebc223fd7155d397ab10ab88e82f168f67c63129c93b9a2150421bedf692420.scope: Deactivated successfully. 
Oct 8 19:54:23.515687 containerd[1462]: time="2024-10-08T19:54:23.515602119Z" level=info msg="shim disconnected" id=7ebc223fd7155d397ab10ab88e82f168f67c63129c93b9a2150421bedf692420 namespace=k8s.io Oct 8 19:54:23.515687 containerd[1462]: time="2024-10-08T19:54:23.515666421Z" level=warning msg="cleaning up after shim disconnected" id=7ebc223fd7155d397ab10ab88e82f168f67c63129c93b9a2150421bedf692420 namespace=k8s.io Oct 8 19:54:23.515687 containerd[1462]: time="2024-10-08T19:54:23.515676139Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:54:23.684342 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ebc223fd7155d397ab10ab88e82f168f67c63129c93b9a2150421bedf692420-rootfs.mount: Deactivated successfully. Oct 8 19:54:24.377600 kubelet[2546]: E1008 19:54:24.377535 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:54:24.380271 containerd[1462]: time="2024-10-08T19:54:24.380213202Z" level=info msg="CreateContainer within sandbox \"9f0e868b19f51cff8ba08d02177457fdb836eaed72725ba20a33db89cdd74a9e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Oct 8 19:54:24.641663 containerd[1462]: time="2024-10-08T19:54:24.641423438Z" level=info msg="CreateContainer within sandbox \"9f0e868b19f51cff8ba08d02177457fdb836eaed72725ba20a33db89cdd74a9e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e17f5a9438b1c94b577c1550bf564d0985affcf827f268202cfb79a614384fb9\"" Oct 8 19:54:24.642354 containerd[1462]: time="2024-10-08T19:54:24.642302086Z" level=info msg="StartContainer for \"e17f5a9438b1c94b577c1550bf564d0985affcf827f268202cfb79a614384fb9\"" Oct 8 19:54:24.676493 systemd[1]: Started cri-containerd-e17f5a9438b1c94b577c1550bf564d0985affcf827f268202cfb79a614384fb9.scope - libcontainer container e17f5a9438b1c94b577c1550bf564d0985affcf827f268202cfb79a614384fb9. 
Oct 8 19:54:24.708153 systemd[1]: cri-containerd-e17f5a9438b1c94b577c1550bf564d0985affcf827f268202cfb79a614384fb9.scope: Deactivated successfully. Oct 8 19:54:24.717436 containerd[1462]: time="2024-10-08T19:54:24.717370195Z" level=info msg="StartContainer for \"e17f5a9438b1c94b577c1550bf564d0985affcf827f268202cfb79a614384fb9\" returns successfully" Oct 8 19:54:24.736850 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e17f5a9438b1c94b577c1550bf564d0985affcf827f268202cfb79a614384fb9-rootfs.mount: Deactivated successfully. Oct 8 19:54:24.744771 containerd[1462]: time="2024-10-08T19:54:24.744694730Z" level=info msg="shim disconnected" id=e17f5a9438b1c94b577c1550bf564d0985affcf827f268202cfb79a614384fb9 namespace=k8s.io Oct 8 19:54:24.744771 containerd[1462]: time="2024-10-08T19:54:24.744764301Z" level=warning msg="cleaning up after shim disconnected" id=e17f5a9438b1c94b577c1550bf564d0985affcf827f268202cfb79a614384fb9 namespace=k8s.io Oct 8 19:54:24.744771 containerd[1462]: time="2024-10-08T19:54:24.744773458Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:54:25.382160 kubelet[2546]: E1008 19:54:25.382103 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:54:25.384705 containerd[1462]: time="2024-10-08T19:54:25.384642540Z" level=info msg="CreateContainer within sandbox \"9f0e868b19f51cff8ba08d02177457fdb836eaed72725ba20a33db89cdd74a9e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Oct 8 19:54:25.509634 containerd[1462]: time="2024-10-08T19:54:25.509580542Z" level=info msg="CreateContainer within sandbox \"9f0e868b19f51cff8ba08d02177457fdb836eaed72725ba20a33db89cdd74a9e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"dbed4ed95cb8bf9b52a87bf0ad5626f91bd8917900e2068a67f01263e8ec1c85\"" Oct 8 19:54:25.510523 containerd[1462]: 
time="2024-10-08T19:54:25.510446345Z" level=info msg="StartContainer for \"dbed4ed95cb8bf9b52a87bf0ad5626f91bd8917900e2068a67f01263e8ec1c85\"" Oct 8 19:54:25.552589 systemd[1]: Started cri-containerd-dbed4ed95cb8bf9b52a87bf0ad5626f91bd8917900e2068a67f01263e8ec1c85.scope - libcontainer container dbed4ed95cb8bf9b52a87bf0ad5626f91bd8917900e2068a67f01263e8ec1c85. Oct 8 19:54:25.680895 containerd[1462]: time="2024-10-08T19:54:25.680768246Z" level=info msg="StartContainer for \"dbed4ed95cb8bf9b52a87bf0ad5626f91bd8917900e2068a67f01263e8ec1c85\" returns successfully" Oct 8 19:54:26.077341 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Oct 8 19:54:26.389510 kubelet[2546]: E1008 19:54:26.389343 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:54:28.026869 kubelet[2546]: E1008 19:54:28.026818 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:54:29.527076 systemd-networkd[1387]: lxc_health: Link UP Oct 8 19:54:29.535026 systemd-networkd[1387]: lxc_health: Gained carrier Oct 8 19:54:30.027806 kubelet[2546]: E1008 19:54:30.027764 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:54:30.046528 kubelet[2546]: I1008 19:54:30.046458 2546 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lmjq6" podStartSLOduration=12.046440647 podStartE2EDuration="12.046440647s" podCreationTimestamp="2024-10-08 19:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:54:26.41396227 +0000 UTC m=+110.434554475" 
watchObservedRunningTime="2024-10-08 19:54:30.046440647 +0000 UTC m=+114.067032833" Oct 8 19:54:30.396836 kubelet[2546]: E1008 19:54:30.396647 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:54:30.572081 systemd[1]: run-containerd-runc-k8s.io-dbed4ed95cb8bf9b52a87bf0ad5626f91bd8917900e2068a67f01263e8ec1c85-runc.45Hvua.mount: Deactivated successfully. Oct 8 19:54:31.325554 systemd-networkd[1387]: lxc_health: Gained IPv6LL Oct 8 19:54:31.398312 kubelet[2546]: E1008 19:54:31.398218 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:54:32.072004 kubelet[2546]: E1008 19:54:32.070868 2546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:54:36.068678 containerd[1462]: time="2024-10-08T19:54:36.068604253Z" level=info msg="StopPodSandbox for \"7609d18180aa4ed194977ef55738a5e2c3ae1018c976baf3eeb7145fca7fb20b\"" Oct 8 19:54:36.069243 containerd[1462]: time="2024-10-08T19:54:36.068757994Z" level=info msg="TearDown network for sandbox \"7609d18180aa4ed194977ef55738a5e2c3ae1018c976baf3eeb7145fca7fb20b\" successfully" Oct 8 19:54:36.069243 containerd[1462]: time="2024-10-08T19:54:36.068778683Z" level=info msg="StopPodSandbox for \"7609d18180aa4ed194977ef55738a5e2c3ae1018c976baf3eeb7145fca7fb20b\" returns successfully" Oct 8 19:54:36.069345 containerd[1462]: time="2024-10-08T19:54:36.069307950Z" level=info msg="RemovePodSandbox for \"7609d18180aa4ed194977ef55738a5e2c3ae1018c976baf3eeb7145fca7fb20b\"" Oct 8 19:54:36.069384 containerd[1462]: time="2024-10-08T19:54:36.069354948Z" level=info msg="Forcibly stopping sandbox 
\"7609d18180aa4ed194977ef55738a5e2c3ae1018c976baf3eeb7145fca7fb20b\"" Oct 8 19:54:36.069436 containerd[1462]: time="2024-10-08T19:54:36.069428778Z" level=info msg="TearDown network for sandbox \"7609d18180aa4ed194977ef55738a5e2c3ae1018c976baf3eeb7145fca7fb20b\" successfully" Oct 8 19:54:36.261711 containerd[1462]: time="2024-10-08T19:54:36.261639111Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7609d18180aa4ed194977ef55738a5e2c3ae1018c976baf3eeb7145fca7fb20b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 8 19:54:36.261904 containerd[1462]: time="2024-10-08T19:54:36.261738257Z" level=info msg="RemovePodSandbox \"7609d18180aa4ed194977ef55738a5e2c3ae1018c976baf3eeb7145fca7fb20b\" returns successfully" Oct 8 19:54:36.262524 containerd[1462]: time="2024-10-08T19:54:36.262482761Z" level=info msg="StopPodSandbox for \"68a44df5d4c3537d64922eda3f0c8c7b9e6eb2499cd79f551524c3f08f2bf62c\"" Oct 8 19:54:36.262643 containerd[1462]: time="2024-10-08T19:54:36.262617024Z" level=info msg="TearDown network for sandbox \"68a44df5d4c3537d64922eda3f0c8c7b9e6eb2499cd79f551524c3f08f2bf62c\" successfully" Oct 8 19:54:36.262643 containerd[1462]: time="2024-10-08T19:54:36.262638985Z" level=info msg="StopPodSandbox for \"68a44df5d4c3537d64922eda3f0c8c7b9e6eb2499cd79f551524c3f08f2bf62c\" returns successfully" Oct 8 19:54:36.263345 containerd[1462]: time="2024-10-08T19:54:36.263288880Z" level=info msg="RemovePodSandbox for \"68a44df5d4c3537d64922eda3f0c8c7b9e6eb2499cd79f551524c3f08f2bf62c\"" Oct 8 19:54:36.263395 containerd[1462]: time="2024-10-08T19:54:36.263355095Z" level=info msg="Forcibly stopping sandbox \"68a44df5d4c3537d64922eda3f0c8c7b9e6eb2499cd79f551524c3f08f2bf62c\"" Oct 8 19:54:36.263484 containerd[1462]: time="2024-10-08T19:54:36.263458089Z" level=info msg="TearDown network for sandbox \"68a44df5d4c3537d64922eda3f0c8c7b9e6eb2499cd79f551524c3f08f2bf62c\" successfully" Oct 8 
19:54:36.328468 containerd[1462]: time="2024-10-08T19:54:36.328201586Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"68a44df5d4c3537d64922eda3f0c8c7b9e6eb2499cd79f551524c3f08f2bf62c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 8 19:54:36.328468 containerd[1462]: time="2024-10-08T19:54:36.328297377Z" level=info msg="RemovePodSandbox \"68a44df5d4c3537d64922eda3f0c8c7b9e6eb2499cd79f551524c3f08f2bf62c\" returns successfully" Oct 8 19:54:36.996105 sshd[4426]: pam_unix(sshd:session): session closed for user core Oct 8 19:54:37.002155 systemd[1]: sshd@29-10.0.0.24:22-10.0.0.1:58526.service: Deactivated successfully. Oct 8 19:54:37.004438 systemd[1]: session-30.scope: Deactivated successfully. Oct 8 19:54:37.005063 systemd-logind[1443]: Session 30 logged out. Waiting for processes to exit. Oct 8 19:54:37.005987 systemd-logind[1443]: Removed session 30.