Oct 8 19:56:51.922903 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 8 18:24:27 -00 2024
Oct 8 19:56:51.922929 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5
Oct 8 19:56:51.922943 kernel: BIOS-provided physical RAM map:
Oct 8 19:56:51.922951 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 8 19:56:51.922959 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 8 19:56:51.922967 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 8 19:56:51.922977 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Oct 8 19:56:51.922985 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Oct 8 19:56:51.922993 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Oct 8 19:56:51.923004 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Oct 8 19:56:51.923013 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 8 19:56:51.923021 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 8 19:56:51.923029 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Oct 8 19:56:51.923037 kernel: NX (Execute Disable) protection: active
Oct 8 19:56:51.923047 kernel: APIC: Static calls initialized
Oct 8 19:56:51.923059 kernel: SMBIOS 2.8 present.
Oct 8 19:56:51.923068 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Oct 8 19:56:51.923077 kernel: Hypervisor detected: KVM
Oct 8 19:56:51.923085 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 8 19:56:51.923106 kernel: kvm-clock: using sched offset of 2583029543 cycles
Oct 8 19:56:51.923117 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 8 19:56:51.923127 kernel: tsc: Detected 2794.748 MHz processor
Oct 8 19:56:51.923136 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 8 19:56:51.923154 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 8 19:56:51.923180 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Oct 8 19:56:51.923202 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 8 19:56:51.923218 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 8 19:56:51.923237 kernel: Using GB pages for direct mapping
Oct 8 19:56:51.923245 kernel: ACPI: Early table checksum verification disabled
Oct 8 19:56:51.923255 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Oct 8 19:56:51.923276 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:56:51.923285 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:56:51.923294 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:56:51.923307 kernel: ACPI: FACS 0x000000009CFE0000 000040
Oct 8 19:56:51.923317 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:56:51.923326 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:56:51.923335 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:56:51.923345 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:56:51.923354 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Oct 8 19:56:51.923364 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Oct 8 19:56:51.923375 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Oct 8 19:56:51.923384 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Oct 8 19:56:51.923391 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Oct 8 19:56:51.923399 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Oct 8 19:56:51.923406 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Oct 8 19:56:51.923413 kernel: No NUMA configuration found
Oct 8 19:56:51.923420 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Oct 8 19:56:51.923429 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Oct 8 19:56:51.923436 kernel: Zone ranges:
Oct 8 19:56:51.923444 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 8 19:56:51.923451 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Oct 8 19:56:51.923458 kernel: Normal empty
Oct 8 19:56:51.923465 kernel: Movable zone start for each node
Oct 8 19:56:51.923472 kernel: Early memory node ranges
Oct 8 19:56:51.923479 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Oct 8 19:56:51.923486 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Oct 8 19:56:51.923493 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Oct 8 19:56:51.923502 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 8 19:56:51.923510 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 8 19:56:51.923517 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Oct 8 19:56:51.923524 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 8 19:56:51.923531 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 8 19:56:51.923538 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 8 19:56:51.923545 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 8 19:56:51.923553 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 8 19:56:51.923560 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 8 19:56:51.923569 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 8 19:56:51.923576 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 8 19:56:51.923583 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 8 19:56:51.923590 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Oct 8 19:56:51.923597 kernel: TSC deadline timer available
Oct 8 19:56:51.923604 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Oct 8 19:56:51.923611 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 8 19:56:51.923618 kernel: kvm-guest: KVM setup pv remote TLB flush
Oct 8 19:56:51.923625 kernel: kvm-guest: setup PV sched yield
Oct 8 19:56:51.923635 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Oct 8 19:56:51.923642 kernel: Booting paravirtualized kernel on KVM
Oct 8 19:56:51.923649 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 8 19:56:51.923657 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Oct 8 19:56:51.923664 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u524288
Oct 8 19:56:51.923671 kernel: pcpu-alloc: s196904 r8192 d32472 u524288 alloc=1*2097152
Oct 8 19:56:51.923678 kernel: pcpu-alloc: [0] 0 1 2 3
Oct 8 19:56:51.923685 kernel: kvm-guest: PV spinlocks enabled
Oct 8 19:56:51.923692 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Oct 8 19:56:51.923702 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5
Oct 8 19:56:51.923710 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 8 19:56:51.923717 kernel: random: crng init done
Oct 8 19:56:51.923724 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 8 19:56:51.923732 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 8 19:56:51.923739 kernel: Fallback order for Node 0: 0
Oct 8 19:56:51.923746 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Oct 8 19:56:51.923753 kernel: Policy zone: DMA32
Oct 8 19:56:51.923760 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 8 19:56:51.923769 kernel: Memory: 2434596K/2571752K available (12288K kernel code, 2305K rwdata, 22716K rodata, 42828K init, 2360K bss, 136896K reserved, 0K cma-reserved)
Oct 8 19:56:51.923777 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 8 19:56:51.923784 kernel: ftrace: allocating 37784 entries in 148 pages
Oct 8 19:56:51.923791 kernel: ftrace: allocated 148 pages with 3 groups
Oct 8 19:56:51.923798 kernel: Dynamic Preempt: voluntary
Oct 8 19:56:51.923805 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 8 19:56:51.923813 kernel: rcu: RCU event tracing is enabled.
Oct 8 19:56:51.923830 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 8 19:56:51.923840 kernel: Trampoline variant of Tasks RCU enabled.
Oct 8 19:56:51.923847 kernel: Rude variant of Tasks RCU enabled.
Oct 8 19:56:51.923854 kernel: Tracing variant of Tasks RCU enabled.
Oct 8 19:56:51.923861 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 8 19:56:51.923868 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 8 19:56:51.923875 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Oct 8 19:56:51.923882 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 8 19:56:51.923889 kernel: Console: colour VGA+ 80x25
Oct 8 19:56:51.923896 kernel: printk: console [ttyS0] enabled
Oct 8 19:56:51.923903 kernel: ACPI: Core revision 20230628
Oct 8 19:56:51.923913 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Oct 8 19:56:51.923920 kernel: APIC: Switch to symmetric I/O mode setup
Oct 8 19:56:51.923927 kernel: x2apic enabled
Oct 8 19:56:51.923934 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 8 19:56:51.923941 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Oct 8 19:56:51.923948 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Oct 8 19:56:51.923956 kernel: kvm-guest: setup PV IPIs
Oct 8 19:56:51.923972 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 8 19:56:51.923979 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct 8 19:56:51.923987 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Oct 8 19:56:51.923994 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 8 19:56:51.924004 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 8 19:56:51.924011 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 8 19:56:51.924019 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 8 19:56:51.924026 kernel: Spectre V2 : Mitigation: Retpolines
Oct 8 19:56:51.924036 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Oct 8 19:56:51.924048 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Oct 8 19:56:51.924058 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 8 19:56:51.924068 kernel: RETBleed: Mitigation: untrained return thunk
Oct 8 19:56:51.924078 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 8 19:56:51.924088 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 8 19:56:51.924098 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 8 19:56:51.924109 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 8 19:56:51.924120 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 8 19:56:51.924134 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 8 19:56:51.924145 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 8 19:56:51.924155 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 8 19:56:51.924166 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 8 19:56:51.924175 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 8 19:56:51.924183 kernel: Freeing SMP alternatives memory: 32K
Oct 8 19:56:51.924192 kernel: pid_max: default: 32768 minimum: 301
Oct 8 19:56:51.924202 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Oct 8 19:56:51.924212 kernel: landlock: Up and running.
Oct 8 19:56:51.924226 kernel: SELinux: Initializing.
Oct 8 19:56:51.924240 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 8 19:56:51.924252 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 8 19:56:51.924275 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 8 19:56:51.924285 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 19:56:51.924295 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 19:56:51.924305 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 19:56:51.924315 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 8 19:56:51.924325 kernel: ... version: 0
Oct 8 19:56:51.924338 kernel: ... bit width: 48
Oct 8 19:56:51.924346 kernel: ... generic registers: 6
Oct 8 19:56:51.924353 kernel: ... value mask: 0000ffffffffffff
Oct 8 19:56:51.924361 kernel: ... max period: 00007fffffffffff
Oct 8 19:56:51.924368 kernel: ... fixed-purpose events: 0
Oct 8 19:56:51.924375 kernel: ... event mask: 000000000000003f
Oct 8 19:56:51.924383 kernel: signal: max sigframe size: 1776
Oct 8 19:56:51.924390 kernel: rcu: Hierarchical SRCU implementation.
Oct 8 19:56:51.924399 kernel: rcu: Max phase no-delay instances is 400.
Oct 8 19:56:51.924410 kernel: smp: Bringing up secondary CPUs ...
Oct 8 19:56:51.924419 kernel: smpboot: x86: Booting SMP configuration:
Oct 8 19:56:51.924427 kernel: .... node #0, CPUs: #1 #2 #3
Oct 8 19:56:51.924436 kernel: smp: Brought up 1 node, 4 CPUs
Oct 8 19:56:51.924443 kernel: smpboot: Max logical packages: 1
Oct 8 19:56:51.924451 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Oct 8 19:56:51.924458 kernel: devtmpfs: initialized
Oct 8 19:56:51.924465 kernel: x86/mm: Memory block size: 128MB
Oct 8 19:56:51.924473 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 8 19:56:51.924480 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 8 19:56:51.924490 kernel: pinctrl core: initialized pinctrl subsystem
Oct 8 19:56:51.924498 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 8 19:56:51.924505 kernel: audit: initializing netlink subsys (disabled)
Oct 8 19:56:51.924513 kernel: audit: type=2000 audit(1728417411.166:1): state=initialized audit_enabled=0 res=1
Oct 8 19:56:51.924520 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 8 19:56:51.924527 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 8 19:56:51.924535 kernel: cpuidle: using governor menu
Oct 8 19:56:51.924542 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 8 19:56:51.924550 kernel: dca service started, version 1.12.1
Oct 8 19:56:51.924560 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Oct 8 19:56:51.924567 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Oct 8 19:56:51.924574 kernel: PCI: Using configuration type 1 for base access
Oct 8 19:56:51.924582 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 8 19:56:51.924589 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 8 19:56:51.924597 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 8 19:56:51.924604 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 8 19:56:51.924612 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 8 19:56:51.924622 kernel: ACPI: Added _OSI(Module Device)
Oct 8 19:56:51.924629 kernel: ACPI: Added _OSI(Processor Device)
Oct 8 19:56:51.924636 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 8 19:56:51.924644 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 8 19:56:51.924656 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 8 19:56:51.924666 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 8 19:56:51.924681 kernel: ACPI: Interpreter enabled
Oct 8 19:56:51.924695 kernel: ACPI: PM: (supports S0 S3 S5)
Oct 8 19:56:51.924712 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 8 19:56:51.924726 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 8 19:56:51.924747 kernel: PCI: Using E820 reservations for host bridge windows
Oct 8 19:56:51.924764 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Oct 8 19:56:51.924778 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 8 19:56:51.925051 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 8 19:56:51.925248 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Oct 8 19:56:51.925407 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Oct 8 19:56:51.925421 kernel: PCI host bridge to bus 0000:00
Oct 8 19:56:51.925585 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 8 19:56:51.925723 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 8 19:56:51.925872 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 8 19:56:51.926014 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Oct 8 19:56:51.926147 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 8 19:56:51.926299 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Oct 8 19:56:51.926468 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 8 19:56:51.926720 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Oct 8 19:56:51.926909 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Oct 8 19:56:51.927074 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Oct 8 19:56:51.927237 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Oct 8 19:56:51.927419 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Oct 8 19:56:51.927637 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 8 19:56:51.927982 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Oct 8 19:56:51.928158 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Oct 8 19:56:51.928350 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Oct 8 19:56:51.928508 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Oct 8 19:56:51.928641 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Oct 8 19:56:51.928764 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Oct 8 19:56:51.928896 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Oct 8 19:56:51.929026 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Oct 8 19:56:51.929183 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Oct 8 19:56:51.929353 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Oct 8 19:56:51.929483 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Oct 8 19:56:51.929605 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Oct 8 19:56:51.929725 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Oct 8 19:56:51.929864 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Oct 8 19:56:51.929991 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Oct 8 19:56:51.930133 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Oct 8 19:56:51.930604 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Oct 8 19:56:51.930749 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Oct 8 19:56:51.930924 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Oct 8 19:56:51.931087 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Oct 8 19:56:51.931107 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 8 19:56:51.931119 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 8 19:56:51.931129 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 8 19:56:51.931139 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 8 19:56:51.931150 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Oct 8 19:56:51.931161 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Oct 8 19:56:51.931171 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Oct 8 19:56:51.931181 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Oct 8 19:56:51.931191 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Oct 8 19:56:51.931205 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Oct 8 19:56:51.931215 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Oct 8 19:56:51.931226 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Oct 8 19:56:51.931237 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Oct 8 19:56:51.931247 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Oct 8 19:56:51.931257 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Oct 8 19:56:51.931285 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Oct 8 19:56:51.931296 kernel: iommu: Default domain type: Translated
Oct 8 19:56:51.931307 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 8 19:56:51.931321 kernel: PCI: Using ACPI for IRQ routing
Oct 8 19:56:51.931332 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 8 19:56:51.931343 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 8 19:56:51.931353 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Oct 8 19:56:51.931518 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Oct 8 19:56:51.931641 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Oct 8 19:56:51.931762 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 8 19:56:51.931772 kernel: vgaarb: loaded
Oct 8 19:56:51.931780 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Oct 8 19:56:51.931792 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Oct 8 19:56:51.931801 kernel: clocksource: Switched to clocksource kvm-clock
Oct 8 19:56:51.931809 kernel: VFS: Disk quotas dquot_6.6.0
Oct 8 19:56:51.931826 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 8 19:56:51.931835 kernel: pnp: PnP ACPI init
Oct 8 19:56:51.931969 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Oct 8 19:56:51.931980 kernel: pnp: PnP ACPI: found 6 devices
Oct 8 19:56:51.931989 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 8 19:56:51.932000 kernel: NET: Registered PF_INET protocol family
Oct 8 19:56:51.932007 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 8 19:56:51.932015 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 8 19:56:51.932024 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 8 19:56:51.932031 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 8 19:56:51.932039 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 8 19:56:51.932047 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 8 19:56:51.932055 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 8 19:56:51.932063 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 8 19:56:51.932073 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 8 19:56:51.932081 kernel: NET: Registered PF_XDP protocol family
Oct 8 19:56:51.932226 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 8 19:56:51.932394 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 8 19:56:51.932542 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 8 19:56:51.932692 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Oct 8 19:56:51.932811 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Oct 8 19:56:51.932933 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Oct 8 19:56:51.932948 kernel: PCI: CLS 0 bytes, default 64
Oct 8 19:56:51.932956 kernel: Initialise system trusted keyrings
Oct 8 19:56:51.932964 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 8 19:56:51.932979 kernel: Key type asymmetric registered
Oct 8 19:56:51.932992 kernel: Asymmetric key parser 'x509' registered
Oct 8 19:56:51.933002 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Oct 8 19:56:51.933013 kernel: io scheduler mq-deadline registered
Oct 8 19:56:51.933024 kernel: io scheduler kyber registered
Oct 8 19:56:51.933034 kernel: io scheduler bfq registered
Oct 8 19:56:51.933047 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 8 19:56:51.933055 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Oct 8 19:56:51.933063 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Oct 8 19:56:51.933071 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Oct 8 19:56:51.933080 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 8 19:56:51.933097 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 8 19:56:51.933109 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 8 19:56:51.933127 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 8 19:56:51.933137 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 8 19:56:51.933331 kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 8 19:56:51.933347 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct 8 19:56:51.933485 kernel: rtc_cmos 00:04: registered as rtc0
Oct 8 19:56:51.933608 kernel: rtc_cmos 00:04: setting system clock to 2024-10-08T19:56:51 UTC (1728417411)
Oct 8 19:56:51.933720 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Oct 8 19:56:51.933730 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 8 19:56:51.933738 kernel: NET: Registered PF_INET6 protocol family
Oct 8 19:56:51.933746 kernel: Segment Routing with IPv6
Oct 8 19:56:51.933758 kernel: In-situ OAM (IOAM) with IPv6
Oct 8 19:56:51.933766 kernel: NET: Registered PF_PACKET protocol family
Oct 8 19:56:51.933774 kernel: Key type dns_resolver registered
Oct 8 19:56:51.933781 kernel: IPI shorthand broadcast: enabled
Oct 8 19:56:51.933789 kernel: sched_clock: Marking stable (693003937, 123819678)->(838601547, -21777932)
Oct 8 19:56:51.933797 kernel: registered taskstats version 1
Oct 8 19:56:51.933805 kernel: Loading compiled-in X.509 certificates
Oct 8 19:56:51.933813 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 14ce23fc5070d0471461f1dd6e298a5588e7ba8f'
Oct 8 19:56:51.933829 kernel: Key type .fscrypt registered
Oct 8 19:56:51.933840 kernel: Key type fscrypt-provisioning registered
Oct 8 19:56:51.933847 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 8 19:56:51.933855 kernel: ima: Allocated hash algorithm: sha1
Oct 8 19:56:51.933863 kernel: ima: No architecture policies found
Oct 8 19:56:51.933870 kernel: clk: Disabling unused clocks
Oct 8 19:56:51.933884 kernel: Freeing unused kernel image (initmem) memory: 42828K
Oct 8 19:56:51.933897 kernel: Write protecting the kernel read-only data: 36864k
Oct 8 19:56:51.933905 kernel: Freeing unused kernel image (rodata/data gap) memory: 1860K
Oct 8 19:56:51.933914 kernel: Run /init as init process
Oct 8 19:56:51.933925 kernel: with arguments:
Oct 8 19:56:51.933932 kernel: /init
Oct 8 19:56:51.933940 kernel: with environment:
Oct 8 19:56:51.933947 kernel: HOME=/
Oct 8 19:56:51.933955 kernel: TERM=linux
Oct 8 19:56:51.933963 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 8 19:56:51.933973 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 8 19:56:51.933984 systemd[1]: Detected virtualization kvm.
Oct 8 19:56:51.933995 systemd[1]: Detected architecture x86-64.
Oct 8 19:56:51.934003 systemd[1]: Running in initrd.
Oct 8 19:56:51.934011 systemd[1]: No hostname configured, using default hostname.
Oct 8 19:56:51.934018 systemd[1]: Hostname set to .
Oct 8 19:56:51.934027 systemd[1]: Initializing machine ID from VM UUID.
Oct 8 19:56:51.934035 systemd[1]: Queued start job for default target initrd.target.
Oct 8 19:56:51.934043 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 19:56:51.934052 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 19:56:51.934064 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 8 19:56:51.934085 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 8 19:56:51.934097 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 8 19:56:51.934107 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 8 19:56:51.934126 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 8 19:56:51.934146 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 8 19:56:51.934159 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 19:56:51.934168 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 8 19:56:51.934176 systemd[1]: Reached target paths.target - Path Units.
Oct 8 19:56:51.934185 systemd[1]: Reached target slices.target - Slice Units.
Oct 8 19:56:51.934193 systemd[1]: Reached target swap.target - Swaps.
Oct 8 19:56:51.934202 systemd[1]: Reached target timers.target - Timer Units.
Oct 8 19:56:51.934211 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 8 19:56:51.934232 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 8 19:56:51.934244 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 8 19:56:51.934255 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 8 19:56:51.934288 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 19:56:51.934300 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:56:51.934311 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:56:51.934321 systemd[1]: Reached target sockets.target - Socket Units.
Oct 8 19:56:51.934329 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 8 19:56:51.934342 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 8 19:56:51.934351 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 8 19:56:51.934359 systemd[1]: Starting systemd-fsck-usr.service...
Oct 8 19:56:51.934367 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 8 19:56:51.934376 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 8 19:56:51.934384 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:56:51.934393 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 8 19:56:51.934401 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:56:51.934409 systemd[1]: Finished systemd-fsck-usr.service.
Oct 8 19:56:51.934421 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 8 19:56:51.934458 systemd-journald[193]: Collecting audit messages is disabled.
Oct 8 19:56:51.934483 systemd-journald[193]: Journal started
Oct 8 19:56:51.934504 systemd-journald[193]: Runtime Journal (/run/log/journal/5b5ebba9ab3d426f9e1ce6b7403069ea) is 6.0M, max 48.4M, 42.3M free.
Oct 8 19:56:51.925219 systemd-modules-load[194]: Inserted module 'overlay'
Oct 8 19:56:51.965232 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 8 19:56:51.965287 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 8 19:56:51.965306 kernel: Bridge firewalling registered
Oct 8 19:56:51.958715 systemd-modules-load[194]: Inserted module 'br_netfilter'
Oct 8 19:56:51.965706 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:56:51.968093 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:56:51.971061 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 8 19:56:51.997457 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 19:56:51.998283 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 8 19:56:51.999158 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 8 19:56:52.004399 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 8 19:56:52.013620 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:56:52.016338 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:56:52.019494 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 8 19:56:52.022389 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:56:52.038466 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 8 19:56:52.041564 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 8 19:56:52.052954 dracut-cmdline[228]: dracut-dracut-053
Oct 8 19:56:52.056996 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5
Oct 8 19:56:52.084887 systemd-resolved[231]: Positive Trust Anchors:
Oct 8 19:56:52.084909 systemd-resolved[231]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 8 19:56:52.084947 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 8 19:56:52.087761 systemd-resolved[231]: Defaulting to hostname 'linux'.
Oct 8 19:56:52.088956 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 8 19:56:52.095368 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 8 19:56:52.180306 kernel: SCSI subsystem initialized
Oct 8 19:56:52.192292 kernel: Loading iSCSI transport class v2.0-870.
Oct 8 19:56:52.203296 kernel: iscsi: registered transport (tcp)
Oct 8 19:56:52.226303 kernel: iscsi: registered transport (qla4xxx)
Oct 8 19:56:52.226370 kernel: QLogic iSCSI HBA Driver
Oct 8 19:56:52.276680 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 8 19:56:52.283424 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 8 19:56:52.309292 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 8 19:56:52.309334 kernel: device-mapper: uevent: version 1.0.3
Oct 8 19:56:52.310857 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Oct 8 19:56:52.351307 kernel: raid6: avx2x4 gen() 29206 MB/s
Oct 8 19:56:52.368299 kernel: raid6: avx2x2 gen() 28958 MB/s
Oct 8 19:56:52.385460 kernel: raid6: avx2x1 gen() 22916 MB/s
Oct 8 19:56:52.385494 kernel: raid6: using algorithm avx2x4 gen() 29206 MB/s
Oct 8 19:56:52.403440 kernel: raid6: .... xor() 6911 MB/s, rmw enabled
Oct 8 19:56:52.403485 kernel: raid6: using avx2x2 recovery algorithm
Oct 8 19:56:52.424301 kernel: xor: automatically using best checksumming function avx
Oct 8 19:56:52.587295 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 8 19:56:52.598757 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 8 19:56:52.611402 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 19:56:52.623698 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Oct 8 19:56:52.628490 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 19:56:52.640578 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 8 19:56:52.652831 dracut-pre-trigger[424]: rd.md=0: removing MD RAID activation
Oct 8 19:56:52.682564 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 8 19:56:52.692432 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 8 19:56:52.759805 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 19:56:52.769479 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 8 19:56:52.783114 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 8 19:56:52.783730 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 8 19:56:52.784092 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 19:56:52.784720 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 8 19:56:52.794512 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 8 19:56:52.805987 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 8 19:56:52.818358 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Oct 8 19:56:52.823710 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Oct 8 19:56:52.826302 kernel: cryptd: max_cpu_qlen set to 1000
Oct 8 19:56:52.826328 kernel: libata version 3.00 loaded.
Oct 8 19:56:52.829932 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 8 19:56:52.829961 kernel: GPT:9289727 != 19775487
Oct 8 19:56:52.829983 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 8 19:56:52.829997 kernel: GPT:9289727 != 19775487
Oct 8 19:56:52.831470 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 8 19:56:52.831512 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 8 19:56:52.837280 kernel: ahci 0000:00:1f.2: version 3.0
Oct 8 19:56:52.840313 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Oct 8 19:56:52.840338 kernel: AVX2 version of gcm_enc/dec engaged.
Oct 8 19:56:52.840348 kernel: AES CTR mode by8 optimization enabled
Oct 8 19:56:52.840358 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Oct 8 19:56:52.841292 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 8 19:56:52.842824 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Oct 8 19:56:52.841496 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:56:52.858904 kernel: scsi host0: ahci
Oct 8 19:56:52.859153 kernel: scsi host1: ahci
Oct 8 19:56:52.860570 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 19:56:52.864680 kernel: scsi host2: ahci
Oct 8 19:56:52.860645 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 19:56:52.869905 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (477)
Oct 8 19:56:52.869929 kernel: scsi host3: ahci
Oct 8 19:56:52.860840 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:56:52.866312 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:56:52.875283 kernel: BTRFS: device fsid a8680da2-059a-4648-a8e8-f62925ab33ec devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (470)
Oct 8 19:56:52.875323 kernel: scsi host4: ahci
Oct 8 19:56:52.877285 kernel: scsi host5: ahci
Oct 8 19:56:52.877534 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Oct 8 19:56:52.878310 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Oct 8 19:56:52.879388 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:56:52.886508 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Oct 8 19:56:52.886534 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Oct 8 19:56:52.886554 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Oct 8 19:56:52.886567 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Oct 8 19:56:52.908588 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Oct 8 19:56:52.932728 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:56:52.940087 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Oct 8 19:56:52.945314 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 8 19:56:52.949523 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Oct 8 19:56:52.949604 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Oct 8 19:56:52.965444 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 8 19:56:52.967260 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 19:56:52.987286 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:56:53.189280 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Oct 8 19:56:53.189358 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Oct 8 19:56:53.189371 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Oct 8 19:56:53.190284 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Oct 8 19:56:53.190313 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 8 19:56:53.191529 kernel: ata3.00: applying bridge limits
Oct 8 19:56:53.192290 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Oct 8 19:56:53.193296 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Oct 8 19:56:53.193323 kernel: ata3.00: configured for UDMA/100
Oct 8 19:56:53.194419 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Oct 8 19:56:53.244688 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 8 19:56:53.245066 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 8 19:56:53.258473 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Oct 8 19:56:53.284903 disk-uuid[556]: Primary Header is updated.
Oct 8 19:56:53.284903 disk-uuid[556]: Secondary Entries is updated.
Oct 8 19:56:53.284903 disk-uuid[556]: Secondary Header is updated.
Oct 8 19:56:53.289284 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 8 19:56:53.294286 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 8 19:56:54.332314 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 8 19:56:54.333274 disk-uuid[578]: The operation has completed successfully.
Oct 8 19:56:54.358703 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 8 19:56:54.358835 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 8 19:56:54.388527 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Oct 8 19:56:54.394037 sh[593]: Success
Oct 8 19:56:54.407285 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Oct 8 19:56:54.443162 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Oct 8 19:56:54.456088 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Oct 8 19:56:54.469806 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Oct 8 19:56:54.480792 kernel: BTRFS info (device dm-0): first mount of filesystem a8680da2-059a-4648-a8e8-f62925ab33ec
Oct 8 19:56:54.480844 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Oct 8 19:56:54.480856 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Oct 8 19:56:54.481832 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 8 19:56:54.482616 kernel: BTRFS info (device dm-0): using free space tree
Oct 8 19:56:54.504252 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Oct 8 19:56:54.506733 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 8 19:56:54.530580 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 8 19:56:54.533340 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 8 19:56:54.544719 kernel: BTRFS info (device vda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 8 19:56:54.544778 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 8 19:56:54.544790 kernel: BTRFS info (device vda6): using free space tree
Oct 8 19:56:54.548294 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 8 19:56:54.558208 systemd[1]: mnt-oem.mount: Deactivated successfully.
Oct 8 19:56:54.559988 kernel: BTRFS info (device vda6): last unmount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 8 19:56:54.649979 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 8 19:56:54.712546 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 8 19:56:54.735374 systemd-networkd[771]: lo: Link UP
Oct 8 19:56:54.735384 systemd-networkd[771]: lo: Gained carrier
Oct 8 19:56:54.736964 systemd-networkd[771]: Enumeration completed
Oct 8 19:56:54.737050 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 8 19:56:54.737349 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:56:54.737353 systemd-networkd[771]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 19:56:54.777948 systemd-networkd[771]: eth0: Link UP
Oct 8 19:56:54.777952 systemd-networkd[771]: eth0: Gained carrier
Oct 8 19:56:54.777960 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:56:54.779696 systemd[1]: Reached target network.target - Network.
Oct 8 19:56:54.890385 systemd-networkd[771]: eth0: DHCPv4 address 10.0.0.70/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 8 19:56:54.950540 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 8 19:56:54.969627 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 8 19:56:55.029092 ignition[776]: Ignition 2.19.0
Oct 8 19:56:55.029107 ignition[776]: Stage: fetch-offline
Oct 8 19:56:55.029154 ignition[776]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:56:55.029168 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:56:55.029300 ignition[776]: parsed url from cmdline: ""
Oct 8 19:56:55.029305 ignition[776]: no config URL provided
Oct 8 19:56:55.029312 ignition[776]: reading system config file "/usr/lib/ignition/user.ign"
Oct 8 19:56:55.029325 ignition[776]: no config at "/usr/lib/ignition/user.ign"
Oct 8 19:56:55.029359 ignition[776]: op(1): [started] loading QEMU firmware config module
Oct 8 19:56:55.029366 ignition[776]: op(1): executing: "modprobe" "qemu_fw_cfg"
Oct 8 19:56:55.039123 ignition[776]: op(1): [finished] loading QEMU firmware config module
Oct 8 19:56:55.039176 ignition[776]: QEMU firmware config was not found. Ignoring...
Oct 8 19:56:55.085542 ignition[776]: parsing config with SHA512: 7b06a81c19fe2451cab023f37a9bdfca54d0193d9310499d04a193161616d704100143f6f96e0ca1af25212fdb8b3cdd20cfc4db1245ba808ee2414c6cc5af98
Oct 8 19:56:55.090867 unknown[776]: fetched base config from "system"
Oct 8 19:56:55.090895 unknown[776]: fetched user config from "qemu"
Oct 8 19:56:55.092810 ignition[776]: fetch-offline: fetch-offline passed
Oct 8 19:56:55.092928 ignition[776]: Ignition finished successfully
Oct 8 19:56:55.097289 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 8 19:56:55.099008 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Oct 8 19:56:55.109432 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 8 19:56:55.124759 ignition[786]: Ignition 2.19.0
Oct 8 19:56:55.124769 ignition[786]: Stage: kargs
Oct 8 19:56:55.124938 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:56:55.124948 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:56:55.125861 ignition[786]: kargs: kargs passed
Oct 8 19:56:55.129966 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 8 19:56:55.125904 ignition[786]: Ignition finished successfully
Oct 8 19:56:55.141450 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 8 19:56:55.155045 ignition[793]: Ignition 2.19.0
Oct 8 19:56:55.155056 ignition[793]: Stage: disks
Oct 8 19:56:55.155210 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:56:55.155222 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:56:55.158315 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 8 19:56:55.156313 ignition[793]: disks: disks passed
Oct 8 19:56:55.161227 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 8 19:56:55.156371 ignition[793]: Ignition finished successfully
Oct 8 19:56:55.163320 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 8 19:56:55.165549 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 8 19:56:55.177174 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 8 19:56:55.179493 systemd[1]: Reached target basic.target - Basic System.
Oct 8 19:56:55.244638 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 8 19:56:55.259523 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Oct 8 19:56:55.505915 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 8 19:56:55.553389 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 8 19:56:55.661300 kernel: EXT4-fs (vda9): mounted filesystem 1df90f14-3ad0-4280-9b7d-a34f65d70e4d r/w with ordered data mode. Quota mode: none.
Oct 8 19:56:55.662170 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 8 19:56:55.663691 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 8 19:56:55.680429 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 8 19:56:55.682476 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 8 19:56:55.683812 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Oct 8 19:56:55.683865 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 8 19:56:55.692860 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (812)
Oct 8 19:56:55.692895 kernel: BTRFS info (device vda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 8 19:56:55.683896 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 8 19:56:55.703420 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 8 19:56:55.703441 kernel: BTRFS info (device vda6): using free space tree
Oct 8 19:56:55.703452 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 8 19:56:55.692995 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 8 19:56:55.704926 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 8 19:56:55.718412 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 8 19:56:55.754495 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory
Oct 8 19:56:55.775139 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory
Oct 8 19:56:55.780178 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory
Oct 8 19:56:55.785696 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 8 19:56:55.886510 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 8 19:56:55.895128 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 8 19:56:55.897299 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 8 19:56:55.906335 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 8 19:56:55.908117 kernel: BTRFS info (device vda6): last unmount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 8 19:56:55.926569 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 8 19:56:56.092173 ignition[929]: INFO : Ignition 2.19.0
Oct 8 19:56:56.092173 ignition[929]: INFO : Stage: mount
Oct 8 19:56:56.103604 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 19:56:56.103604 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:56:56.103604 ignition[929]: INFO : mount: mount passed
Oct 8 19:56:56.103604 ignition[929]: INFO : Ignition finished successfully
Oct 8 19:56:56.109153 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 8 19:56:56.116383 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 8 19:56:56.141401 systemd-networkd[771]: eth0: Gained IPv6LL
Oct 8 19:56:56.671457 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 8 19:56:56.686295 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (938)
Oct 8 19:56:56.688417 kernel: BTRFS info (device vda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 8 19:56:56.688439 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 8 19:56:56.688450 kernel: BTRFS info (device vda6): using free space tree
Oct 8 19:56:56.691298 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 8 19:56:56.693095 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 8 19:56:56.735908 ignition[955]: INFO : Ignition 2.19.0
Oct 8 19:56:56.735908 ignition[955]: INFO : Stage: files
Oct 8 19:56:56.738156 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 19:56:56.738156 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:56:56.738156 ignition[955]: DEBUG : files: compiled without relabeling support, skipping
Oct 8 19:56:56.742742 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 8 19:56:56.742742 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 8 19:56:56.742742 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 8 19:56:56.742742 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 8 19:56:56.742742 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 8 19:56:56.742518 unknown[955]: wrote ssh authorized keys file for user: core
Oct 8 19:56:56.751601 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Oct 8 19:56:56.751601 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Oct 8 19:56:56.751601 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Oct 8 19:56:56.751601 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Oct 8 19:56:56.798575 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Oct 8 19:56:57.033374 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Oct 8 19:56:57.033374 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Oct 8 19:56:57.038034 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Oct 8 19:56:57.411551 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Oct 8 19:56:57.577797 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Oct 8 19:56:57.577797 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Oct 8 19:56:57.581920 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Oct 8 19:56:57.581920 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 8 19:56:57.581920 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 8 19:56:57.581920 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 8 19:56:57.581920 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 8 19:56:57.581920 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 8 19:56:57.581920 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 8 19:56:57.581920 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 8 19:56:57.581920 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 8 19:56:57.581920 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Oct 8 19:56:57.581920 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Oct 8 19:56:57.581920 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Oct 8 19:56:57.581920 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Oct 8 19:56:57.888830 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Oct 8 19:56:58.445863 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Oct 8 19:56:58.445863 ignition[955]: INFO : files: op(d): [started] processing unit "containerd.service"
Oct 8 19:56:58.632992 ignition[955]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Oct 8 19:56:58.636176 ignition[955]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Oct 8 19:56:58.636176 ignition[955]: INFO : files: op(d): [finished] processing unit "containerd.service"
Oct 8 19:56:58.636176 ignition[955]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Oct 8 19:56:58.641061 ignition[955]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 8 19:56:58.642959 ignition[955]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 8 19:56:58.642959 ignition[955]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Oct 8 19:56:58.642959 ignition[955]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Oct 8 19:56:58.642959 ignition[955]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 8 19:56:58.680732 ignition[955]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 8 19:56:58.680732 ignition[955]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Oct 8 19:56:58.680732 ignition[955]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
Oct 8 19:56:58.712125 ignition[955]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 8 19:56:58.739177 ignition[955]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 8 19:56:58.743614 ignition[955]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 8 19:56:58.743614 ignition[955]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service"
Oct 8 19:56:58.746499 ignition[955]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service"
Oct 8 19:56:58.747990 ignition[955]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 8 19:56:58.749936 ignition[955]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 8 19:56:58.751734 ignition[955]: INFO : files: files passed
Oct 8 19:56:58.751734 ignition[955]: INFO : Ignition finished successfully
Oct 8 19:56:58.755429 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 8 19:56:58.763600 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 8 19:56:58.766160 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 8 19:56:58.767995 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 8 19:56:58.768119 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 8 19:56:58.777751 initrd-setup-root-after-ignition[983]: grep: /sysroot/oem/oem-release: No such file or directory
Oct 8 19:56:58.780967 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 19:56:58.782975 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 19:56:58.786270 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 19:56:58.784517 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 8 19:56:58.786468 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 8 19:56:58.800501 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 8 19:56:58.836111 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 8 19:56:58.836327 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 8 19:56:58.838933 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 8 19:56:58.841035 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 8 19:56:58.843238 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 8 19:56:58.856692 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 8 19:56:58.872052 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 8 19:56:58.889609 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 8 19:56:58.901958 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 8 19:56:58.903550 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 19:56:58.906245 systemd[1]: Stopped target timers.target - Timer Units.
Oct 8 19:56:58.908578 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 8 19:56:58.908749 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 8 19:56:58.911615 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 8 19:56:58.913596 systemd[1]: Stopped target basic.target - Basic System.
Oct 8 19:56:58.916084 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 8 19:56:58.918655 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 8 19:56:58.921245 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 8 19:56:58.923938 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 8 19:56:58.926408 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 8 19:56:58.928738 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 8 19:56:58.930991 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 8 19:56:58.933492 systemd[1]: Stopped target swap.target - Swaps.
Oct 8 19:56:58.935431 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 8 19:56:58.935611 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 8 19:56:58.938184 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 8 19:56:58.939957 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 19:56:58.942418 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 8 19:56:58.942589 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 19:56:58.944720 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 8 19:56:58.944877 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 8 19:56:58.947333 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 8 19:56:58.947489 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 8 19:56:58.949302 systemd[1]: Stopped target paths.target - Path Units.
Oct 8 19:56:58.951208 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 8 19:56:58.955337 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 19:56:58.957431 systemd[1]: Stopped target slices.target - Slice Units.
Oct 8 19:56:58.959399 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 8 19:56:58.961228 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 8 19:56:58.961381 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 8 19:56:58.963386 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 8 19:56:58.963507 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 8 19:56:58.966567 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 8 19:56:58.966747 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 8 19:56:58.969162 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 8 19:56:58.969324 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 8 19:56:58.985656 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 8 19:56:58.988738 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 8 19:56:58.989687 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 8 19:56:58.989838 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 19:56:58.992064 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 8 19:56:58.992296 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 8 19:56:58.998828 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 8 19:56:58.998965 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 8 19:56:59.004400 ignition[1010]: INFO : Ignition 2.19.0
Oct 8 19:56:59.004400 ignition[1010]: INFO : Stage: umount
Oct 8 19:56:59.006397 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 19:56:59.006397 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:56:59.006397 ignition[1010]: INFO : umount: umount passed
Oct 8 19:56:59.006397 ignition[1010]: INFO : Ignition finished successfully
Oct 8 19:56:59.007448 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 8 19:56:59.007584 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 8 19:56:59.010245 systemd[1]: Stopped target network.target - Network.
Oct 8 19:56:59.012117 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 8 19:56:59.012185 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 8 19:56:59.014479 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 8 19:56:59.014529 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 8 19:56:59.016860 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 8 19:56:59.016911 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 8 19:56:59.018816 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 8 19:56:59.018870 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 8 19:56:59.021112 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 8 19:56:59.023080 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 8 19:56:59.026235 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 8 19:56:59.026807 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 8 19:56:59.026916 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 8 19:56:59.028876 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 8 19:56:59.028962 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 8 19:56:59.030609 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 8 19:56:59.030744 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 8 19:56:59.031370 systemd-networkd[771]: eth0: DHCPv6 lease lost
Oct 8 19:56:59.033574 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 8 19:56:59.033721 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 8 19:56:59.036754 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 8 19:56:59.036817 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 19:56:59.051575 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 8 19:56:59.052750 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 8 19:56:59.052838 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 8 19:56:59.055094 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 8 19:56:59.055176 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:56:59.057475 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 8 19:56:59.057538 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:56:59.060219 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 8 19:56:59.060300 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 8 19:56:59.062521 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 19:56:59.075587 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 8 19:56:59.075749 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 8 19:56:59.078413 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 8 19:56:59.078595 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 19:56:59.081415 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 8 19:56:59.081495 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:56:59.083394 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 8 19:56:59.083447 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:56:59.086156 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 8 19:56:59.086229 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 8 19:56:59.088771 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 8 19:56:59.088831 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 8 19:56:59.091171 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 8 19:56:59.091226 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:56:59.104477 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 8 19:56:59.106912 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 8 19:56:59.107005 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:56:59.109426 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Oct 8 19:56:59.109488 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 8 19:56:59.112214 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 8 19:56:59.112291 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:56:59.114775 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 19:56:59.114842 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:56:59.118109 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 8 19:56:59.118251 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 8 19:56:59.120654 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 8 19:56:59.133605 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 8 19:56:59.143022 systemd[1]: Switching root.
Oct 8 19:56:59.183848 systemd-journald[193]: Journal stopped
Oct 8 19:57:00.550712 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Oct 8 19:57:00.550794 kernel: SELinux: policy capability network_peer_controls=1
Oct 8 19:57:00.550819 kernel: SELinux: policy capability open_perms=1
Oct 8 19:57:00.550834 kernel: SELinux: policy capability extended_socket_class=1
Oct 8 19:57:00.550856 kernel: SELinux: policy capability always_check_network=0
Oct 8 19:57:00.550871 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 8 19:57:00.550885 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 8 19:57:00.550908 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 8 19:57:00.550923 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 8 19:57:00.550937 kernel: audit: type=1403 audit(1728417419.774:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 8 19:57:00.550954 systemd[1]: Successfully loaded SELinux policy in 40.603ms.
Oct 8 19:57:00.550992 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.295ms.
Oct 8 19:57:00.551011 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 8 19:57:00.551027 systemd[1]: Detected virtualization kvm.
Oct 8 19:57:00.551042 systemd[1]: Detected architecture x86-64.
Oct 8 19:57:00.551058 systemd[1]: Detected first boot.
Oct 8 19:57:00.551073 systemd[1]: Initializing machine ID from VM UUID.
Oct 8 19:57:00.551088 zram_generator::config[1072]: No configuration found.
Oct 8 19:57:00.551104 systemd[1]: Populated /etc with preset unit settings.
Oct 8 19:57:00.551120 systemd[1]: Queued start job for default target multi-user.target.
Oct 8 19:57:00.551135 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 8 19:57:00.551155 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 8 19:57:00.551170 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 8 19:57:00.551185 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 8 19:57:00.551199 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 8 19:57:00.551215 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 8 19:57:00.551230 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 8 19:57:00.551246 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 8 19:57:00.551427 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 8 19:57:00.551449 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 19:57:00.551469 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 19:57:00.551485 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 8 19:57:00.551500 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 8 19:57:00.551515 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 8 19:57:00.551531 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 8 19:57:00.551547 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 8 19:57:00.551702 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 19:57:00.551755 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 8 19:57:00.551808 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 19:57:00.551848 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 8 19:57:00.551880 systemd[1]: Reached target slices.target - Slice Units.
Oct 8 19:57:00.551918 systemd[1]: Reached target swap.target - Swaps.
Oct 8 19:57:00.551935 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 8 19:57:00.551951 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 8 19:57:00.551966 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 8 19:57:00.551981 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 8 19:57:00.551999 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 19:57:00.552015 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:57:00.552030 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:57:00.552046 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 8 19:57:00.552061 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 8 19:57:00.552075 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 8 19:57:00.552090 systemd[1]: Mounting media.mount - External Media Directory...
Oct 8 19:57:00.552105 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 19:57:00.552121 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 8 19:57:00.552139 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 8 19:57:00.552153 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 8 19:57:00.552168 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 8 19:57:00.552195 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:57:00.552209 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 8 19:57:00.552224 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 8 19:57:00.552239 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:57:00.552254 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 8 19:57:00.552283 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:57:00.552307 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 8 19:57:00.552329 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 19:57:00.552345 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 8 19:57:00.552360 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Oct 8 19:57:00.552377 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Oct 8 19:57:00.552394 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 8 19:57:00.552410 kernel: fuse: init (API version 7.39)
Oct 8 19:57:00.552425 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 8 19:57:00.552445 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 8 19:57:00.552460 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 8 19:57:00.552475 kernel: loop: module loaded
Oct 8 19:57:00.552490 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 8 19:57:00.552506 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 19:57:00.552521 kernel: ACPI: bus type drm_connector registered
Oct 8 19:57:00.552536 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 8 19:57:00.552598 systemd-journald[1157]: Collecting audit messages is disabled.
Oct 8 19:57:00.552631 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 8 19:57:00.552648 systemd-journald[1157]: Journal started
Oct 8 19:57:00.552682 systemd-journald[1157]: Runtime Journal (/run/log/journal/5b5ebba9ab3d426f9e1ce6b7403069ea) is 6.0M, max 48.4M, 42.3M free.
Oct 8 19:57:00.555328 systemd[1]: Mounted media.mount - External Media Directory.
Oct 8 19:57:00.557294 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 8 19:57:00.559715 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 8 19:57:00.561029 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 8 19:57:00.562289 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 8 19:57:00.563643 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:57:00.565249 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 8 19:57:00.566709 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 8 19:57:00.566917 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 8 19:57:00.568734 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:57:00.568949 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:57:00.570455 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 8 19:57:00.570699 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 8 19:57:00.572071 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 19:57:00.572319 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 19:57:00.573912 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 8 19:57:00.574137 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 8 19:57:00.575594 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 19:57:00.575881 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 19:57:00.577498 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:57:00.579145 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 8 19:57:00.580828 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 8 19:57:00.594339 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 8 19:57:00.606408 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 8 19:57:00.609251 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 8 19:57:00.610798 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 8 19:57:00.613885 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 8 19:57:00.619412 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 8 19:57:00.620888 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 19:57:00.622458 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 8 19:57:00.623824 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 19:57:00.628503 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
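Aside (not from the log): the modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop services starting and finishing above are all instances of one systemd template unit, where "%i" expands to the module name. A minimal sketch of how such a template works, simplified from what systemd actually ships:

```ini
# modprobe@.service (sketch): "systemctl start modprobe@loop.service"
# runs this with %i = "loop", loading the loop kernel module once.
[Unit]
Description=Load Kernel Module %i
DefaultDependencies=no

[Service]
Type=oneshot
ExecStart=/usr/sbin/modprobe -abq %i
```

The oneshot services deactivate immediately after the module loads, which is why each "Starting" line is paired with "Deactivated successfully" and "Finished" lines in the log.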
Oct 8 19:57:00.638195 systemd-journald[1157]: Time spent on flushing to /var/log/journal/5b5ebba9ab3d426f9e1ce6b7403069ea is 26.190ms for 945 entries.
Oct 8 19:57:00.638195 systemd-journald[1157]: System Journal (/var/log/journal/5b5ebba9ab3d426f9e1ce6b7403069ea) is 8.0M, max 195.6M, 187.6M free.
Oct 8 19:57:00.705928 systemd-journald[1157]: Received client request to flush runtime journal.
Oct 8 19:57:00.641452 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 8 19:57:00.650739 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 8 19:57:00.656447 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 8 19:57:00.701551 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 19:57:00.709013 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 8 19:57:00.721603 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Oct 8 19:57:00.726990 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 8 19:57:00.729514 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:57:00.729623 systemd-tmpfiles[1209]: ACLs are not supported, ignoring.
Oct 8 19:57:00.729645 systemd-tmpfiles[1209]: ACLs are not supported, ignoring.
Oct 8 19:57:00.735504 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 8 19:57:00.737587 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 8 19:57:00.740374 udevadm[1220]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Oct 8 19:57:00.747816 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 8 19:57:00.777098 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 8 19:57:00.793472 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 8 19:57:00.816923 systemd-tmpfiles[1236]: ACLs are not supported, ignoring.
Oct 8 19:57:00.816949 systemd-tmpfiles[1236]: ACLs are not supported, ignoring.
Oct 8 19:57:00.825658 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:57:01.474396 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 8 19:57:01.488530 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 19:57:01.519046 systemd-udevd[1242]: Using default interface naming scheme 'v255'.
Oct 8 19:57:01.538884 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 19:57:01.549555 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 8 19:57:01.563505 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 8 19:57:01.586296 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1249)
Oct 8 19:57:01.594283 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1256)
Oct 8 19:57:01.607296 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1249)
Oct 8 19:57:01.614480 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Oct 8 19:57:01.636678 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 8 19:57:01.673683 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Oct 8 19:57:01.679289 kernel: ACPI: button: Power Button [PWRF]
Oct 8 19:57:01.691747 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 8 19:57:01.703839 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Oct 8 19:57:01.704157 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct 8 19:57:01.704399 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 8 19:57:01.705442 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Oct 8 19:57:01.778074 systemd-networkd[1246]: lo: Link UP
Oct 8 19:57:01.778089 systemd-networkd[1246]: lo: Gained carrier
Oct 8 19:57:01.779904 systemd-networkd[1246]: Enumeration completed
Oct 8 19:57:01.780339 systemd-networkd[1246]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:57:01.780351 systemd-networkd[1246]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 19:57:01.781098 systemd-networkd[1246]: eth0: Link UP
Oct 8 19:57:01.781229 systemd-networkd[1246]: eth0: Gained carrier
Oct 8 19:57:01.781241 systemd-networkd[1246]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:57:01.782473 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:57:01.783843 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 8 19:57:01.788286 kernel: mousedev: PS/2 mouse device common for all mice
Oct 8 19:57:01.793522 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
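Aside (not from the log): the 'zz-default.network' file that systemd-networkd matches against eth0 above is a catch-all fallback that Flatcar ships so unconfigured interfaces still come up with DHCP. The exact shipped contents are not in this log; a minimal sketch of what such a catch-all .network file looks like:

```ini
# zz-default.network (illustrative): the "zz-" prefix sorts it last, so any
# more specific .network file for an interface wins over this fallback.
[Match]
Name=*

[Network]
DHCP=yes
```

This is consistent with the DHCPv4 lease acquired for eth0 a moment later in the log.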
Oct 8 19:57:01.793736 systemd-networkd[1246]: eth0: DHCPv4 address 10.0.0.70/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 8 19:57:01.936285 kernel: kvm_amd: TSC scaling supported
Oct 8 19:57:01.936416 kernel: kvm_amd: Nested Virtualization enabled
Oct 8 19:57:01.936438 kernel: kvm_amd: Nested Paging enabled
Oct 8 19:57:01.936458 kernel: kvm_amd: LBR virtualization supported
Oct 8 19:57:01.936478 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Oct 8 19:57:01.936497 kernel: kvm_amd: Virtual GIF supported
Oct 8 19:57:01.957620 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:57:01.962275 kernel: EDAC MC: Ver: 3.0.0
Oct 8 19:57:02.000832 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Oct 8 19:57:02.013461 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Oct 8 19:57:02.023128 lvm[1288]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 8 19:57:02.059210 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Oct 8 19:57:02.060938 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 8 19:57:02.074647 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Oct 8 19:57:02.080686 lvm[1295]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 8 19:57:02.166256 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Oct 8 19:57:02.168041 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 8 19:57:02.169458 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 8 19:57:02.169524 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 8 19:57:02.170704 systemd[1]: Reached target machines.target - Containers.
Oct 8 19:57:02.173444 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Oct 8 19:57:02.185419 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 8 19:57:02.188058 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 8 19:57:02.189242 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:57:02.190189 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 8 19:57:02.194757 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Oct 8 19:57:02.198285 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 8 19:57:02.200794 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 8 19:57:02.214876 kernel: loop0: detected capacity change from 0 to 140768
Oct 8 19:57:02.215926 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 8 19:57:02.226333 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 8 19:57:02.227549 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Oct 8 19:57:02.240289 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 8 19:57:02.262317 kernel: loop1: detected capacity change from 0 to 211296
Oct 8 19:57:02.299297 kernel: loop2: detected capacity change from 0 to 142488
Oct 8 19:57:02.340311 kernel: loop3: detected capacity change from 0 to 140768
Oct 8 19:57:02.354305 kernel: loop4: detected capacity change from 0 to 211296
Oct 8 19:57:02.363290 kernel: loop5: detected capacity change from 0 to 142488
Oct 8 19:57:02.373715 (sd-merge)[1315]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Oct 8 19:57:02.374589 (sd-merge)[1315]: Merged extensions into '/usr'.
Oct 8 19:57:02.412636 systemd[1]: Reloading requested from client PID 1303 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 8 19:57:02.412654 systemd[1]: Reloading...
Oct 8 19:57:02.508295 zram_generator::config[1344]: No configuration found.
Oct 8 19:57:02.572412 ldconfig[1299]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 8 19:57:02.642250 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:57:02.714524 systemd[1]: Reloading finished in 301 ms.
Oct 8 19:57:02.734357 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 8 19:57:02.736101 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 8 19:57:02.747580 systemd[1]: Starting ensure-sysext.service...
Oct 8 19:57:02.750232 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 8 19:57:02.757500 systemd[1]: Reloading requested from client PID 1387 ('systemctl') (unit ensure-sysext.service)...
Oct 8 19:57:02.757525 systemd[1]: Reloading...
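Aside (not from the log): the (sd-merge) lines above show systemd-sysext attaching the 'containerd-flatcar', 'docker-flatcar', and 'kubernetes' extension images as loop-mounted squashfs overlays on /usr, which explains the preceding loop0-loop5 capacity changes. Each such image must carry an extension-release file whose fields match the host OS; a minimal sketch with illustrative values (the real files in these images are not shown in the log):

```ini
# /usr/lib/extension-release.d/extension-release.kubernetes (illustrative)
# ID must match the host's /etc/os-release ID for the merge to be allowed.
ID=flatcar
SYSEXT_LEVEL=1.0
```

After merging, systemd reloads so that units shipped inside the extensions become visible, which is the "Reloading requested from client PID 1303 ('systemd-sysext')" entry above.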
Oct 8 19:57:02.781851 systemd-tmpfiles[1395]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 8 19:57:02.782283 systemd-tmpfiles[1395]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 8 19:57:02.783351 systemd-tmpfiles[1395]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 8 19:57:02.783660 systemd-tmpfiles[1395]: ACLs are not supported, ignoring.
Oct 8 19:57:02.783743 systemd-tmpfiles[1395]: ACLs are not supported, ignoring.
Oct 8 19:57:02.787051 systemd-tmpfiles[1395]: Detected autofs mount point /boot during canonicalization of boot.
Oct 8 19:57:02.787065 systemd-tmpfiles[1395]: Skipping /boot
Oct 8 19:57:02.800379 systemd-networkd[1246]: eth0: Gained IPv6LL
Oct 8 19:57:02.804201 systemd-tmpfiles[1395]: Detected autofs mount point /boot during canonicalization of boot.
Oct 8 19:57:02.804215 systemd-tmpfiles[1395]: Skipping /boot
Oct 8 19:57:02.882908 zram_generator::config[1425]: No configuration found.
Oct 8 19:57:03.009319 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:57:03.079647 systemd[1]: Reloading finished in 321 ms.
Oct 8 19:57:03.101942 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Oct 8 19:57:03.115146 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 8 19:57:03.125871 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 8 19:57:03.129568 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 8 19:57:03.132750 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 8 19:57:03.137142 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 8 19:57:03.142326 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 8 19:57:03.148400 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 19:57:03.148605 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:57:03.151310 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:57:03.157128 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:57:03.165125 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 19:57:03.168449 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:57:03.168618 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 19:57:03.170099 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:57:03.170447 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:57:03.172648 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 19:57:03.172881 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 19:57:03.183706 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 8 19:57:03.186894 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 19:57:03.190816 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 19:57:03.193055 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 8 19:57:03.199544 augenrules[1507]: No rules
Oct 8 19:57:03.200796 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 8 19:57:03.205781 systemd[1]: Finished ensure-sysext.service.
Oct 8 19:57:03.207945 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 19:57:03.208224 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:57:03.215537 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:57:03.218425 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 8 19:57:03.223438 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:57:03.226547 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 19:57:03.228073 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:57:03.235721 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 8 19:57:03.241466 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 8 19:57:03.242758 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 19:57:03.243953 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 8 19:57:03.246018 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:57:03.246334 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:57:03.246610 systemd-resolved[1475]: Positive Trust Anchors:
Oct 8 19:57:03.246622 systemd-resolved[1475]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 8 19:57:03.246660 systemd-resolved[1475]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 8 19:57:03.248634 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 8 19:57:03.248893 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 8 19:57:03.250561 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 19:57:03.250793 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 19:57:03.251558 systemd-resolved[1475]: Defaulting to hostname 'linux'.
Oct 8 19:57:03.252535 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 19:57:03.252776 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 19:57:03.254134 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 8 19:57:03.261038 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 8 19:57:03.263970 systemd[1]: Reached target network.target - Network.
Oct 8 19:57:03.265348 systemd[1]: Reached target network-online.target - Network is Online.
Oct 8 19:57:03.266587 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 8 19:57:03.268020 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 19:57:03.268114 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 19:57:03.268147 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 8 19:57:03.336536 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 8 19:57:03.338365 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 8 19:57:03.338656 systemd-timesyncd[1525]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Oct 8 19:57:03.338706 systemd-timesyncd[1525]: Initial clock synchronization to Tue 2024-10-08 19:57:03.712509 UTC.
Oct 8 19:57:03.339792 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 8 19:57:03.341140 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 8 19:57:03.342516 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 8 19:57:03.343825 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 8 19:57:03.343870 systemd[1]: Reached target paths.target - Path Units.
Oct 8 19:57:03.344807 systemd[1]: Reached target time-set.target - System Time Set.
Oct 8 19:57:03.346229 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 8 19:57:03.347571 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 8 19:57:03.348828 systemd[1]: Reached target timers.target - Timer Units.
Oct 8 19:57:03.350433 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 8 19:57:03.353808 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 8 19:57:03.356330 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 8 19:57:03.366764 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 8 19:57:03.367944 systemd[1]: Reached target sockets.target - Socket Units.
Oct 8 19:57:03.368962 systemd[1]: Reached target basic.target - Basic System.
Oct 8 19:57:03.370183 systemd[1]: System is tainted: cgroupsv1
Oct 8 19:57:03.370219 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 8 19:57:03.370241 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 8 19:57:03.371520 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 8 19:57:03.373924 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Oct 8 19:57:03.376282 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 8 19:57:03.381257 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 8 19:57:03.386578 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 8 19:57:03.387835 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 8 19:57:03.389370 jq[1544]: false
Oct 8 19:57:03.391375 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:57:03.396813 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 8 19:57:03.402439 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Oct 8 19:57:03.406338 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 8 19:57:03.411962 extend-filesystems[1547]: Found loop3
Oct 8 19:57:03.413587 extend-filesystems[1547]: Found loop4
Oct 8 19:57:03.413587 extend-filesystems[1547]: Found loop5
Oct 8 19:57:03.413587 extend-filesystems[1547]: Found sr0
Oct 8 19:57:03.413587 extend-filesystems[1547]: Found vda
Oct 8 19:57:03.413587 extend-filesystems[1547]: Found vda1
Oct 8 19:57:03.413587 extend-filesystems[1547]: Found vda2
Oct 8 19:57:03.413587 extend-filesystems[1547]: Found vda3
Oct 8 19:57:03.413587 extend-filesystems[1547]: Found usr
Oct 8 19:57:03.413587 extend-filesystems[1547]: Found vda4
Oct 8 19:57:03.413517 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 8 19:57:03.413038 dbus-daemon[1543]: [system] SELinux support is enabled
Oct 8 19:57:03.426536 extend-filesystems[1547]: Found vda6
Oct 8 19:57:03.426536 extend-filesystems[1547]: Found vda7
Oct 8 19:57:03.426536 extend-filesystems[1547]: Found vda9
Oct 8 19:57:03.426536 extend-filesystems[1547]: Checking size of /dev/vda9
Oct 8 19:57:03.428841 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 8 19:57:03.446524 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 8 19:57:03.449762 extend-filesystems[1547]: Resized partition /dev/vda9
Oct 8 19:57:03.451662 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 8 19:57:03.456467 systemd[1]: Starting update-engine.service - Update Engine...
Oct 8 19:57:03.458895 extend-filesystems[1576]: resize2fs 1.47.1 (20-May-2024)
Oct 8 19:57:03.463361 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1256)
Oct 8 19:57:03.463396 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Oct 8 19:57:03.465221 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 8 19:57:03.467990 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 8 19:57:03.483485 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 8 19:57:03.483836 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 8 19:57:03.486749 systemd[1]: motdgen.service: Deactivated successfully.
Oct 8 19:57:03.487076 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 8 19:57:03.489362 jq[1581]: true
Oct 8 19:57:03.489947 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Oct 8 19:57:03.497166 update_engine[1578]: I20241008 19:57:03.497007 1578 main.cc:92] Flatcar Update Engine starting
Oct 8 19:57:03.497683 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 8 19:57:03.498061 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 8 19:57:03.505275 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Oct 8 19:57:03.511928 (ntainerd)[1592]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 8 19:57:03.545659 update_engine[1578]: I20241008 19:57:03.500403 1578 update_check_scheduler.cc:74] Next update check in 5m22s
Oct 8 19:57:03.519572 systemd[1]: coreos-metadata.service: Deactivated successfully.
Oct 8 19:57:03.521815 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Oct 8 19:57:03.546108 jq[1591]: true
Oct 8 19:57:03.550657 extend-filesystems[1576]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Oct 8 19:57:03.550657 extend-filesystems[1576]: old_desc_blocks = 1, new_desc_blocks = 1
Oct 8 19:57:03.550657 extend-filesystems[1576]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Oct 8 19:57:03.554513 extend-filesystems[1547]: Resized filesystem in /dev/vda9
Oct 8 19:57:03.550983 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 8 19:57:03.551342 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 8 19:57:03.553362 systemd-logind[1573]: Watching system buttons on /dev/input/event1 (Power Button)
Oct 8 19:57:03.553382 systemd-logind[1573]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Oct 8 19:57:03.553615 systemd-logind[1573]: New seat seat0.
Oct 8 19:57:03.558234 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 8 19:57:03.569059 tar[1587]: linux-amd64/helm
Oct 8 19:57:03.574631 systemd[1]: Started update-engine.service - Update Engine.
Oct 8 19:57:03.578190 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Oct 8 19:57:03.580153 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 8 19:57:03.580358 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 8 19:57:03.583720 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 8 19:57:03.583886 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 8 19:57:03.586443 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 8 19:57:03.594937 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 8 19:57:03.617337 bash[1627]: Updated "/home/core/.ssh/authorized_keys"
Oct 8 19:57:03.615803 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 8 19:57:03.628915 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Oct 8 19:57:03.666552 locksmithd[1628]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 8 19:57:03.811078 containerd[1592]: time="2024-10-08T19:57:03.810937252Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Oct 8 19:57:03.840370 containerd[1592]: time="2024-10-08T19:57:03.839105587Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Oct 8 19:57:03.840499 sshd_keygen[1580]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 8 19:57:03.841232 containerd[1592]: time="2024-10-08T19:57:03.841184506Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Oct 8 19:57:03.841289 containerd[1592]: time="2024-10-08T19:57:03.841231013Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Oct 8 19:57:03.841289 containerd[1592]: time="2024-10-08T19:57:03.841253996Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Oct 8 19:57:03.841572 containerd[1592]: time="2024-10-08T19:57:03.841545804Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Oct 8 19:57:03.841609 containerd[1592]: time="2024-10-08T19:57:03.841573275Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Oct 8 19:57:03.841668 containerd[1592]: time="2024-10-08T19:57:03.841641042Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Oct 8 19:57:03.841668 containerd[1592]: time="2024-10-08T19:57:03.841663825Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Oct 8 19:57:03.842300 containerd[1592]: time="2024-10-08T19:57:03.841943940Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 8 19:57:03.842300 containerd[1592]: time="2024-10-08T19:57:03.841966964Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Oct 8 19:57:03.842300 containerd[1592]: time="2024-10-08T19:57:03.841980118Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Oct 8 19:57:03.842300 containerd[1592]: time="2024-10-08T19:57:03.841989836Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Oct 8 19:57:03.842300 containerd[1592]: time="2024-10-08T19:57:03.842097358Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Oct 8 19:57:03.843112 containerd[1592]: time="2024-10-08T19:57:03.842611242Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Oct 8 19:57:03.843112 containerd[1592]: time="2024-10-08T19:57:03.842795097Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 8 19:57:03.843112 containerd[1592]: time="2024-10-08T19:57:03.842809694Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Oct 8 19:57:03.843112 containerd[1592]: time="2024-10-08T19:57:03.842902889Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Oct 8 19:57:03.843112 containerd[1592]: time="2024-10-08T19:57:03.842957321Z" level=info msg="metadata content store policy set" policy=shared
Oct 8 19:57:03.853278 containerd[1592]: time="2024-10-08T19:57:03.853207874Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Oct 8 19:57:03.853396 containerd[1592]: time="2024-10-08T19:57:03.853300368Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Oct 8 19:57:03.853396 containerd[1592]: time="2024-10-08T19:57:03.853324683Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Oct 8 19:57:03.853396 containerd[1592]: time="2024-10-08T19:57:03.853343358Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Oct 8 19:57:03.853396 containerd[1592]: time="2024-10-08T19:57:03.853360140Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Oct 8 19:57:03.853957 containerd[1592]: time="2024-10-08T19:57:03.853554564Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Oct 8 19:57:03.853988 containerd[1592]: time="2024-10-08T19:57:03.853962720Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Oct 8 19:57:03.854115 containerd[1592]: time="2024-10-08T19:57:03.854091902Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Oct 8 19:57:03.854143 containerd[1592]: time="2024-10-08T19:57:03.854114805Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Oct 8 19:57:03.854143 containerd[1592]: time="2024-10-08T19:57:03.854127589Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Oct 8 19:57:03.854143 containerd[1592]: time="2024-10-08T19:57:03.854141335Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Oct 8 19:57:03.854195 containerd[1592]: time="2024-10-08T19:57:03.854154890Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Oct 8 19:57:03.854195 containerd[1592]: time="2024-10-08T19:57:03.854166412Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Oct 8 19:57:03.854195 containerd[1592]: time="2024-10-08T19:57:03.854178825Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Oct 8 19:57:03.854195 containerd[1592]: time="2024-10-08T19:57:03.854191529Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Oct 8 19:57:03.854280 containerd[1592]: time="2024-10-08T19:57:03.854204844Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Oct 8 19:57:03.854280 containerd[1592]: time="2024-10-08T19:57:03.854216636Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Oct 8 19:57:03.854280 containerd[1592]: time="2024-10-08T19:57:03.854227336Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Oct 8 19:57:03.854280 containerd[1592]: time="2024-10-08T19:57:03.854246402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Oct 8 19:57:03.854280 containerd[1592]: time="2024-10-08T19:57:03.854258745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Oct 8 19:57:03.854377 containerd[1592]: time="2024-10-08T19:57:03.854287188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Oct 8 19:57:03.854377 containerd[1592]: time="2024-10-08T19:57:03.854299722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Oct 8 19:57:03.854377 containerd[1592]: time="2024-10-08T19:57:03.854311424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Oct 8 19:57:03.854377 containerd[1592]: time="2024-10-08T19:57:03.854323336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Oct 8 19:57:03.854377 containerd[1592]: time="2024-10-08T19:57:03.854334878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Oct 8 19:57:03.854377 containerd[1592]: time="2024-10-08T19:57:03.854346790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Oct 8 19:57:03.854377 containerd[1592]: time="2024-10-08T19:57:03.854360836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Oct 8 19:57:03.854377 containerd[1592]: time="2024-10-08T19:57:03.854378329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Oct 8 19:57:03.854530 containerd[1592]: time="2024-10-08T19:57:03.854391444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Oct 8 19:57:03.854530 containerd[1592]: time="2024-10-08T19:57:03.854402595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Oct 8 19:57:03.854530 containerd[1592]: time="2024-10-08T19:57:03.854413385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Oct 8 19:57:03.854530 containerd[1592]: time="2024-10-08T19:57:03.854434725Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Oct 8 19:57:03.854530 containerd[1592]: time="2024-10-08T19:57:03.854454222Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Oct 8 19:57:03.854530 containerd[1592]: time="2024-10-08T19:57:03.854465833Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Oct 8 19:57:03.854530 containerd[1592]: time="2024-10-08T19:57:03.854475892Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Oct 8 19:57:03.854530 containerd[1592]: time="2024-10-08T19:57:03.854530755Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Oct 8 19:57:03.854679 containerd[1592]: time="2024-10-08T19:57:03.854548068Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Oct 8 19:57:03.854679 containerd[1592]: time="2024-10-08T19:57:03.854559309Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Oct 8 19:57:03.854679 containerd[1592]: time="2024-10-08T19:57:03.854570971Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Oct 8 19:57:03.854679 containerd[1592]: time="2024-10-08T19:57:03.854579907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Oct 8 19:57:03.854679 containerd[1592]: time="2024-10-08T19:57:03.854593122Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Oct 8 19:57:03.854679 containerd[1592]: time="2024-10-08T19:57:03.854602339Z" level=info msg="NRI interface is disabled by configuration."
Oct 8 19:57:03.854679 containerd[1592]: time="2024-10-08T19:57:03.854611737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Oct 8 19:57:03.854999 containerd[1592]: time="2024-10-08T19:57:03.854868529Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Oct 8 19:57:03.854999 containerd[1592]: time="2024-10-08T19:57:03.854927790Z" level=info msg="Connect containerd service"
Oct 8 19:57:03.854999 containerd[1592]: time="2024-10-08T19:57:03.854965280Z" level=info msg="using legacy CRI server"
Oct 8 19:57:03.854999 containerd[1592]: time="2024-10-08T19:57:03.854972023Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Oct 8 19:57:03.855185 containerd[1592]: time="2024-10-08T19:57:03.855051241Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Oct 8 19:57:03.857283 containerd[1592]: time="2024-10-08T19:57:03.855764158Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 8 19:57:03.857283 containerd[1592]: time="2024-10-08T19:57:03.855893751Z" level=info msg="Start subscribing containerd event"
Oct 8 19:57:03.857283 containerd[1592]: time="2024-10-08T19:57:03.855933967Z" level=info msg="Start recovering state"
Oct 8 19:57:03.857283 containerd[1592]: time="2024-10-08T19:57:03.856035417Z" level=info msg="Start event monitor"
Oct 8 19:57:03.857283 containerd[1592]: time="2024-10-08T19:57:03.856062798Z" level=info msg="Start snapshots syncer"
Oct 8 19:57:03.857283 containerd[1592]: time="2024-10-08T19:57:03.856071074Z" level=info msg="Start cni network conf syncer for default"
Oct 8 19:57:03.857283 containerd[1592]: time="2024-10-08T19:57:03.856084008Z" level=info msg="Start streaming server"
Oct 8 19:57:03.857283 containerd[1592]: time="2024-10-08T19:57:03.856573787Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Oct 8 19:57:03.857283 containerd[1592]: time="2024-10-08T19:57:03.856665098Z" level=info msg=serving... address=/run/containerd/containerd.sock
Oct 8 19:57:03.857689 systemd[1]: Started containerd.service - containerd container runtime.
Oct 8 19:57:03.861223 containerd[1592]: time="2024-10-08T19:57:03.857832217Z" level=info msg="containerd successfully booted in 0.048767s"
Oct 8 19:57:03.872850 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Oct 8 19:57:03.884606 systemd[1]: Starting issuegen.service - Generate /run/issue...
Oct 8 19:57:03.893877 systemd[1]: issuegen.service: Deactivated successfully.
Oct 8 19:57:03.894438 systemd[1]: Finished issuegen.service - Generate /run/issue.
Oct 8 19:57:03.902591 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Oct 8 19:57:03.917511 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Oct 8 19:57:03.931537 systemd[1]: Started getty@tty1.service - Getty on tty1.
Oct 8 19:57:03.934341 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Oct 8 19:57:03.936121 systemd[1]: Reached target getty.target - Login Prompts.
Oct 8 19:57:04.075588 tar[1587]: linux-amd64/LICENSE
Oct 8 19:57:04.075588 tar[1587]: linux-amd64/README.md
Oct 8 19:57:04.092361 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Oct 8 19:57:04.292955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:57:04.294934 systemd[1]: Reached target multi-user.target - Multi-User System.
Oct 8 19:57:04.296328 systemd[1]: Startup finished in 8.904s (kernel) + 4.561s (userspace) = 13.466s.
Oct 8 19:57:04.325928 (kubelet)[1681]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 19:57:05.274883 kubelet[1681]: E1008 19:57:05.274766 1681 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 19:57:05.280017 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 19:57:05.280398 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 19:57:08.224519 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Oct 8 19:57:08.231556 systemd[1]: Started sshd@0-10.0.0.70:22-10.0.0.1:45186.service - OpenSSH per-connection server daemon (10.0.0.1:45186).
Oct 8 19:57:08.275987 sshd[1696]: Accepted publickey for core from 10.0.0.1 port 45186 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI
Oct 8 19:57:08.278097 sshd[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:57:08.287372 systemd-logind[1573]: New session 1 of user core.
Oct 8 19:57:08.288487 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Oct 8 19:57:08.300506 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Oct 8 19:57:08.313666 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Oct 8 19:57:08.325652 systemd[1]: Starting user@500.service - User Manager for UID 500...
Oct 8 19:57:08.329485 (systemd)[1701]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:57:08.456924 systemd[1701]: Queued start job for default target default.target.
Oct 8 19:57:08.457357 systemd[1701]: Created slice app.slice - User Application Slice.
Oct 8 19:57:08.457376 systemd[1701]: Reached target paths.target - Paths.
Oct 8 19:57:08.457389 systemd[1701]: Reached target timers.target - Timers.
Oct 8 19:57:08.469391 systemd[1701]: Starting dbus.socket - D-Bus User Message Bus Socket...
Oct 8 19:57:08.477407 systemd[1701]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Oct 8 19:57:08.477506 systemd[1701]: Reached target sockets.target - Sockets.
Oct 8 19:57:08.477522 systemd[1701]: Reached target basic.target - Basic System.
Oct 8 19:57:08.477596 systemd[1701]: Reached target default.target - Main User Target.
Oct 8 19:57:08.477636 systemd[1701]: Startup finished in 139ms.
Oct 8 19:57:08.478108 systemd[1]: Started user@500.service - User Manager for UID 500.
Oct 8 19:57:08.479604 systemd[1]: Started session-1.scope - Session 1 of User core.
Oct 8 19:57:08.538582 systemd[1]: Started sshd@1-10.0.0.70:22-10.0.0.1:45198.service - OpenSSH per-connection server daemon (10.0.0.1:45198).
Oct 8 19:57:08.575582 sshd[1714]: Accepted publickey for core from 10.0.0.1 port 45198 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI
Oct 8 19:57:08.577311 sshd[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:57:08.581793 systemd-logind[1573]: New session 2 of user core.
Oct 8 19:57:08.593658 systemd[1]: Started session-2.scope - Session 2 of User core.
Oct 8 19:57:08.649646 sshd[1714]: pam_unix(sshd:session): session closed for user core
Oct 8 19:57:08.657576 systemd[1]: Started sshd@2-10.0.0.70:22-10.0.0.1:45200.service - OpenSSH per-connection server daemon (10.0.0.1:45200).
Oct 8 19:57:08.658125 systemd[1]: sshd@1-10.0.0.70:22-10.0.0.1:45198.service: Deactivated successfully.
Oct 8 19:57:08.660800 systemd-logind[1573]: Session 2 logged out. Waiting for processes to exit.
Oct 8 19:57:08.660845 systemd[1]: session-2.scope: Deactivated successfully.
Oct 8 19:57:08.662693 systemd-logind[1573]: Removed session 2.
Oct 8 19:57:08.688432 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 45200 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI
Oct 8 19:57:08.689893 sshd[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:57:08.694455 systemd-logind[1573]: New session 3 of user core.
Oct 8 19:57:08.704579 systemd[1]: Started session-3.scope - Session 3 of User core.
Oct 8 19:57:08.756283 sshd[1719]: pam_unix(sshd:session): session closed for user core
Oct 8 19:57:08.766526 systemd[1]: Started sshd@3-10.0.0.70:22-10.0.0.1:45206.service - OpenSSH per-connection server daemon (10.0.0.1:45206).
Oct 8 19:57:08.767008 systemd[1]: sshd@2-10.0.0.70:22-10.0.0.1:45200.service: Deactivated successfully.
Oct 8 19:57:08.769488 systemd-logind[1573]: Session 3 logged out. Waiting for processes to exit.
Oct 8 19:57:08.769653 systemd[1]: session-3.scope: Deactivated successfully.
Oct 8 19:57:08.771444 systemd-logind[1573]: Removed session 3.
Oct 8 19:57:08.796891 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 45206 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI
Oct 8 19:57:08.798275 sshd[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:57:08.802603 systemd-logind[1573]: New session 4 of user core.
Oct 8 19:57:08.812598 systemd[1]: Started session-4.scope - Session 4 of User core.
Oct 8 19:57:08.867273 sshd[1727]: pam_unix(sshd:session): session closed for user core
Oct 8 19:57:08.878555 systemd[1]: Started sshd@4-10.0.0.70:22-10.0.0.1:45216.service - OpenSSH per-connection server daemon (10.0.0.1:45216).
Oct 8 19:57:08.879357 systemd[1]: sshd@3-10.0.0.70:22-10.0.0.1:45206.service: Deactivated successfully.
Oct 8 19:57:08.881188 systemd[1]: session-4.scope: Deactivated successfully.
Oct 8 19:57:08.881890 systemd-logind[1573]: Session 4 logged out. Waiting for processes to exit.
Oct 8 19:57:08.883124 systemd-logind[1573]: Removed session 4.
Oct 8 19:57:08.908774 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 45216 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI
Oct 8 19:57:08.910335 sshd[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:57:08.914595 systemd-logind[1573]: New session 5 of user core.
Oct 8 19:57:08.925574 systemd[1]: Started session-5.scope - Session 5 of User core.
Oct 8 19:57:08.985059 sudo[1742]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Oct 8 19:57:08.985418 sudo[1742]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 8 19:57:09.008228 sudo[1742]: pam_unix(sudo:session): session closed for user root
Oct 8 19:57:09.010017 sshd[1736]: pam_unix(sshd:session): session closed for user core
Oct 8 19:57:09.023551 systemd[1]: Started sshd@5-10.0.0.70:22-10.0.0.1:45220.service - OpenSSH per-connection server daemon (10.0.0.1:45220).
Oct 8 19:57:09.024023 systemd[1]: sshd@4-10.0.0.70:22-10.0.0.1:45216.service: Deactivated successfully.
Oct 8 19:57:09.026364 systemd-logind[1573]: Session 5 logged out. Waiting for processes to exit.
Oct 8 19:57:09.027052 systemd[1]: session-5.scope: Deactivated successfully.
Oct 8 19:57:09.028578 systemd-logind[1573]: Removed session 5.
Oct 8 19:57:09.055391 sshd[1744]: Accepted publickey for core from 10.0.0.1 port 45220 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI
Oct 8 19:57:09.057095 sshd[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:57:09.060899 systemd-logind[1573]: New session 6 of user core.
Oct 8 19:57:09.072539 systemd[1]: Started session-6.scope - Session 6 of User core.
Oct 8 19:57:09.128310 sudo[1752]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Oct 8 19:57:09.128654 sudo[1752]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 8 19:57:09.132139 sudo[1752]: pam_unix(sudo:session): session closed for user root
Oct 8 19:57:09.138931 sudo[1751]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Oct 8 19:57:09.139333 sudo[1751]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 8 19:57:09.161569 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Oct 8 19:57:09.163430 auditctl[1755]: No rules
Oct 8 19:57:09.164752 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 8 19:57:09.165107 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Oct 8 19:57:09.167240 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 8 19:57:09.201635 augenrules[1774]: No rules
Oct 8 19:57:09.203712 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 8 19:57:09.205181 sudo[1751]: pam_unix(sudo:session): session closed for user root
Oct 8 19:57:09.207858 sshd[1744]: pam_unix(sshd:session): session closed for user core
Oct 8 19:57:09.216652 systemd[1]: Started sshd@6-10.0.0.70:22-10.0.0.1:45224.service - OpenSSH per-connection server daemon (10.0.0.1:45224).
Oct 8 19:57:09.217186 systemd[1]: sshd@5-10.0.0.70:22-10.0.0.1:45220.service: Deactivated successfully.
Oct 8 19:57:09.219413 systemd-logind[1573]: Session 6 logged out. Waiting for processes to exit.
Oct 8 19:57:09.221469 systemd[1]: session-6.scope: Deactivated successfully.
Oct 8 19:57:09.222155 systemd-logind[1573]: Removed session 6.
Oct 8 19:57:09.249878 sshd[1780]: Accepted publickey for core from 10.0.0.1 port 45224 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI
Oct 8 19:57:09.251432 sshd[1780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:57:09.255537 systemd-logind[1573]: New session 7 of user core.
Oct 8 19:57:09.265563 systemd[1]: Started session-7.scope - Session 7 of User core.
Oct 8 19:57:09.322840 sudo[1787]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct 8 19:57:09.323429 sudo[1787]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 8 19:57:09.921524 systemd[1]: Starting docker.service - Docker Application Container Engine...
Oct 8 19:57:09.921750 (dockerd)[1805]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Oct 8 19:57:10.655904 dockerd[1805]: time="2024-10-08T19:57:10.655816756Z" level=info msg="Starting up"
Oct 8 19:57:12.252195 dockerd[1805]: time="2024-10-08T19:57:12.252128744Z" level=info msg="Loading containers: start."
Oct 8 19:57:12.413300 kernel: Initializing XFRM netlink socket
Oct 8 19:57:12.501973 systemd-networkd[1246]: docker0: Link UP
Oct 8 19:57:12.566189 dockerd[1805]: time="2024-10-08T19:57:12.566051780Z" level=info msg="Loading containers: done."
Oct 8 19:57:12.616517 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck452819421-merged.mount: Deactivated successfully.
Oct 8 19:57:12.697554 dockerd[1805]: time="2024-10-08T19:57:12.697447523Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Oct 8 19:57:12.697744 dockerd[1805]: time="2024-10-08T19:57:12.697585345Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Oct 8 19:57:12.697771 dockerd[1805]: time="2024-10-08T19:57:12.697748638Z" level=info msg="Daemon has completed initialization"
Oct 8 19:57:12.803620 dockerd[1805]: time="2024-10-08T19:57:12.803501096Z" level=info msg="API listen on /run/docker.sock"
Oct 8 19:57:12.803772 systemd[1]: Started docker.service - Docker Application Container Engine.
Oct 8 19:57:13.619813 containerd[1592]: time="2024-10-08T19:57:13.619534583Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\""
Oct 8 19:57:14.298363 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2684359670.mount: Deactivated successfully.
Oct 8 19:57:15.530618 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Oct 8 19:57:15.559608 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:57:15.728680 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:57:15.734851 (kubelet)[2012]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 19:57:15.931212 kubelet[2012]: E1008 19:57:15.931023 2012 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 19:57:15.939103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 19:57:15.939571 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 19:57:16.859983 containerd[1592]: time="2024-10-08T19:57:16.859612201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:57:16.860785 containerd[1592]: time="2024-10-08T19:57:16.860698621Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.9: active requests=0, bytes read=35213841"
Oct 8 19:57:16.861925 containerd[1592]: time="2024-10-08T19:57:16.861881472Z" level=info msg="ImageCreate event name:\"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:57:16.865527 containerd[1592]: time="2024-10-08T19:57:16.865478058Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:57:16.866639 containerd[1592]: time="2024-10-08T19:57:16.866592422Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.9\" with image id \"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\", size \"35210641\" in 3.247012042s"
Oct 8 19:57:16.866639 containerd[1592]: time="2024-10-08T19:57:16.866635858Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\" returns image reference \"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\""
Oct 8 19:57:16.908740 containerd[1592]: time="2024-10-08T19:57:16.908701082Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\""
Oct 8 19:57:19.899448 containerd[1592]: time="2024-10-08T19:57:19.899368291Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:57:19.900987 containerd[1592]: time="2024-10-08T19:57:19.900938276Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.9: active requests=0, bytes read=32208673"
Oct 8 19:57:19.902423 containerd[1592]: time="2024-10-08T19:57:19.902379374Z" level=info msg="ImageCreate event name:\"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:57:19.906065 containerd[1592]: time="2024-10-08T19:57:19.905991142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:57:19.907498 containerd[1592]: time="2024-10-08T19:57:19.907383845Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.9\" with image id \"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\", size \"33739229\" in 2.998638676s"
Oct 8 19:57:19.907546 containerd[1592]: time="2024-10-08T19:57:19.907501694Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\" returns image reference \"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\""
Oct 8 19:57:19.951283 containerd[1592]: time="2024-10-08T19:57:19.951221122Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\""
Oct 8 19:57:22.242749 containerd[1592]: time="2024-10-08T19:57:22.242638237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:57:22.244362 containerd[1592]: time="2024-10-08T19:57:22.244321534Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.9: active requests=0, bytes read=17320456"
Oct 8 19:57:22.245968 containerd[1592]: time="2024-10-08T19:57:22.245914582Z" level=info msg="ImageCreate event name:\"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:57:22.249504 containerd[1592]: time="2024-10-08T19:57:22.249433438Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:57:22.250914 containerd[1592]: time="2024-10-08T19:57:22.250867842Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.9\" with image id \"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\", size \"18851030\" in 2.299607576s"
Oct 8 19:57:22.250984 containerd[1592]: time="2024-10-08T19:57:22.250912594Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\" returns image reference \"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\""
Oct 8 19:57:22.297286 containerd[1592]: time="2024-10-08T19:57:22.297158786Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\""
Oct 8 19:57:25.084544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4069770290.mount: Deactivated successfully.
Oct 8 19:57:26.190174 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Oct 8 19:57:26.207566 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:57:26.352878 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:57:26.358054 (kubelet)[2085]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 19:57:26.502885 kubelet[2085]: E1008 19:57:26.502671 2085 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 19:57:26.507339 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 19:57:26.507645 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 19:57:26.764356 containerd[1592]: time="2024-10-08T19:57:26.764150880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:57:26.765855 containerd[1592]: time="2024-10-08T19:57:26.765796425Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.9: active requests=0, bytes read=28601750"
Oct 8 19:57:26.766957 containerd[1592]: time="2024-10-08T19:57:26.766912258Z" level=info msg="ImageCreate event name:\"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:57:26.769239 containerd[1592]: time="2024-10-08T19:57:26.769198674Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:57:26.769849 containerd[1592]: time="2024-10-08T19:57:26.769791508Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.9\" with image id \"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\", repo tag \"registry.k8s.io/kube-proxy:v1.29.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\", size \"28600769\" in 4.47257687s"
Oct 8 19:57:26.769849 containerd[1592]: time="2024-10-08T19:57:26.769838572Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\" returns image reference \"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\""
Oct 8 19:57:26.800829 containerd[1592]: time="2024-10-08T19:57:26.800764110Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Oct 8 19:57:27.562421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2196217824.mount: Deactivated successfully.
Oct 8 19:57:29.143506 containerd[1592]: time="2024-10-08T19:57:29.143411627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:57:29.176461 containerd[1592]: time="2024-10-08T19:57:29.176361640Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Oct 8 19:57:29.203771 containerd[1592]: time="2024-10-08T19:57:29.203673786Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:57:29.237669 containerd[1592]: time="2024-10-08T19:57:29.237623235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:57:29.239247 containerd[1592]: time="2024-10-08T19:57:29.239200321Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.438380506s"
Oct 8 19:57:29.239247 containerd[1592]: time="2024-10-08T19:57:29.239234080Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Oct 8 19:57:29.264223 containerd[1592]: time="2024-10-08T19:57:29.264174116Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Oct 8 19:57:32.227895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4232201906.mount: Deactivated successfully.
Oct 8 19:57:32.239612 containerd[1592]: time="2024-10-08T19:57:32.239541130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:57:32.240721 containerd[1592]: time="2024-10-08T19:57:32.240643168Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Oct 8 19:57:32.242206 containerd[1592]: time="2024-10-08T19:57:32.242120365Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:57:32.261449 containerd[1592]: time="2024-10-08T19:57:32.261364673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:57:32.262610 containerd[1592]: time="2024-10-08T19:57:32.262561052Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 2.998351669s"
Oct 8 19:57:32.262675 containerd[1592]: time="2024-10-08T19:57:32.262614311Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Oct 8 19:57:32.288039 containerd[1592]: time="2024-10-08T19:57:32.287986893Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Oct 8 19:57:35.296693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount363680269.mount: Deactivated successfully.
Oct 8 19:57:36.757973 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Oct 8 19:57:36.770797 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:57:36.945499 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:57:36.949123 (kubelet)[2219]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 19:57:38.269415 kubelet[2219]: E1008 19:57:38.269342 2219 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 19:57:38.273942 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 19:57:38.274287 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 19:57:39.392164 containerd[1592]: time="2024-10-08T19:57:39.392087913Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:57:39.393250 containerd[1592]: time="2024-10-08T19:57:39.393177421Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625"
Oct 8 19:57:39.395026 containerd[1592]: time="2024-10-08T19:57:39.394970494Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:57:39.398409 containerd[1592]: time="2024-10-08T19:57:39.398338644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:57:39.399868 containerd[1592]: time="2024-10-08T19:57:39.399806428Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 7.111772878s"
Oct 8 19:57:39.399868 containerd[1592]: time="2024-10-08T19:57:39.399855793Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Oct 8 19:57:42.244935 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:57:42.257641 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:57:42.280092 systemd[1]: Reloading requested from client PID 2314 ('systemctl') (unit session-7.scope)...
Oct 8 19:57:42.280109 systemd[1]: Reloading...
Oct 8 19:57:42.366306 zram_generator::config[2356]: No configuration found.
Oct 8 19:57:43.063605 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:57:43.138275 systemd[1]: Reloading finished in 857 ms.
Oct 8 19:57:43.189297 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Oct 8 19:57:43.189437 systemd[1]: kubelet.service: Failed with result 'signal'.
Oct 8 19:57:43.189885 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:57:43.191786 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:57:43.356106 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:57:43.362820 (kubelet)[2414]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 8 19:57:43.449555 kubelet[2414]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 19:57:43.449555 kubelet[2414]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 8 19:57:43.449555 kubelet[2414]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 19:57:43.449555 kubelet[2414]: I1008 19:57:43.448494 2414 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 8 19:57:43.735303 kubelet[2414]: I1008 19:57:43.735128 2414 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Oct 8 19:57:43.735303 kubelet[2414]: I1008 19:57:43.735196 2414 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 8 19:57:43.735596 kubelet[2414]: I1008 19:57:43.735483 2414 server.go:919] "Client rotation is on, will bootstrap in background" Oct 8 19:57:43.758883 kubelet[2414]: E1008 19:57:43.758820 2414 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.70:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.70:6443: connect: connection refused Oct 8 19:57:43.761566 kubelet[2414]: I1008 19:57:43.761518 2414 dynamic_cafile_content.go:157] "Starting 
controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 19:57:43.778416 kubelet[2414]: I1008 19:57:43.778359 2414 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 8 19:57:43.780308 kubelet[2414]: I1008 19:57:43.780247 2414 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 8 19:57:43.780480 kubelet[2414]: I1008 19:57:43.780445 2414 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 8 19:57:43.780480 kubelet[2414]: I1008 19:57:43.780482 2414 topology_manager.go:138] "Creating 
topology manager with none policy" Oct 8 19:57:43.780635 kubelet[2414]: I1008 19:57:43.780492 2414 container_manager_linux.go:301] "Creating device plugin manager" Oct 8 19:57:43.780679 kubelet[2414]: I1008 19:57:43.780638 2414 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:57:43.780774 kubelet[2414]: I1008 19:57:43.780743 2414 kubelet.go:396] "Attempting to sync node with API server" Oct 8 19:57:43.780774 kubelet[2414]: I1008 19:57:43.780769 2414 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 8 19:57:43.780834 kubelet[2414]: I1008 19:57:43.780800 2414 kubelet.go:312] "Adding apiserver pod source" Oct 8 19:57:43.780834 kubelet[2414]: I1008 19:57:43.780818 2414 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 8 19:57:43.781441 kubelet[2414]: W1008 19:57:43.781395 2414 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Oct 8 19:57:43.781506 kubelet[2414]: E1008 19:57:43.781464 2414 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Oct 8 19:57:43.781668 kubelet[2414]: W1008 19:57:43.781599 2414 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.70:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Oct 8 19:57:43.781668 kubelet[2414]: E1008 19:57:43.781671 2414 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get 
"https://10.0.0.70:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Oct 8 19:57:43.782384 kubelet[2414]: I1008 19:57:43.782371 2414 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 8 19:57:43.786290 kubelet[2414]: I1008 19:57:43.785966 2414 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 8 19:57:43.787412 kubelet[2414]: W1008 19:57:43.787391 2414 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 8 19:57:43.807273 kubelet[2414]: I1008 19:57:43.807226 2414 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 8 19:57:43.808566 kubelet[2414]: I1008 19:57:43.808548 2414 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 8 19:57:43.808716 kubelet[2414]: I1008 19:57:43.808680 2414 server.go:1256] "Started kubelet" Oct 8 19:57:43.808958 kubelet[2414]: I1008 19:57:43.808933 2414 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 8 19:57:43.809761 kubelet[2414]: I1008 19:57:43.809738 2414 server.go:461] "Adding debug handlers to kubelet server" Oct 8 19:57:43.812126 kubelet[2414]: I1008 19:57:43.812097 2414 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 8 19:57:43.814751 kubelet[2414]: E1008 19:57:43.813889 2414 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:57:43.814751 kubelet[2414]: I1008 19:57:43.813936 2414 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 8 19:57:43.814751 kubelet[2414]: I1008 19:57:43.814039 2414 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 8 19:57:43.814751 kubelet[2414]: I1008 19:57:43.814152 2414 
reconciler_new.go:29] "Reconciler: start to sync state" Oct 8 19:57:43.814751 kubelet[2414]: E1008 19:57:43.814473 2414 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.70:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.70:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17fc928699ce8383 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-08 19:57:43.807148931 +0000 UTC m=+0.439264071,LastTimestamp:2024-10-08 19:57:43.807148931 +0000 UTC m=+0.439264071,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 8 19:57:43.814751 kubelet[2414]: W1008 19:57:43.814479 2414 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Oct 8 19:57:43.814751 kubelet[2414]: E1008 19:57:43.814526 2414 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Oct 8 19:57:43.815752 kubelet[2414]: E1008 19:57:43.815711 2414 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.70:6443: connect: connection refused" interval="200ms" Oct 8 19:57:43.816415 kubelet[2414]: I1008 19:57:43.816048 2414 factory.go:221] Registration of the 
systemd container factory successfully Oct 8 19:57:43.816415 kubelet[2414]: I1008 19:57:43.816126 2414 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 8 19:57:43.817358 kubelet[2414]: I1008 19:57:43.817332 2414 factory.go:221] Registration of the containerd container factory successfully Oct 8 19:57:43.817509 kubelet[2414]: E1008 19:57:43.817494 2414 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 8 19:57:43.833371 kubelet[2414]: I1008 19:57:43.833326 2414 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 8 19:57:43.834837 kubelet[2414]: I1008 19:57:43.834808 2414 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 8 19:57:43.834883 kubelet[2414]: I1008 19:57:43.834853 2414 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 8 19:57:43.834910 kubelet[2414]: I1008 19:57:43.834882 2414 kubelet.go:2329] "Starting kubelet main sync loop" Oct 8 19:57:43.834976 kubelet[2414]: E1008 19:57:43.834945 2414 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 8 19:57:43.861566 kubelet[2414]: W1008 19:57:43.861172 2414 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Oct 8 19:57:43.861566 kubelet[2414]: E1008 19:57:43.861456 2414 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
"https://10.0.0.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Oct 8 19:57:43.869511 kubelet[2414]: I1008 19:57:43.869478 2414 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 8 19:57:43.869511 kubelet[2414]: I1008 19:57:43.869495 2414 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 8 19:57:43.869511 kubelet[2414]: I1008 19:57:43.869510 2414 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:57:43.916197 kubelet[2414]: I1008 19:57:43.916159 2414 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:57:43.916621 kubelet[2414]: E1008 19:57:43.916601 2414 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.70:6443/api/v1/nodes\": dial tcp 10.0.0.70:6443: connect: connection refused" node="localhost" Oct 8 19:57:43.935819 kubelet[2414]: E1008 19:57:43.935759 2414 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 8 19:57:44.017384 kubelet[2414]: E1008 19:57:44.017210 2414 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.70:6443: connect: connection refused" interval="400ms" Oct 8 19:57:44.118488 kubelet[2414]: I1008 19:57:44.118449 2414 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:57:44.118775 kubelet[2414]: E1008 19:57:44.118753 2414 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.70:6443/api/v1/nodes\": dial tcp 10.0.0.70:6443: connect: connection refused" node="localhost" Oct 8 19:57:44.136006 kubelet[2414]: E1008 19:57:44.135929 2414 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 8 19:57:44.418490 kubelet[2414]: 
E1008 19:57:44.418357 2414 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.70:6443: connect: connection refused" interval="800ms" Oct 8 19:57:44.520941 kubelet[2414]: I1008 19:57:44.520909 2414 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:57:44.521402 kubelet[2414]: E1008 19:57:44.521248 2414 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.70:6443/api/v1/nodes\": dial tcp 10.0.0.70:6443: connect: connection refused" node="localhost" Oct 8 19:57:44.536431 kubelet[2414]: E1008 19:57:44.536379 2414 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 8 19:57:44.944849 kubelet[2414]: I1008 19:57:44.944782 2414 policy_none.go:49] "None policy: Start" Oct 8 19:57:44.945876 kubelet[2414]: I1008 19:57:44.945857 2414 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 8 19:57:44.946012 kubelet[2414]: I1008 19:57:44.945888 2414 state_mem.go:35] "Initializing new in-memory state store" Oct 8 19:57:44.963743 kubelet[2414]: I1008 19:57:44.963680 2414 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 19:57:44.973362 kubelet[2414]: I1008 19:57:44.964093 2414 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 19:57:45.008429 kubelet[2414]: W1008 19:57:45.008385 2414 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Oct 8 19:57:45.008720 kubelet[2414]: E1008 19:57:45.008634 2414 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to 
list *v1.RuntimeClass: Get "https://10.0.0.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Oct 8 19:57:45.009499 kubelet[2414]: E1008 19:57:45.009471 2414 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 8 19:57:45.028696 kubelet[2414]: W1008 19:57:45.028594 2414 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Oct 8 19:57:45.028696 kubelet[2414]: E1008 19:57:45.028675 2414 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Oct 8 19:57:45.131832 kubelet[2414]: W1008 19:57:45.131749 2414 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.70:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Oct 8 19:57:45.131832 kubelet[2414]: E1008 19:57:45.131814 2414 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.70:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Oct 8 19:57:45.219886 kubelet[2414]: E1008 19:57:45.219753 2414 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.70:6443: connect: connection refused" interval="1.6s" Oct 8 19:57:45.323226 kubelet[2414]: I1008 
19:57:45.323179 2414 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:57:45.323635 kubelet[2414]: E1008 19:57:45.323608 2414 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.70:6443/api/v1/nodes\": dial tcp 10.0.0.70:6443: connect: connection refused" node="localhost" Oct 8 19:57:45.336962 kubelet[2414]: I1008 19:57:45.336871 2414 topology_manager.go:215] "Topology Admit Handler" podUID="347f4080e7d500b752556e5738039e14" podNamespace="kube-system" podName="kube-apiserver-localhost" Oct 8 19:57:45.338662 kubelet[2414]: I1008 19:57:45.338549 2414 topology_manager.go:215] "Topology Admit Handler" podUID="b21621a72929ad4d87bc59a877761c7f" podNamespace="kube-system" podName="kube-controller-manager-localhost" Oct 8 19:57:45.340303 kubelet[2414]: I1008 19:57:45.339732 2414 topology_manager.go:215] "Topology Admit Handler" podUID="f13040d390753ac4a1fef67bb9676230" podNamespace="kube-system" podName="kube-scheduler-localhost" Oct 8 19:57:45.352767 kubelet[2414]: W1008 19:57:45.352696 2414 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Oct 8 19:57:45.352767 kubelet[2414]: E1008 19:57:45.352747 2414 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Oct 8 19:57:45.422403 kubelet[2414]: I1008 19:57:45.422321 2414 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/347f4080e7d500b752556e5738039e14-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"347f4080e7d500b752556e5738039e14\") " 
pod="kube-system/kube-apiserver-localhost" Oct 8 19:57:45.422403 kubelet[2414]: I1008 19:57:45.422387 2414 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/347f4080e7d500b752556e5738039e14-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"347f4080e7d500b752556e5738039e14\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:57:45.422403 kubelet[2414]: I1008 19:57:45.422411 2414 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:57:45.422689 kubelet[2414]: I1008 19:57:45.422456 2414 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:57:45.422689 kubelet[2414]: I1008 19:57:45.422519 2414 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/347f4080e7d500b752556e5738039e14-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"347f4080e7d500b752556e5738039e14\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:57:45.422689 kubelet[2414]: I1008 19:57:45.422561 2414 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " 
pod="kube-system/kube-controller-manager-localhost" Oct 8 19:57:45.422689 kubelet[2414]: I1008 19:57:45.422591 2414 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:57:45.422689 kubelet[2414]: I1008 19:57:45.422628 2414 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:57:45.422794 kubelet[2414]: I1008 19:57:45.422660 2414 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f13040d390753ac4a1fef67bb9676230-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f13040d390753ac4a1fef67bb9676230\") " pod="kube-system/kube-scheduler-localhost" Oct 8 19:57:45.644711 kubelet[2414]: E1008 19:57:45.644581 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:57:45.645589 containerd[1592]: time="2024-10-08T19:57:45.645538698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:347f4080e7d500b752556e5738039e14,Namespace:kube-system,Attempt:0,}" Oct 8 19:57:45.647780 kubelet[2414]: E1008 19:57:45.647724 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:57:45.648205 
kubelet[2414]: E1008 19:57:45.648060 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:57:45.648287 containerd[1592]: time="2024-10-08T19:57:45.648059678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f13040d390753ac4a1fef67bb9676230,Namespace:kube-system,Attempt:0,}" Oct 8 19:57:45.648401 containerd[1592]: time="2024-10-08T19:57:45.648367336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b21621a72929ad4d87bc59a877761c7f,Namespace:kube-system,Attempt:0,}" Oct 8 19:57:45.913225 kubelet[2414]: E1008 19:57:45.913091 2414 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.70:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.70:6443: connect: connection refused Oct 8 19:57:46.752403 kubelet[2414]: W1008 19:57:46.752310 2414 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Oct 8 19:57:46.752403 kubelet[2414]: E1008 19:57:46.752383 2414 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Oct 8 19:57:46.820808 kubelet[2414]: E1008 19:57:46.820761 2414 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 
10.0.0.70:6443: connect: connection refused" interval="3.2s" Oct 8 19:57:46.925657 kubelet[2414]: I1008 19:57:46.925612 2414 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:57:46.926080 kubelet[2414]: E1008 19:57:46.926046 2414 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.70:6443/api/v1/nodes\": dial tcp 10.0.0.70:6443: connect: connection refused" node="localhost" Oct 8 19:57:48.004094 kubelet[2414]: W1008 19:57:48.004010 2414 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.70:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Oct 8 19:57:48.004094 kubelet[2414]: E1008 19:57:48.004082 2414 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.70:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Oct 8 19:57:48.141581 kubelet[2414]: W1008 19:57:48.141480 2414 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Oct 8 19:57:48.141581 kubelet[2414]: E1008 19:57:48.141573 2414 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Oct 8 19:57:48.228374 kubelet[2414]: W1008 19:57:48.228288 2414 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 
10.0.0.70:6443: connect: connection refused Oct 8 19:57:48.228374 kubelet[2414]: E1008 19:57:48.228361 2414 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Oct 8 19:57:48.600341 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2166138452.mount: Deactivated successfully. Oct 8 19:57:48.606560 containerd[1592]: time="2024-10-08T19:57:48.606512755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:57:48.608452 containerd[1592]: time="2024-10-08T19:57:48.608389661Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 8 19:57:48.609505 containerd[1592]: time="2024-10-08T19:57:48.609435127Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:57:48.610400 containerd[1592]: time="2024-10-08T19:57:48.610364453Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:57:48.611504 containerd[1592]: time="2024-10-08T19:57:48.611453874Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:57:48.612470 containerd[1592]: time="2024-10-08T19:57:48.612418445Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 8 19:57:48.613541 
containerd[1592]: time="2024-10-08T19:57:48.613505199Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Oct 8 19:57:48.615327 containerd[1592]: time="2024-10-08T19:57:48.615290303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:57:48.618461 containerd[1592]: time="2024-10-08T19:57:48.618419482Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.97029395s" Oct 8 19:57:48.619823 containerd[1592]: time="2024-10-08T19:57:48.619785790Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.974168075s" Oct 8 19:57:48.621146 containerd[1592]: time="2024-10-08T19:57:48.621096092Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.972671072s" Oct 8 19:57:48.659213 kubelet[2414]: E1008 19:57:48.659171 2414 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.70:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.70:6443: connect: connection refused" 
event="&Event{ObjectMeta:{localhost.17fc928699ce8383 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-08 19:57:43.807148931 +0000 UTC m=+0.439264071,LastTimestamp:2024-10-08 19:57:43.807148931 +0000 UTC m=+0.439264071,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 8 19:57:48.761844 containerd[1592]: time="2024-10-08T19:57:48.761720520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:57:48.762382 containerd[1592]: time="2024-10-08T19:57:48.762115320Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:57:48.762382 containerd[1592]: time="2024-10-08T19:57:48.762159065Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:57:48.762382 containerd[1592]: time="2024-10-08T19:57:48.762176447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:57:48.762382 containerd[1592]: time="2024-10-08T19:57:48.762285008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:57:48.762382 containerd[1592]: time="2024-10-08T19:57:48.761979604Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:57:48.762382 containerd[1592]: time="2024-10-08T19:57:48.762027369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:57:48.762382 containerd[1592]: time="2024-10-08T19:57:48.762039789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:57:48.762382 containerd[1592]: time="2024-10-08T19:57:48.762160810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:57:48.763093 containerd[1592]: time="2024-10-08T19:57:48.762959802Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:57:48.763093 containerd[1592]: time="2024-10-08T19:57:48.763023406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:57:48.763291 containerd[1592]: time="2024-10-08T19:57:48.763224368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:57:48.834107 containerd[1592]: time="2024-10-08T19:57:48.834053256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b21621a72929ad4d87bc59a877761c7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"520365baafedc4b402847dbe27adad0049eb8809b2abe8a7bfa7f800ec82c8b6\"" Oct 8 19:57:48.835617 kubelet[2414]: E1008 19:57:48.835583 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:57:48.837650 containerd[1592]: time="2024-10-08T19:57:48.837616218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:347f4080e7d500b752556e5738039e14,Namespace:kube-system,Attempt:0,} returns sandbox id \"36b9141b6f3b5dd522fd1a1669833fa10ebe7c2dc25520ac957451120a651902\"" Oct 8 19:57:48.838367 kubelet[2414]: E1008 19:57:48.838351 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:57:48.839156 containerd[1592]: time="2024-10-08T19:57:48.839131153Z" level=info msg="CreateContainer within sandbox \"520365baafedc4b402847dbe27adad0049eb8809b2abe8a7bfa7f800ec82c8b6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 8 19:57:48.840412 containerd[1592]: time="2024-10-08T19:57:48.840378794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f13040d390753ac4a1fef67bb9676230,Namespace:kube-system,Attempt:0,} returns sandbox id \"fcd63027fba459a3b09859d986df24ee4ff15ea804286175baec43cdcf3f9f66\"" Oct 8 19:57:48.840954 kubelet[2414]: E1008 19:57:48.840935 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Oct 8 19:57:48.841745 containerd[1592]: time="2024-10-08T19:57:48.841716162Z" level=info msg="CreateContainer within sandbox \"36b9141b6f3b5dd522fd1a1669833fa10ebe7c2dc25520ac957451120a651902\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 8 19:57:48.842900 containerd[1592]: time="2024-10-08T19:57:48.842875302Z" level=info msg="CreateContainer within sandbox \"fcd63027fba459a3b09859d986df24ee4ff15ea804286175baec43cdcf3f9f66\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 8 19:57:48.867152 update_engine[1578]: I20241008 19:57:48.867007 1578 update_attempter.cc:509] Updating boot flags... Oct 8 19:57:48.930340 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2587) Oct 8 19:57:48.976334 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2590) Oct 8 19:57:49.007292 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2590) Oct 8 19:57:49.060498 containerd[1592]: time="2024-10-08T19:57:49.060436389Z" level=info msg="CreateContainer within sandbox \"520365baafedc4b402847dbe27adad0049eb8809b2abe8a7bfa7f800ec82c8b6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"10df71b4a4404eebb82870185e2a354d0fd6016018ba2fc78e41cc46082c73f4\"" Oct 8 19:57:49.061353 containerd[1592]: time="2024-10-08T19:57:49.061314339Z" level=info msg="StartContainer for \"10df71b4a4404eebb82870185e2a354d0fd6016018ba2fc78e41cc46082c73f4\"" Oct 8 19:57:49.062517 containerd[1592]: time="2024-10-08T19:57:49.062489074Z" level=info msg="CreateContainer within sandbox \"fcd63027fba459a3b09859d986df24ee4ff15ea804286175baec43cdcf3f9f66\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8143391bf55fa45bb72dd1d28f5eaab5a95c6610945823396afa56b6d2f0ca83\"" Oct 8 19:57:49.064164 containerd[1592]: time="2024-10-08T19:57:49.062799152Z" level=info 
msg="StartContainer for \"8143391bf55fa45bb72dd1d28f5eaab5a95c6610945823396afa56b6d2f0ca83\"" Oct 8 19:57:49.067081 containerd[1592]: time="2024-10-08T19:57:49.067027545Z" level=info msg="CreateContainer within sandbox \"36b9141b6f3b5dd522fd1a1669833fa10ebe7c2dc25520ac957451120a651902\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d255b3d728ea4fa21bc77258299178bd92891d60d0f688ad51e38b8cd2cdaa94\"" Oct 8 19:57:49.067633 containerd[1592]: time="2024-10-08T19:57:49.067597753Z" level=info msg="StartContainer for \"d255b3d728ea4fa21bc77258299178bd92891d60d0f688ad51e38b8cd2cdaa94\"" Oct 8 19:57:49.278127 containerd[1592]: time="2024-10-08T19:57:49.277992537Z" level=info msg="StartContainer for \"d255b3d728ea4fa21bc77258299178bd92891d60d0f688ad51e38b8cd2cdaa94\" returns successfully" Oct 8 19:57:49.278240 containerd[1592]: time="2024-10-08T19:57:49.278007031Z" level=info msg="StartContainer for \"10df71b4a4404eebb82870185e2a354d0fd6016018ba2fc78e41cc46082c73f4\" returns successfully" Oct 8 19:57:49.278324 containerd[1592]: time="2024-10-08T19:57:49.278010950Z" level=info msg="StartContainer for \"8143391bf55fa45bb72dd1d28f5eaab5a95c6610945823396afa56b6d2f0ca83\" returns successfully" Oct 8 19:57:49.870297 kubelet[2414]: E1008 19:57:49.867064 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:57:49.872288 kubelet[2414]: E1008 19:57:49.872252 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:57:49.876824 kubelet[2414]: E1008 19:57:49.876793 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:57:50.063786 kubelet[2414]: E1008 19:57:50.063737 
2414 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 8 19:57:50.128129 kubelet[2414]: I1008 19:57:50.127970 2414 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:57:50.277417 kubelet[2414]: I1008 19:57:50.277375 2414 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Oct 8 19:57:50.324054 kubelet[2414]: E1008 19:57:50.322595 2414 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:57:50.423419 kubelet[2414]: E1008 19:57:50.423245 2414 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:57:50.524081 kubelet[2414]: E1008 19:57:50.524022 2414 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:57:50.624839 kubelet[2414]: E1008 19:57:50.624772 2414 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:57:50.725729 kubelet[2414]: E1008 19:57:50.725589 2414 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:57:50.825809 kubelet[2414]: E1008 19:57:50.825758 2414 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:57:50.882660 kubelet[2414]: E1008 19:57:50.880795 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:57:50.882660 kubelet[2414]: E1008 19:57:50.880846 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:57:50.882660 kubelet[2414]: E1008 19:57:50.881055 2414 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:57:50.926056 kubelet[2414]: E1008 19:57:50.925990 2414 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:57:51.027151 kubelet[2414]: E1008 19:57:51.026996 2414 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:57:51.127594 kubelet[2414]: E1008 19:57:51.127516 2414 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:57:51.228086 kubelet[2414]: E1008 19:57:51.228047 2414 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:57:51.328862 kubelet[2414]: E1008 19:57:51.328799 2414 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:57:51.429344 kubelet[2414]: E1008 19:57:51.429291 2414 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:57:51.530040 kubelet[2414]: E1008 19:57:51.529984 2414 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:57:51.630914 kubelet[2414]: E1008 19:57:51.630763 2414 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:57:51.731935 kubelet[2414]: E1008 19:57:51.731881 2414 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:57:51.832548 kubelet[2414]: E1008 19:57:51.832488 2414 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:57:51.881690 kubelet[2414]: E1008 19:57:51.881570 2414 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:57:51.933420 kubelet[2414]: E1008 19:57:51.933363 2414 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:57:52.034408 kubelet[2414]: E1008 19:57:52.034331 2414 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:57:52.135200 kubelet[2414]: E1008 19:57:52.135003 2414 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:57:52.235801 kubelet[2414]: E1008 19:57:52.235736 2414 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:57:52.336587 kubelet[2414]: E1008 19:57:52.336526 2414 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:57:52.437318 kubelet[2414]: E1008 19:57:52.437138 2414 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:57:52.537834 kubelet[2414]: E1008 19:57:52.537779 2414 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:57:52.638439 kubelet[2414]: E1008 19:57:52.638374 2414 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:57:52.739346 kubelet[2414]: E1008 19:57:52.739188 2414 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:57:52.839380 kubelet[2414]: E1008 19:57:52.839334 2414 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:57:52.883539 kubelet[2414]: E1008 19:57:52.883480 2414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:57:52.940458 kubelet[2414]: E1008 19:57:52.940384 2414 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:57:53.524388 systemd[1]: Reloading requested from client PID 2712 ('systemctl') (unit session-7.scope)... Oct 8 19:57:53.524406 systemd[1]: Reloading... Oct 8 19:57:53.612304 zram_generator::config[2754]: No configuration found. Oct 8 19:57:53.736632 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 19:57:53.789601 kubelet[2414]: I1008 19:57:53.789428 2414 apiserver.go:52] "Watching apiserver" Oct 8 19:57:53.814616 kubelet[2414]: I1008 19:57:53.814572 2414 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 8 19:57:53.825029 systemd[1]: Reloading finished in 300 ms. Oct 8 19:57:53.865134 kubelet[2414]: I1008 19:57:53.865030 2414 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 19:57:53.865143 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:57:53.883875 systemd[1]: kubelet.service: Deactivated successfully. Oct 8 19:57:53.884427 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:57:53.893523 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:57:54.052715 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 8 19:57:54.058225 (kubelet)[2806]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 8 19:57:54.108118 kubelet[2806]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 19:57:54.108118 kubelet[2806]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 8 19:57:54.108118 kubelet[2806]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 19:57:54.109128 kubelet[2806]: I1008 19:57:54.109075 2806 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 8 19:57:54.114603 kubelet[2806]: I1008 19:57:54.114565 2806 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Oct 8 19:57:54.114603 kubelet[2806]: I1008 19:57:54.114595 2806 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 8 19:57:54.114829 kubelet[2806]: I1008 19:57:54.114806 2806 server.go:919] "Client rotation is on, will bootstrap in background" Oct 8 19:57:54.116732 kubelet[2806]: I1008 19:57:54.116709 2806 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 8 19:57:54.118987 kubelet[2806]: I1008 19:57:54.118965 2806 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 19:57:54.127640 kubelet[2806]: I1008 19:57:54.127582 2806 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 8 19:57:54.128343 kubelet[2806]: I1008 19:57:54.128322 2806 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 8 19:57:54.128547 kubelet[2806]: I1008 19:57:54.128517 2806 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 8 19:57:54.128632 kubelet[2806]: I1008 19:57:54.128556 2806 topology_manager.go:138] "Creating topology manager with none policy" Oct 8 19:57:54.128632 kubelet[2806]: I1008 19:57:54.128567 2806 container_manager_linux.go:301] "Creating device plugin manager" Oct 8 19:57:54.128632 kubelet[2806]: I1008 
19:57:54.128615 2806 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:57:54.128745 kubelet[2806]: I1008 19:57:54.128740 2806 kubelet.go:396] "Attempting to sync node with API server" Oct 8 19:57:54.128789 kubelet[2806]: I1008 19:57:54.128768 2806 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 8 19:57:54.128815 kubelet[2806]: I1008 19:57:54.128797 2806 kubelet.go:312] "Adding apiserver pod source" Oct 8 19:57:54.128815 kubelet[2806]: I1008 19:57:54.128811 2806 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 8 19:57:54.130301 kubelet[2806]: I1008 19:57:54.129924 2806 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 8 19:57:54.130301 kubelet[2806]: I1008 19:57:54.130213 2806 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 8 19:57:54.130809 kubelet[2806]: I1008 19:57:54.130790 2806 server.go:1256] "Started kubelet" Oct 8 19:57:54.131294 kubelet[2806]: I1008 19:57:54.131249 2806 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 8 19:57:54.132156 kubelet[2806]: I1008 19:57:54.132133 2806 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 8 19:57:54.132454 kubelet[2806]: I1008 19:57:54.132435 2806 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 8 19:57:54.133150 kubelet[2806]: I1008 19:57:54.133130 2806 server.go:461] "Adding debug handlers to kubelet server" Oct 8 19:57:54.144399 kubelet[2806]: E1008 19:57:54.144365 2806 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 8 19:57:54.145219 kubelet[2806]: I1008 19:57:54.145193 2806 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 8 19:57:54.148737 kubelet[2806]: I1008 19:57:54.148712 2806 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 8 19:57:54.149449 kubelet[2806]: I1008 19:57:54.149294 2806 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 8 19:57:54.151723 kubelet[2806]: I1008 19:57:54.149647 2806 reconciler_new.go:29] "Reconciler: start to sync state" Oct 8 19:57:54.151723 kubelet[2806]: I1008 19:57:54.150640 2806 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 8 19:57:54.155253 kubelet[2806]: I1008 19:57:54.154424 2806 factory.go:221] Registration of the containerd container factory successfully Oct 8 19:57:54.155253 kubelet[2806]: I1008 19:57:54.154455 2806 factory.go:221] Registration of the systemd container factory successfully Oct 8 19:57:54.167376 kubelet[2806]: I1008 19:57:54.167253 2806 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 8 19:57:54.169209 kubelet[2806]: I1008 19:57:54.169165 2806 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 8 19:57:54.170181 kubelet[2806]: I1008 19:57:54.169814 2806 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 8 19:57:54.170181 kubelet[2806]: I1008 19:57:54.169850 2806 kubelet.go:2329] "Starting kubelet main sync loop" Oct 8 19:57:54.170181 kubelet[2806]: E1008 19:57:54.169932 2806 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 8 19:57:54.179818 sudo[2830]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Oct 8 19:57:54.180410 sudo[2830]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Oct 8 19:57:54.218775 kubelet[2806]: I1008 19:57:54.218739 2806 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 8 19:57:54.218775 kubelet[2806]: I1008 19:57:54.218767 2806 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 8 19:57:54.218775 kubelet[2806]: I1008 19:57:54.218797 2806 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:57:54.218988 kubelet[2806]: I1008 19:57:54.218954 2806 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 8 19:57:54.218988 kubelet[2806]: I1008 19:57:54.218977 2806 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 8 19:57:54.218988 kubelet[2806]: I1008 19:57:54.218985 2806 policy_none.go:49] "None policy: Start" Oct 8 19:57:54.220196 kubelet[2806]: I1008 19:57:54.219743 2806 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 8 19:57:54.220196 kubelet[2806]: I1008 19:57:54.219778 2806 state_mem.go:35] "Initializing new in-memory state store" Oct 8 19:57:54.220196 kubelet[2806]: I1008 19:57:54.219962 2806 state_mem.go:75] "Updated machine memory state" Oct 8 19:57:54.222136 kubelet[2806]: I1008 19:57:54.221734 2806 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 
19:57:54.223378 kubelet[2806]: I1008 19:57:54.223343 2806 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 19:57:54.270390 kubelet[2806]: I1008 19:57:54.270314 2806 topology_manager.go:215] "Topology Admit Handler" podUID="347f4080e7d500b752556e5738039e14" podNamespace="kube-system" podName="kube-apiserver-localhost" Oct 8 19:57:54.270545 kubelet[2806]: I1008 19:57:54.270418 2806 topology_manager.go:215] "Topology Admit Handler" podUID="b21621a72929ad4d87bc59a877761c7f" podNamespace="kube-system" podName="kube-controller-manager-localhost" Oct 8 19:57:54.270545 kubelet[2806]: I1008 19:57:54.270461 2806 topology_manager.go:215] "Topology Admit Handler" podUID="f13040d390753ac4a1fef67bb9676230" podNamespace="kube-system" podName="kube-scheduler-localhost" Oct 8 19:57:54.331949 kubelet[2806]: I1008 19:57:54.331835 2806 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:57:54.350841 kubelet[2806]: I1008 19:57:54.350789 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:57:54.350841 kubelet[2806]: I1008 19:57:54.350845 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:57:54.350841 kubelet[2806]: I1008 19:57:54.350875 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-kubeconfig\") pod 
\"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:57:54.351062 kubelet[2806]: I1008 19:57:54.350902 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:57:54.351062 kubelet[2806]: I1008 19:57:54.350929 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/347f4080e7d500b752556e5738039e14-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"347f4080e7d500b752556e5738039e14\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:57:54.351062 kubelet[2806]: I1008 19:57:54.350957 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/347f4080e7d500b752556e5738039e14-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"347f4080e7d500b752556e5738039e14\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:57:54.351062 kubelet[2806]: I1008 19:57:54.350982 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:57:54.351062 kubelet[2806]: I1008 19:57:54.351006 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/f13040d390753ac4a1fef67bb9676230-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f13040d390753ac4a1fef67bb9676230\") " pod="kube-system/kube-scheduler-localhost" Oct 8 19:57:54.351179 kubelet[2806]: I1008 19:57:54.351042 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/347f4080e7d500b752556e5738039e14-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"347f4080e7d500b752556e5738039e14\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:57:54.483529 kubelet[2806]: E1008 19:57:54.483486 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:57:54.486085 kubelet[2806]: I1008 19:57:54.486039 2806 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Oct 8 19:57:54.486225 kubelet[2806]: I1008 19:57:54.486153 2806 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Oct 8 19:57:54.678416 sudo[2830]: pam_unix(sudo:session): session closed for user root Oct 8 19:57:54.727098 kubelet[2806]: E1008 19:57:54.727064 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:57:54.738924 kubelet[2806]: E1008 19:57:54.738720 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:57:55.129416 kubelet[2806]: I1008 19:57:55.129353 2806 apiserver.go:52] "Watching apiserver" Oct 8 19:57:55.149838 kubelet[2806]: I1008 19:57:55.149751 2806 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 8 19:57:55.183252 kubelet[2806]: I1008 19:57:55.183202 2806 
pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.183081335 podStartE2EDuration="1.183081335s" podCreationTimestamp="2024-10-08 19:57:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:57:55.182560562 +0000 UTC m=+1.119324028" watchObservedRunningTime="2024-10-08 19:57:55.183081335 +0000 UTC m=+1.119844801" Oct 8 19:57:55.183490 kubelet[2806]: I1008 19:57:55.183355 2806 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.183333384 podStartE2EDuration="1.183333384s" podCreationTimestamp="2024-10-08 19:57:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:57:55.17428667 +0000 UTC m=+1.111050126" watchObservedRunningTime="2024-10-08 19:57:55.183333384 +0000 UTC m=+1.120096850" Oct 8 19:57:55.185312 kubelet[2806]: E1008 19:57:55.184972 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:57:55.185312 kubelet[2806]: E1008 19:57:55.185165 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:57:55.185782 kubelet[2806]: E1008 19:57:55.185743 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:57:55.194801 kubelet[2806]: I1008 19:57:55.194739 2806 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.194693231 
podStartE2EDuration="1.194693231s" podCreationTimestamp="2024-10-08 19:57:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:57:55.194377439 +0000 UTC m=+1.131140905" watchObservedRunningTime="2024-10-08 19:57:55.194693231 +0000 UTC m=+1.131456697" Oct 8 19:57:56.186433 kubelet[2806]: E1008 19:57:56.186385 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:57:56.402798 sudo[1787]: pam_unix(sudo:session): session closed for user root Oct 8 19:57:56.407156 sshd[1780]: pam_unix(sshd:session): session closed for user core Oct 8 19:57:56.411896 systemd[1]: sshd@6-10.0.0.70:22-10.0.0.1:45224.service: Deactivated successfully. Oct 8 19:57:56.414915 systemd[1]: session-7.scope: Deactivated successfully. Oct 8 19:57:56.415596 systemd-logind[1573]: Session 7 logged out. Waiting for processes to exit. Oct 8 19:57:56.416584 systemd-logind[1573]: Removed session 7. 
Oct 8 19:57:56.818637 kubelet[2806]: E1008 19:57:56.818604 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:57:59.395247 kubelet[2806]: E1008 19:57:59.395198 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:58:00.192553 kubelet[2806]: E1008 19:58:00.192503 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:58:02.708279 kubelet[2806]: E1008 19:58:02.708208 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:58:03.196066 kubelet[2806]: E1008 19:58:03.196023 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:58:06.822992 kubelet[2806]: E1008 19:58:06.822901 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:58:06.877977 kubelet[2806]: I1008 19:58:06.877904 2806 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Oct 8 19:58:06.879209 kubelet[2806]: I1008 19:58:06.878460 2806 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Oct 8 19:58:06.879631 containerd[1592]: time="2024-10-08T19:58:06.878285324Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Oct 8 19:58:06.902882 kubelet[2806]: I1008 19:58:06.902832 2806 topology_manager.go:215] "Topology Admit Handler" podUID="786d4c60-44de-464c-b99d-0a36df9c0133" podNamespace="kube-system" podName="kube-proxy-gm6xv"
Oct 8 19:58:06.906550 kubelet[2806]: I1008 19:58:06.906514 2806 topology_manager.go:215] "Topology Admit Handler" podUID="e5baa5af-a53c-484c-ae9d-a661f7bf40eb" podNamespace="kube-system" podName="cilium-fd7p6"
Oct 8 19:58:06.924785 kubelet[2806]: I1008 19:58:06.924689 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/786d4c60-44de-464c-b99d-0a36df9c0133-kube-proxy\") pod \"kube-proxy-gm6xv\" (UID: \"786d4c60-44de-464c-b99d-0a36df9c0133\") " pod="kube-system/kube-proxy-gm6xv"
Oct 8 19:58:06.924785 kubelet[2806]: I1008 19:58:06.924751 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-clustermesh-secrets\") pod \"cilium-fd7p6\" (UID: \"e5baa5af-a53c-484c-ae9d-a661f7bf40eb\") " pod="kube-system/cilium-fd7p6"
Oct 8 19:58:06.925119 kubelet[2806]: I1008 19:58:06.925071 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-hostproc\") pod \"cilium-fd7p6\" (UID: \"e5baa5af-a53c-484c-ae9d-a661f7bf40eb\") " pod="kube-system/cilium-fd7p6"
Oct 8 19:58:06.925119 kubelet[2806]: I1008 19:58:06.925107 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xqf2\" (UniqueName: \"kubernetes.io/projected/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-kube-api-access-9xqf2\") pod \"cilium-fd7p6\" (UID: \"e5baa5af-a53c-484c-ae9d-a661f7bf40eb\") " pod="kube-system/cilium-fd7p6"
Oct 8 19:58:06.925119 kubelet[2806]: I1008 19:58:06.925126 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-cni-path\") pod \"cilium-fd7p6\" (UID: \"e5baa5af-a53c-484c-ae9d-a661f7bf40eb\") " pod="kube-system/cilium-fd7p6"
Oct 8 19:58:06.925337 kubelet[2806]: I1008 19:58:06.925148 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-host-proc-sys-net\") pod \"cilium-fd7p6\" (UID: \"e5baa5af-a53c-484c-ae9d-a661f7bf40eb\") " pod="kube-system/cilium-fd7p6"
Oct 8 19:58:06.925337 kubelet[2806]: I1008 19:58:06.925167 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-host-proc-sys-kernel\") pod \"cilium-fd7p6\" (UID: \"e5baa5af-a53c-484c-ae9d-a661f7bf40eb\") " pod="kube-system/cilium-fd7p6"
Oct 8 19:58:06.925337 kubelet[2806]: I1008 19:58:06.925186 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-hubble-tls\") pod \"cilium-fd7p6\" (UID: \"e5baa5af-a53c-484c-ae9d-a661f7bf40eb\") " pod="kube-system/cilium-fd7p6"
Oct 8 19:58:06.925337 kubelet[2806]: I1008 19:58:06.925204 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m25hc\" (UniqueName: \"kubernetes.io/projected/786d4c60-44de-464c-b99d-0a36df9c0133-kube-api-access-m25hc\") pod \"kube-proxy-gm6xv\" (UID: \"786d4c60-44de-464c-b99d-0a36df9c0133\") " pod="kube-system/kube-proxy-gm6xv"
Oct 8 19:58:06.925337 kubelet[2806]: I1008 19:58:06.925221 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-bpf-maps\") pod \"cilium-fd7p6\" (UID: \"e5baa5af-a53c-484c-ae9d-a661f7bf40eb\") " pod="kube-system/cilium-fd7p6"
Oct 8 19:58:06.925530 kubelet[2806]: I1008 19:58:06.925239 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-cilium-config-path\") pod \"cilium-fd7p6\" (UID: \"e5baa5af-a53c-484c-ae9d-a661f7bf40eb\") " pod="kube-system/cilium-fd7p6"
Oct 8 19:58:06.925530 kubelet[2806]: I1008 19:58:06.925288 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/786d4c60-44de-464c-b99d-0a36df9c0133-xtables-lock\") pod \"kube-proxy-gm6xv\" (UID: \"786d4c60-44de-464c-b99d-0a36df9c0133\") " pod="kube-system/kube-proxy-gm6xv"
Oct 8 19:58:06.925530 kubelet[2806]: I1008 19:58:06.925312 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-cilium-run\") pod \"cilium-fd7p6\" (UID: \"e5baa5af-a53c-484c-ae9d-a661f7bf40eb\") " pod="kube-system/cilium-fd7p6"
Oct 8 19:58:06.925530 kubelet[2806]: I1008 19:58:06.925330 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-xtables-lock\") pod \"cilium-fd7p6\" (UID: \"e5baa5af-a53c-484c-ae9d-a661f7bf40eb\") " pod="kube-system/cilium-fd7p6"
Oct 8 19:58:06.925530 kubelet[2806]: I1008 19:58:06.925348 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-lib-modules\") pod \"cilium-fd7p6\" (UID: \"e5baa5af-a53c-484c-ae9d-a661f7bf40eb\") " pod="kube-system/cilium-fd7p6"
Oct 8 19:58:06.925530 kubelet[2806]: I1008 19:58:06.925368 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/786d4c60-44de-464c-b99d-0a36df9c0133-lib-modules\") pod \"kube-proxy-gm6xv\" (UID: \"786d4c60-44de-464c-b99d-0a36df9c0133\") " pod="kube-system/kube-proxy-gm6xv"
Oct 8 19:58:06.925711 kubelet[2806]: I1008 19:58:06.925395 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-cilium-cgroup\") pod \"cilium-fd7p6\" (UID: \"e5baa5af-a53c-484c-ae9d-a661f7bf40eb\") " pod="kube-system/cilium-fd7p6"
Oct 8 19:58:06.925711 kubelet[2806]: I1008 19:58:06.925416 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-etc-cni-netd\") pod \"cilium-fd7p6\" (UID: \"e5baa5af-a53c-484c-ae9d-a661f7bf40eb\") " pod="kube-system/cilium-fd7p6"
Oct 8 19:58:06.964483 kubelet[2806]: I1008 19:58:06.964445 2806 topology_manager.go:215] "Topology Admit Handler" podUID="ffaec8a5-b623-4d7d-a01f-77cccbad00fd" podNamespace="kube-system" podName="cilium-operator-5cc964979-4rbfw"
Oct 8 19:58:07.026487 kubelet[2806]: I1008 19:58:07.026434 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ffaec8a5-b623-4d7d-a01f-77cccbad00fd-cilium-config-path\") pod \"cilium-operator-5cc964979-4rbfw\" (UID: \"ffaec8a5-b623-4d7d-a01f-77cccbad00fd\") " pod="kube-system/cilium-operator-5cc964979-4rbfw"
Oct 8 19:58:07.026622 kubelet[2806]: I1008 19:58:07.026507 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjq5r\" (UniqueName: \"kubernetes.io/projected/ffaec8a5-b623-4d7d-a01f-77cccbad00fd-kube-api-access-fjq5r\") pod \"cilium-operator-5cc964979-4rbfw\" (UID: \"ffaec8a5-b623-4d7d-a01f-77cccbad00fd\") " pod="kube-system/cilium-operator-5cc964979-4rbfw"
Oct 8 19:58:07.215257 kubelet[2806]: E1008 19:58:07.215110 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:58:07.216162 kubelet[2806]: E1008 19:58:07.215693 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:58:07.216227 containerd[1592]: time="2024-10-08T19:58:07.215861442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fd7p6,Uid:e5baa5af-a53c-484c-ae9d-a661f7bf40eb,Namespace:kube-system,Attempt:0,}"
Oct 8 19:58:07.216227 containerd[1592]: time="2024-10-08T19:58:07.216093331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gm6xv,Uid:786d4c60-44de-464c-b99d-0a36df9c0133,Namespace:kube-system,Attempt:0,}"
Oct 8 19:58:07.269361 containerd[1592]: time="2024-10-08T19:58:07.269172645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:58:07.269361 containerd[1592]: time="2024-10-08T19:58:07.269235097Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:58:07.269361 containerd[1592]: time="2024-10-08T19:58:07.269292357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:58:07.270197 containerd[1592]: time="2024-10-08T19:58:07.270003358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:58:07.270433 containerd[1592]: time="2024-10-08T19:58:07.270349239Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:58:07.270589 containerd[1592]: time="2024-10-08T19:58:07.270513336Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:58:07.270644 containerd[1592]: time="2024-10-08T19:58:07.270598736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:58:07.271693 containerd[1592]: time="2024-10-08T19:58:07.271598256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:58:07.273176 kubelet[2806]: E1008 19:58:07.273147 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:58:07.274669 containerd[1592]: time="2024-10-08T19:58:07.274615916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-4rbfw,Uid:ffaec8a5-b623-4d7d-a01f-77cccbad00fd,Namespace:kube-system,Attempt:0,}"
Oct 8 19:58:07.305528 containerd[1592]: time="2024-10-08T19:58:07.305192665Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:58:07.305528 containerd[1592]: time="2024-10-08T19:58:07.305245507Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:58:07.305528 containerd[1592]: time="2024-10-08T19:58:07.305289911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:58:07.305528 containerd[1592]: time="2024-10-08T19:58:07.305410425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:58:07.323711 containerd[1592]: time="2024-10-08T19:58:07.323655802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fd7p6,Uid:e5baa5af-a53c-484c-ae9d-a661f7bf40eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf247eab2b41c65ecb86d169bfa136f449f758138ffc618887a9596acefa30d3\""
Oct 8 19:58:07.323830 containerd[1592]: time="2024-10-08T19:58:07.323766225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gm6xv,Uid:786d4c60-44de-464c-b99d-0a36df9c0133,Namespace:kube-system,Attempt:0,} returns sandbox id \"d1d93b94d737804560a8b0f6e097c05043d3470fe2c7a6935310e9777e075619\""
Oct 8 19:58:07.325189 kubelet[2806]: E1008 19:58:07.324823 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:58:07.325189 kubelet[2806]: E1008 19:58:07.325086 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:58:07.328057 containerd[1592]: time="2024-10-08T19:58:07.327435893Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Oct 8 19:58:07.328317 containerd[1592]: time="2024-10-08T19:58:07.328286939Z" level=info msg="CreateContainer within sandbox \"d1d93b94d737804560a8b0f6e097c05043d3470fe2c7a6935310e9777e075619\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Oct 8 19:58:07.356336 containerd[1592]: time="2024-10-08T19:58:07.356233516Z" level=info msg="CreateContainer within sandbox \"d1d93b94d737804560a8b0f6e097c05043d3470fe2c7a6935310e9777e075619\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ad8b4049c60c57cd3b805e040a1a656dc08a4db7de5a99b2a1ae162b12e231f8\""
Oct 8 19:58:07.357443 containerd[1592]: time="2024-10-08T19:58:07.357416534Z" level=info msg="StartContainer for \"ad8b4049c60c57cd3b805e040a1a656dc08a4db7de5a99b2a1ae162b12e231f8\""
Oct 8 19:58:07.374669 containerd[1592]: time="2024-10-08T19:58:07.374622647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-4rbfw,Uid:ffaec8a5-b623-4d7d-a01f-77cccbad00fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc4ba7822ff75cc288332409b8e8d2270cfde84123a2043f58537805fa81a14d\""
Oct 8 19:58:07.375409 kubelet[2806]: E1008 19:58:07.375390 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:58:07.434679 containerd[1592]: time="2024-10-08T19:58:07.434617799Z" level=info msg="StartContainer for \"ad8b4049c60c57cd3b805e040a1a656dc08a4db7de5a99b2a1ae162b12e231f8\" returns successfully"
Oct 8 19:58:08.205273 kubelet[2806]: E1008 19:58:08.205221 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:58:12.075590 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3107967617.mount: Deactivated successfully.
Oct 8 19:58:13.133379 systemd-resolved[1475]: Under memory pressure, flushing caches.
Oct 8 19:58:13.133432 systemd-resolved[1475]: Flushed all caches.
Oct 8 19:58:13.135362 systemd-journald[1157]: Under memory pressure, flushing caches.
Oct 8 19:58:14.886431 containerd[1592]: time="2024-10-08T19:58:14.886372001Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:58:14.887234 containerd[1592]: time="2024-10-08T19:58:14.887202677Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735347"
Oct 8 19:58:14.888682 containerd[1592]: time="2024-10-08T19:58:14.888632983Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:58:14.890199 containerd[1592]: time="2024-10-08T19:58:14.890161722Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.561976559s"
Oct 8 19:58:14.890199 containerd[1592]: time="2024-10-08T19:58:14.890197295Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Oct 8 19:58:14.890812 containerd[1592]: time="2024-10-08T19:58:14.890774809Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Oct 8 19:58:14.892771 containerd[1592]: time="2024-10-08T19:58:14.892605994Z" level=info msg="CreateContainer within sandbox \"cf247eab2b41c65ecb86d169bfa136f449f758138ffc618887a9596acefa30d3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Oct 8 19:58:14.907189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1892417251.mount: Deactivated successfully.
Oct 8 19:58:14.922663 containerd[1592]: time="2024-10-08T19:58:14.922598944Z" level=info msg="CreateContainer within sandbox \"cf247eab2b41c65ecb86d169bfa136f449f758138ffc618887a9596acefa30d3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fdf66ccedd88137df6d82f260f0c614753b2ab89599441af9b51f98402f7e220\""
Oct 8 19:58:14.923191 containerd[1592]: time="2024-10-08T19:58:14.923157669Z" level=info msg="StartContainer for \"fdf66ccedd88137df6d82f260f0c614753b2ab89599441af9b51f98402f7e220\""
Oct 8 19:58:15.181402 systemd-resolved[1475]: Under memory pressure, flushing caches.
Oct 8 19:58:15.181430 systemd-resolved[1475]: Flushed all caches.
Oct 8 19:58:15.183298 systemd-journald[1157]: Under memory pressure, flushing caches.
Oct 8 19:58:15.221065 containerd[1592]: time="2024-10-08T19:58:15.221009502Z" level=info msg="StartContainer for \"fdf66ccedd88137df6d82f260f0c614753b2ab89599441af9b51f98402f7e220\" returns successfully"
Oct 8 19:58:15.586876 kubelet[2806]: E1008 19:58:15.585645 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:58:15.636947 containerd[1592]: time="2024-10-08T19:58:15.635051622Z" level=info msg="shim disconnected" id=fdf66ccedd88137df6d82f260f0c614753b2ab89599441af9b51f98402f7e220 namespace=k8s.io
Oct 8 19:58:15.636947 containerd[1592]: time="2024-10-08T19:58:15.636941930Z" level=warning msg="cleaning up after shim disconnected" id=fdf66ccedd88137df6d82f260f0c614753b2ab89599441af9b51f98402f7e220 namespace=k8s.io
Oct 8 19:58:15.637164 containerd[1592]: time="2024-10-08T19:58:15.636978495Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:58:15.644295 kubelet[2806]: I1008 19:58:15.644237 2806 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-gm6xv" podStartSLOduration=9.643823446 podStartE2EDuration="9.643823446s" podCreationTimestamp="2024-10-08 19:58:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:58:08.315997232 +0000 UTC m=+14.252760698" watchObservedRunningTime="2024-10-08 19:58:15.643823446 +0000 UTC m=+21.580586913"
Oct 8 19:58:15.904418 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fdf66ccedd88137df6d82f260f0c614753b2ab89599441af9b51f98402f7e220-rootfs.mount: Deactivated successfully.
Oct 8 19:58:16.587730 kubelet[2806]: E1008 19:58:16.587705 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:58:16.590171 containerd[1592]: time="2024-10-08T19:58:16.589945000Z" level=info msg="CreateContainer within sandbox \"cf247eab2b41c65ecb86d169bfa136f449f758138ffc618887a9596acefa30d3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Oct 8 19:58:17.255116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2272972241.mount: Deactivated successfully.
Oct 8 19:58:18.205845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount734227821.mount: Deactivated successfully.
Oct 8 19:58:18.880521 containerd[1592]: time="2024-10-08T19:58:18.880435102Z" level=info msg="CreateContainer within sandbox \"cf247eab2b41c65ecb86d169bfa136f449f758138ffc618887a9596acefa30d3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4acaa75509fd193433272580b74a2aecd3219e81673088ce2e42c75b5f1515a0\""
Oct 8 19:58:18.881177 containerd[1592]: time="2024-10-08T19:58:18.881128833Z" level=info msg="StartContainer for \"4acaa75509fd193433272580b74a2aecd3219e81673088ce2e42c75b5f1515a0\""
Oct 8 19:58:18.956785 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 8 19:58:18.957239 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:58:18.957337 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Oct 8 19:58:18.963683 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 8 19:58:18.970117 containerd[1592]: time="2024-10-08T19:58:18.970064435Z" level=info msg="StartContainer for \"4acaa75509fd193433272580b74a2aecd3219e81673088ce2e42c75b5f1515a0\" returns successfully"
Oct 8 19:58:18.983058 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:58:18.990243 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4acaa75509fd193433272580b74a2aecd3219e81673088ce2e42c75b5f1515a0-rootfs.mount: Deactivated successfully.
Oct 8 19:58:19.036037 containerd[1592]: time="2024-10-08T19:58:19.035933995Z" level=info msg="shim disconnected" id=4acaa75509fd193433272580b74a2aecd3219e81673088ce2e42c75b5f1515a0 namespace=k8s.io
Oct 8 19:58:19.036037 containerd[1592]: time="2024-10-08T19:58:19.035986903Z" level=warning msg="cleaning up after shim disconnected" id=4acaa75509fd193433272580b74a2aecd3219e81673088ce2e42c75b5f1515a0 namespace=k8s.io
Oct 8 19:58:19.036037 containerd[1592]: time="2024-10-08T19:58:19.035995681Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:58:19.282620 containerd[1592]: time="2024-10-08T19:58:19.282458959Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:58:19.288221 containerd[1592]: time="2024-10-08T19:58:19.288147342Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907229"
Oct 8 19:58:19.304357 containerd[1592]: time="2024-10-08T19:58:19.304306218Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:58:19.305737 containerd[1592]: time="2024-10-08T19:58:19.305699497Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.414737491s"
Oct 8 19:58:19.305853 containerd[1592]: time="2024-10-08T19:58:19.305739229Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Oct 8 19:58:19.307667 containerd[1592]: time="2024-10-08T19:58:19.307623472Z" level=info msg="CreateContainer within sandbox \"fc4ba7822ff75cc288332409b8e8d2270cfde84123a2043f58537805fa81a14d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Oct 8 19:58:19.375733 containerd[1592]: time="2024-10-08T19:58:19.375667785Z" level=info msg="CreateContainer within sandbox \"fc4ba7822ff75cc288332409b8e8d2270cfde84123a2043f58537805fa81a14d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"26f48a2590b1d5bbf9456de957977a6c7f301571e806b695eabc5c33c50a1e48\""
Oct 8 19:58:19.376419 containerd[1592]: time="2024-10-08T19:58:19.376302443Z" level=info msg="StartContainer for \"26f48a2590b1d5bbf9456de957977a6c7f301571e806b695eabc5c33c50a1e48\""
Oct 8 19:58:19.453175 containerd[1592]: time="2024-10-08T19:58:19.453127174Z" level=info msg="StartContainer for \"26f48a2590b1d5bbf9456de957977a6c7f301571e806b695eabc5c33c50a1e48\" returns successfully"
Oct 8 19:58:19.599358 kubelet[2806]: E1008 19:58:19.598370 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:58:19.599358 kubelet[2806]: E1008 19:58:19.599131 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:58:19.602214 containerd[1592]: time="2024-10-08T19:58:19.602180665Z" level=info msg="CreateContainer within sandbox \"cf247eab2b41c65ecb86d169bfa136f449f758138ffc618887a9596acefa30d3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Oct 8 19:58:19.899550 kubelet[2806]: I1008 19:58:19.897826 2806 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-4rbfw" podStartSLOduration=1.967903803 podStartE2EDuration="13.897789938s" podCreationTimestamp="2024-10-08 19:58:06 +0000 UTC" firstStartedPulling="2024-10-08 19:58:07.376156895 +0000 UTC m=+13.312920361" lastFinishedPulling="2024-10-08 19:58:19.30604303 +0000 UTC m=+25.242806496" observedRunningTime="2024-10-08 19:58:19.897627075 +0000 UTC m=+25.834390541" watchObservedRunningTime="2024-10-08 19:58:19.897789938 +0000 UTC m=+25.834553404"
Oct 8 19:58:20.061657 containerd[1592]: time="2024-10-08T19:58:20.061579392Z" level=info msg="CreateContainer within sandbox \"cf247eab2b41c65ecb86d169bfa136f449f758138ffc618887a9596acefa30d3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"635096e03c4874443199e1ff0b798bd6c8489dbd832b0289c87cfb05e4f246f3\""
Oct 8 19:58:20.063300 containerd[1592]: time="2024-10-08T19:58:20.062486725Z" level=info msg="StartContainer for \"635096e03c4874443199e1ff0b798bd6c8489dbd832b0289c87cfb05e4f246f3\""
Oct 8 19:58:20.117088 systemd[1]: run-containerd-runc-k8s.io-635096e03c4874443199e1ff0b798bd6c8489dbd832b0289c87cfb05e4f246f3-runc.I5lKgZ.mount: Deactivated successfully.
Oct 8 19:58:20.201480 containerd[1592]: time="2024-10-08T19:58:20.201255263Z" level=info msg="StartContainer for \"635096e03c4874443199e1ff0b798bd6c8489dbd832b0289c87cfb05e4f246f3\" returns successfully"
Oct 8 19:58:20.285125 containerd[1592]: time="2024-10-08T19:58:20.285057115Z" level=info msg="shim disconnected" id=635096e03c4874443199e1ff0b798bd6c8489dbd832b0289c87cfb05e4f246f3 namespace=k8s.io
Oct 8 19:58:20.285125 containerd[1592]: time="2024-10-08T19:58:20.285164705Z" level=warning msg="cleaning up after shim disconnected" id=635096e03c4874443199e1ff0b798bd6c8489dbd832b0289c87cfb05e4f246f3 namespace=k8s.io
Oct 8 19:58:20.285125 containerd[1592]: time="2024-10-08T19:58:20.285178863Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:58:20.632728 kubelet[2806]: E1008 19:58:20.632669 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:58:20.633452 kubelet[2806]: E1008 19:58:20.633083 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:58:20.636293 containerd[1592]: time="2024-10-08T19:58:20.636214248Z" level=info msg="CreateContainer within sandbox \"cf247eab2b41c65ecb86d169bfa136f449f758138ffc618887a9596acefa30d3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Oct 8 19:58:20.900382 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-635096e03c4874443199e1ff0b798bd6c8489dbd832b0289c87cfb05e4f246f3-rootfs.mount: Deactivated successfully.
Oct 8 19:58:21.032068 containerd[1592]: time="2024-10-08T19:58:21.031968524Z" level=info msg="CreateContainer within sandbox \"cf247eab2b41c65ecb86d169bfa136f449f758138ffc618887a9596acefa30d3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"667c2056f741118a2c9321c693308bd1f5776246a831d9cf678bc962f04f462f\""
Oct 8 19:58:21.032745 containerd[1592]: time="2024-10-08T19:58:21.032700235Z" level=info msg="StartContainer for \"667c2056f741118a2c9321c693308bd1f5776246a831d9cf678bc962f04f462f\""
Oct 8 19:58:21.213348 containerd[1592]: time="2024-10-08T19:58:21.213178278Z" level=info msg="StartContainer for \"667c2056f741118a2c9321c693308bd1f5776246a831d9cf678bc962f04f462f\" returns successfully"
Oct 8 19:58:21.231102 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-667c2056f741118a2c9321c693308bd1f5776246a831d9cf678bc962f04f462f-rootfs.mount: Deactivated successfully.
Oct 8 19:58:21.286940 containerd[1592]: time="2024-10-08T19:58:21.286768911Z" level=info msg="shim disconnected" id=667c2056f741118a2c9321c693308bd1f5776246a831d9cf678bc962f04f462f namespace=k8s.io
Oct 8 19:58:21.286940 containerd[1592]: time="2024-10-08T19:58:21.286879126Z" level=warning msg="cleaning up after shim disconnected" id=667c2056f741118a2c9321c693308bd1f5776246a831d9cf678bc962f04f462f namespace=k8s.io
Oct 8 19:58:21.286940 containerd[1592]: time="2024-10-08T19:58:21.286900208Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:58:21.637831 kubelet[2806]: E1008 19:58:21.637801 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:58:21.641234 containerd[1592]: time="2024-10-08T19:58:21.641194554Z" level=info msg="CreateContainer within sandbox \"cf247eab2b41c65ecb86d169bfa136f449f758138ffc618887a9596acefa30d3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Oct 8 19:58:21.664429 containerd[1592]: time="2024-10-08T19:58:21.664373858Z" level=info msg="CreateContainer within sandbox \"cf247eab2b41c65ecb86d169bfa136f449f758138ffc618887a9596acefa30d3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0e1ff2bcd9a350688e7090c5b3907d48c403d9c97c0b3a90187f097f41c69cba\""
Oct 8 19:58:21.665140 containerd[1592]: time="2024-10-08T19:58:21.665093976Z" level=info msg="StartContainer for \"0e1ff2bcd9a350688e7090c5b3907d48c403d9c97c0b3a90187f097f41c69cba\""
Oct 8 19:58:21.734204 containerd[1592]: time="2024-10-08T19:58:21.734145617Z" level=info msg="StartContainer for \"0e1ff2bcd9a350688e7090c5b3907d48c403d9c97c0b3a90187f097f41c69cba\" returns successfully"
Oct 8 19:58:21.848079 kubelet[2806]: I1008 19:58:21.848040 2806 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Oct 8 19:58:22.403476 kubelet[2806]: I1008 19:58:22.403429 2806 topology_manager.go:215] "Topology Admit Handler" podUID="52e40fe5-9a51-4e90-8bd1-971012304392" podNamespace="kube-system" podName="coredns-76f75df574-xcqrl"
Oct 8 19:58:22.405746 kubelet[2806]: I1008 19:58:22.405574 2806 topology_manager.go:215] "Topology Admit Handler" podUID="65d04d63-1dec-4677-860c-170f3db3f06a" podNamespace="kube-system" podName="coredns-76f75df574-l9pqg"
Oct 8 19:58:22.548944 kubelet[2806]: I1008 19:58:22.548900 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52e40fe5-9a51-4e90-8bd1-971012304392-config-volume\") pod \"coredns-76f75df574-xcqrl\" (UID: \"52e40fe5-9a51-4e90-8bd1-971012304392\") " pod="kube-system/coredns-76f75df574-xcqrl"
Oct 8 19:58:22.548944 kubelet[2806]: I1008 19:58:22.548953 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65d04d63-1dec-4677-860c-170f3db3f06a-config-volume\") pod \"coredns-76f75df574-l9pqg\" (UID: \"65d04d63-1dec-4677-860c-170f3db3f06a\") " pod="kube-system/coredns-76f75df574-l9pqg"
Oct 8 19:58:22.549181 kubelet[2806]: I1008 19:58:22.548991 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhz4l\" (UniqueName: \"kubernetes.io/projected/65d04d63-1dec-4677-860c-170f3db3f06a-kube-api-access-xhz4l\") pod \"coredns-76f75df574-l9pqg\" (UID: \"65d04d63-1dec-4677-860c-170f3db3f06a\") " pod="kube-system/coredns-76f75df574-l9pqg"
Oct 8 19:58:22.549181 kubelet[2806]: I1008 19:58:22.549087 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85zrt\" (UniqueName: \"kubernetes.io/projected/52e40fe5-9a51-4e90-8bd1-971012304392-kube-api-access-85zrt\") pod \"coredns-76f75df574-xcqrl\" (UID: \"52e40fe5-9a51-4e90-8bd1-971012304392\") " pod="kube-system/coredns-76f75df574-xcqrl"
Oct 8 19:58:22.652571 kubelet[2806]: E1008 19:58:22.652519 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:58:23.008885 kubelet[2806]: E1008 19:58:23.008820 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:58:23.009776 containerd[1592]: time="2024-10-08T19:58:23.009723028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xcqrl,Uid:52e40fe5-9a51-4e90-8bd1-971012304392,Namespace:kube-system,Attempt:0,}"
Oct 8 19:58:23.010408 kubelet[2806]: E1008 19:58:23.010390 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:58:23.010973 containerd[1592]: time="2024-10-08T19:58:23.010923026Z" level=info msg="RunPodSandbox for
&PodSandboxMetadata{Name:coredns-76f75df574-l9pqg,Uid:65d04d63-1dec-4677-860c-170f3db3f06a,Namespace:kube-system,Attempt:0,}" Oct 8 19:58:23.392730 kubelet[2806]: I1008 19:58:23.392680 2806 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-fd7p6" podStartSLOduration=9.828706772 podStartE2EDuration="17.39257083s" podCreationTimestamp="2024-10-08 19:58:06 +0000 UTC" firstStartedPulling="2024-10-08 19:58:07.326633999 +0000 UTC m=+13.263397465" lastFinishedPulling="2024-10-08 19:58:14.890498057 +0000 UTC m=+20.827261523" observedRunningTime="2024-10-08 19:58:23.12832508 +0000 UTC m=+29.065088546" watchObservedRunningTime="2024-10-08 19:58:23.39257083 +0000 UTC m=+29.329334296" Oct 8 19:58:23.630518 systemd[1]: Started sshd@7-10.0.0.70:22-10.0.0.1:54122.service - OpenSSH per-connection server daemon (10.0.0.1:54122). Oct 8 19:58:23.655139 kubelet[2806]: E1008 19:58:23.654962 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:23.739338 sshd[3600]: Accepted publickey for core from 10.0.0.1 port 54122 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:58:23.741256 sshd[3600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:58:23.745425 systemd-logind[1573]: New session 8 of user core. Oct 8 19:58:23.751537 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 8 19:58:24.435587 sshd[3600]: pam_unix(sshd:session): session closed for user core Oct 8 19:58:24.440293 systemd[1]: sshd@7-10.0.0.70:22-10.0.0.1:54122.service: Deactivated successfully. Oct 8 19:58:24.442711 systemd[1]: session-8.scope: Deactivated successfully. Oct 8 19:58:24.443627 systemd-logind[1573]: Session 8 logged out. Waiting for processes to exit. Oct 8 19:58:24.444671 systemd-logind[1573]: Removed session 8. 
Oct 8 19:58:24.495665 systemd-networkd[1246]: cilium_host: Link UP Oct 8 19:58:24.495866 systemd-networkd[1246]: cilium_net: Link UP Oct 8 19:58:24.496123 systemd-networkd[1246]: cilium_net: Gained carrier Oct 8 19:58:24.496393 systemd-networkd[1246]: cilium_host: Gained carrier Oct 8 19:58:24.501744 systemd-networkd[1246]: cilium_host: Gained IPv6LL Oct 8 19:58:24.629963 systemd-networkd[1246]: cilium_vxlan: Link UP Oct 8 19:58:24.629975 systemd-networkd[1246]: cilium_vxlan: Gained carrier Oct 8 19:58:24.656287 kubelet[2806]: E1008 19:58:24.656233 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:24.903297 kernel: NET: Registered PF_ALG protocol family Oct 8 19:58:25.422416 systemd-networkd[1246]: cilium_net: Gained IPv6LL Oct 8 19:58:25.690804 systemd-networkd[1246]: lxc_health: Link UP Oct 8 19:58:25.699734 systemd-networkd[1246]: lxc_health: Gained carrier Oct 8 19:58:25.805486 systemd-networkd[1246]: cilium_vxlan: Gained IPv6LL Oct 8 19:58:25.900436 systemd-networkd[1246]: lxca991e15d0d5a: Link UP Oct 8 19:58:25.909315 kernel: eth0: renamed from tmp67172 Oct 8 19:58:25.916086 systemd-networkd[1246]: lxca991e15d0d5a: Gained carrier Oct 8 19:58:26.204209 systemd-networkd[1246]: lxc23cb1bccf9b3: Link UP Oct 8 19:58:26.212410 kernel: eth0: renamed from tmpd921e Oct 8 19:58:26.216255 systemd-networkd[1246]: lxc23cb1bccf9b3: Gained carrier Oct 8 19:58:27.214531 systemd-networkd[1246]: lxc_health: Gained IPv6LL Oct 8 19:58:27.218227 kubelet[2806]: E1008 19:58:27.218116 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:27.405918 systemd-networkd[1246]: lxc23cb1bccf9b3: Gained IPv6LL Oct 8 19:58:27.660404 kubelet[2806]: E1008 19:58:27.660375 2806 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:27.725484 systemd-networkd[1246]: lxca991e15d0d5a: Gained IPv6LL Oct 8 19:58:29.448616 systemd[1]: Started sshd@8-10.0.0.70:22-10.0.0.1:54132.service - OpenSSH per-connection server daemon (10.0.0.1:54132). Oct 8 19:58:29.492790 sshd[4025]: Accepted publickey for core from 10.0.0.1 port 54132 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:58:29.495005 sshd[4025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:58:29.499646 systemd-logind[1573]: New session 9 of user core. Oct 8 19:58:29.508670 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 8 19:58:29.673636 sshd[4025]: pam_unix(sshd:session): session closed for user core Oct 8 19:58:29.680298 systemd[1]: sshd@8-10.0.0.70:22-10.0.0.1:54132.service: Deactivated successfully. Oct 8 19:58:29.692187 systemd[1]: session-9.scope: Deactivated successfully. Oct 8 19:58:29.693928 systemd-logind[1573]: Session 9 logged out. Waiting for processes to exit. Oct 8 19:58:29.695411 systemd-logind[1573]: Removed session 9. Oct 8 19:58:29.970355 containerd[1592]: time="2024-10-08T19:58:29.970237410Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:58:29.970355 containerd[1592]: time="2024-10-08T19:58:29.970321019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:58:29.970355 containerd[1592]: time="2024-10-08T19:58:29.970331881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:58:29.970975 containerd[1592]: time="2024-10-08T19:58:29.970408085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:58:29.995506 systemd-resolved[1475]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 8 19:58:30.021519 containerd[1592]: time="2024-10-08T19:58:30.021481985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xcqrl,Uid:52e40fe5-9a51-4e90-8bd1-971012304392,Namespace:kube-system,Attempt:0,} returns sandbox id \"d921e45692b5e34a22be66439fc2d3690b50efe2f8bb6b8150f07beea32c0a1e\"" Oct 8 19:58:30.022367 kubelet[2806]: E1008 19:58:30.022346 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:30.024502 containerd[1592]: time="2024-10-08T19:58:30.024472595Z" level=info msg="CreateContainer within sandbox \"d921e45692b5e34a22be66439fc2d3690b50efe2f8bb6b8150f07beea32c0a1e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 19:58:30.039684 containerd[1592]: time="2024-10-08T19:58:30.039256574Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:58:30.039684 containerd[1592]: time="2024-10-08T19:58:30.039434783Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:58:30.039684 containerd[1592]: time="2024-10-08T19:58:30.039482911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:58:30.039684 containerd[1592]: time="2024-10-08T19:58:30.039663625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:58:30.068643 containerd[1592]: time="2024-10-08T19:58:30.068578718Z" level=info msg="CreateContainer within sandbox \"d921e45692b5e34a22be66439fc2d3690b50efe2f8bb6b8150f07beea32c0a1e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6d0fbc240e14ce39570672afe825b65c728a2f06fdab3664c00842afdadeabda\"" Oct 8 19:58:30.069497 containerd[1592]: time="2024-10-08T19:58:30.069438422Z" level=info msg="StartContainer for \"6d0fbc240e14ce39570672afe825b65c728a2f06fdab3664c00842afdadeabda\"" Oct 8 19:58:30.071023 systemd-resolved[1475]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 8 19:58:30.099963 containerd[1592]: time="2024-10-08T19:58:30.099918650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-l9pqg,Uid:65d04d63-1dec-4677-860c-170f3db3f06a,Namespace:kube-system,Attempt:0,} returns sandbox id \"671725c9f20fdca2d14f60af9050a1ae2bcafd0938f7ade11d9d84c77aff7fb4\"" Oct 8 19:58:30.100793 kubelet[2806]: E1008 19:58:30.100763 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:30.103297 containerd[1592]: time="2024-10-08T19:58:30.103015705Z" level=info msg="CreateContainer within sandbox \"671725c9f20fdca2d14f60af9050a1ae2bcafd0938f7ade11d9d84c77aff7fb4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 19:58:30.120599 containerd[1592]: time="2024-10-08T19:58:30.120538678Z" level=info msg="CreateContainer within sandbox \"671725c9f20fdca2d14f60af9050a1ae2bcafd0938f7ade11d9d84c77aff7fb4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b2a38eb591a34cfca17daa8af1c1206bcaf70998431a41ef88b8d287664d08b5\"" Oct 8 19:58:30.121928 containerd[1592]: time="2024-10-08T19:58:30.121899271Z" level=info msg="StartContainer for 
\"b2a38eb591a34cfca17daa8af1c1206bcaf70998431a41ef88b8d287664d08b5\"" Oct 8 19:58:30.146657 containerd[1592]: time="2024-10-08T19:58:30.146615106Z" level=info msg="StartContainer for \"6d0fbc240e14ce39570672afe825b65c728a2f06fdab3664c00842afdadeabda\" returns successfully" Oct 8 19:58:30.183860 containerd[1592]: time="2024-10-08T19:58:30.183805625Z" level=info msg="StartContainer for \"b2a38eb591a34cfca17daa8af1c1206bcaf70998431a41ef88b8d287664d08b5\" returns successfully" Oct 8 19:58:30.667917 kubelet[2806]: E1008 19:58:30.667817 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:30.670986 kubelet[2806]: E1008 19:58:30.670964 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:30.700638 kubelet[2806]: I1008 19:58:30.700574 2806 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-l9pqg" podStartSLOduration=24.700521935 podStartE2EDuration="24.700521935s" podCreationTimestamp="2024-10-08 19:58:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:58:30.685404552 +0000 UTC m=+36.622168018" watchObservedRunningTime="2024-10-08 19:58:30.700521935 +0000 UTC m=+36.637285401" Oct 8 19:58:30.712050 kubelet[2806]: I1008 19:58:30.709622 2806 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-xcqrl" podStartSLOduration=24.709572944 podStartE2EDuration="24.709572944s" podCreationTimestamp="2024-10-08 19:58:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:58:30.701013666 +0000 UTC m=+36.637777132" 
watchObservedRunningTime="2024-10-08 19:58:30.709572944 +0000 UTC m=+36.646336400" Oct 8 19:58:31.672782 kubelet[2806]: E1008 19:58:31.672726 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:31.673335 kubelet[2806]: E1008 19:58:31.672796 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:32.674238 kubelet[2806]: E1008 19:58:32.674203 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:32.674771 kubelet[2806]: E1008 19:58:32.674279 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:34.686560 systemd[1]: Started sshd@9-10.0.0.70:22-10.0.0.1:43434.service - OpenSSH per-connection server daemon (10.0.0.1:43434). Oct 8 19:58:34.722215 sshd[4212]: Accepted publickey for core from 10.0.0.1 port 43434 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:58:34.724400 sshd[4212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:58:34.729980 systemd-logind[1573]: New session 10 of user core. Oct 8 19:58:34.738761 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 8 19:58:35.090217 sshd[4212]: pam_unix(sshd:session): session closed for user core Oct 8 19:58:35.094979 systemd[1]: sshd@9-10.0.0.70:22-10.0.0.1:43434.service: Deactivated successfully. Oct 8 19:58:35.097807 systemd[1]: session-10.scope: Deactivated successfully. Oct 8 19:58:35.098588 systemd-logind[1573]: Session 10 logged out. Waiting for processes to exit. 
Oct 8 19:58:35.099611 systemd-logind[1573]: Removed session 10. Oct 8 19:58:39.888608 systemd[1]: Started sshd@10-10.0.0.70:22-10.0.0.1:43438.service - OpenSSH per-connection server daemon (10.0.0.1:43438). Oct 8 19:58:39.924511 sshd[4230]: Accepted publickey for core from 10.0.0.1 port 43438 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:58:39.926338 sshd[4230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:58:39.930839 systemd-logind[1573]: New session 11 of user core. Oct 8 19:58:39.938644 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 8 19:58:40.056187 sshd[4230]: pam_unix(sshd:session): session closed for user core Oct 8 19:58:40.060050 systemd[1]: sshd@10-10.0.0.70:22-10.0.0.1:43438.service: Deactivated successfully. Oct 8 19:58:40.062463 systemd-logind[1573]: Session 11 logged out. Waiting for processes to exit. Oct 8 19:58:40.062512 systemd[1]: session-11.scope: Deactivated successfully. Oct 8 19:58:40.063880 systemd-logind[1573]: Removed session 11. Oct 8 19:58:45.070560 systemd[1]: Started sshd@11-10.0.0.70:22-10.0.0.1:37060.service - OpenSSH per-connection server daemon (10.0.0.1:37060). Oct 8 19:58:45.104877 sshd[4246]: Accepted publickey for core from 10.0.0.1 port 37060 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:58:45.106558 sshd[4246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:58:45.110632 systemd-logind[1573]: New session 12 of user core. Oct 8 19:58:45.123712 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 8 19:58:45.243979 sshd[4246]: pam_unix(sshd:session): session closed for user core Oct 8 19:58:45.252564 systemd[1]: Started sshd@12-10.0.0.70:22-10.0.0.1:37074.service - OpenSSH per-connection server daemon (10.0.0.1:37074). Oct 8 19:58:45.253469 systemd[1]: sshd@11-10.0.0.70:22-10.0.0.1:37060.service: Deactivated successfully. 
Oct 8 19:58:45.257538 systemd[1]: session-12.scope: Deactivated successfully. Oct 8 19:58:45.258510 systemd-logind[1573]: Session 12 logged out. Waiting for processes to exit. Oct 8 19:58:45.259743 systemd-logind[1573]: Removed session 12. Oct 8 19:58:45.285856 sshd[4259]: Accepted publickey for core from 10.0.0.1 port 37074 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:58:45.287683 sshd[4259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:58:45.292429 systemd-logind[1573]: New session 13 of user core. Oct 8 19:58:45.302694 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 8 19:58:45.456531 sshd[4259]: pam_unix(sshd:session): session closed for user core Oct 8 19:58:45.465646 systemd[1]: Started sshd@13-10.0.0.70:22-10.0.0.1:37080.service - OpenSSH per-connection server daemon (10.0.0.1:37080). Oct 8 19:58:45.466160 systemd[1]: sshd@12-10.0.0.70:22-10.0.0.1:37074.service: Deactivated successfully. Oct 8 19:58:45.469986 systemd-logind[1573]: Session 13 logged out. Waiting for processes to exit. Oct 8 19:58:45.470835 systemd[1]: session-13.scope: Deactivated successfully. Oct 8 19:58:45.474234 systemd-logind[1573]: Removed session 13. Oct 8 19:58:45.504826 sshd[4272]: Accepted publickey for core from 10.0.0.1 port 37080 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:58:45.506909 sshd[4272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:58:45.511420 systemd-logind[1573]: New session 14 of user core. Oct 8 19:58:45.522901 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 8 19:58:45.645977 sshd[4272]: pam_unix(sshd:session): session closed for user core Oct 8 19:58:45.651003 systemd[1]: sshd@13-10.0.0.70:22-10.0.0.1:37080.service: Deactivated successfully. Oct 8 19:58:45.655121 systemd[1]: session-14.scope: Deactivated successfully. Oct 8 19:58:45.656415 systemd-logind[1573]: Session 14 logged out. 
Waiting for processes to exit. Oct 8 19:58:45.657643 systemd-logind[1573]: Removed session 14. Oct 8 19:58:50.656615 systemd[1]: Started sshd@14-10.0.0.70:22-10.0.0.1:56264.service - OpenSSH per-connection server daemon (10.0.0.1:56264). Oct 8 19:58:50.688590 sshd[4290]: Accepted publickey for core from 10.0.0.1 port 56264 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:58:50.690501 sshd[4290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:58:50.694766 systemd-logind[1573]: New session 15 of user core. Oct 8 19:58:50.706521 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 8 19:58:50.818291 sshd[4290]: pam_unix(sshd:session): session closed for user core Oct 8 19:58:50.822926 systemd[1]: sshd@14-10.0.0.70:22-10.0.0.1:56264.service: Deactivated successfully. Oct 8 19:58:50.825478 systemd-logind[1573]: Session 15 logged out. Waiting for processes to exit. Oct 8 19:58:50.825545 systemd[1]: session-15.scope: Deactivated successfully. Oct 8 19:58:50.826694 systemd-logind[1573]: Removed session 15. Oct 8 19:58:55.828539 systemd[1]: Started sshd@15-10.0.0.70:22-10.0.0.1:56276.service - OpenSSH per-connection server daemon (10.0.0.1:56276). Oct 8 19:58:55.859053 sshd[4307]: Accepted publickey for core from 10.0.0.1 port 56276 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:58:55.860751 sshd[4307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:58:55.865004 systemd-logind[1573]: New session 16 of user core. Oct 8 19:58:55.876646 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 8 19:58:55.991089 sshd[4307]: pam_unix(sshd:session): session closed for user core Oct 8 19:58:55.995311 systemd[1]: sshd@15-10.0.0.70:22-10.0.0.1:56276.service: Deactivated successfully. Oct 8 19:58:55.998196 systemd[1]: session-16.scope: Deactivated successfully. Oct 8 19:58:55.999074 systemd-logind[1573]: Session 16 logged out. 
Waiting for processes to exit. Oct 8 19:58:56.000143 systemd-logind[1573]: Removed session 16. Oct 8 19:59:01.009668 systemd[1]: Started sshd@16-10.0.0.70:22-10.0.0.1:58584.service - OpenSSH per-connection server daemon (10.0.0.1:58584). Oct 8 19:59:01.041452 sshd[4322]: Accepted publickey for core from 10.0.0.1 port 58584 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:59:01.043514 sshd[4322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:59:01.047946 systemd-logind[1573]: New session 17 of user core. Oct 8 19:59:01.055782 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 8 19:59:01.178042 sshd[4322]: pam_unix(sshd:session): session closed for user core Oct 8 19:59:01.191580 systemd[1]: Started sshd@17-10.0.0.70:22-10.0.0.1:58590.service - OpenSSH per-connection server daemon (10.0.0.1:58590). Oct 8 19:59:01.192156 systemd[1]: sshd@16-10.0.0.70:22-10.0.0.1:58584.service: Deactivated successfully. Oct 8 19:59:01.195769 systemd[1]: session-17.scope: Deactivated successfully. Oct 8 19:59:01.198206 systemd-logind[1573]: Session 17 logged out. Waiting for processes to exit. Oct 8 19:59:01.199426 systemd-logind[1573]: Removed session 17. Oct 8 19:59:01.229246 sshd[4335]: Accepted publickey for core from 10.0.0.1 port 58590 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:59:01.231012 sshd[4335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:59:01.235476 systemd-logind[1573]: New session 18 of user core. Oct 8 19:59:01.246604 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 8 19:59:01.657133 sshd[4335]: pam_unix(sshd:session): session closed for user core Oct 8 19:59:01.665641 systemd[1]: Started sshd@18-10.0.0.70:22-10.0.0.1:58596.service - OpenSSH per-connection server daemon (10.0.0.1:58596). Oct 8 19:59:01.666676 systemd[1]: sshd@17-10.0.0.70:22-10.0.0.1:58590.service: Deactivated successfully. 
Oct 8 19:59:01.670197 systemd[1]: session-18.scope: Deactivated successfully. Oct 8 19:59:01.670489 systemd-logind[1573]: Session 18 logged out. Waiting for processes to exit. Oct 8 19:59:01.672009 systemd-logind[1573]: Removed session 18. Oct 8 19:59:01.698870 sshd[4348]: Accepted publickey for core from 10.0.0.1 port 58596 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:59:01.700633 sshd[4348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:59:01.705295 systemd-logind[1573]: New session 19 of user core. Oct 8 19:59:01.715626 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 8 19:59:03.190889 sshd[4348]: pam_unix(sshd:session): session closed for user core Oct 8 19:59:03.198514 systemd[1]: Started sshd@19-10.0.0.70:22-10.0.0.1:58612.service - OpenSSH per-connection server daemon (10.0.0.1:58612). Oct 8 19:59:03.201091 systemd[1]: sshd@18-10.0.0.70:22-10.0.0.1:58596.service: Deactivated successfully. Oct 8 19:59:03.206027 systemd[1]: session-19.scope: Deactivated successfully. Oct 8 19:59:03.206990 systemd-logind[1573]: Session 19 logged out. Waiting for processes to exit. Oct 8 19:59:03.208501 systemd-logind[1573]: Removed session 19. Oct 8 19:59:03.241017 sshd[4372]: Accepted publickey for core from 10.0.0.1 port 58612 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:59:03.243080 sshd[4372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:59:03.247645 systemd-logind[1573]: New session 20 of user core. Oct 8 19:59:03.253550 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 8 19:59:03.812316 sshd[4372]: pam_unix(sshd:session): session closed for user core Oct 8 19:59:03.820588 systemd[1]: Started sshd@20-10.0.0.70:22-10.0.0.1:58614.service - OpenSSH per-connection server daemon (10.0.0.1:58614). Oct 8 19:59:03.821487 systemd[1]: sshd@19-10.0.0.70:22-10.0.0.1:58612.service: Deactivated successfully. 
Oct 8 19:59:03.826347 systemd[1]: session-20.scope: Deactivated successfully. Oct 8 19:59:03.827894 systemd-logind[1573]: Session 20 logged out. Waiting for processes to exit. Oct 8 19:59:03.828985 systemd-logind[1573]: Removed session 20. Oct 8 19:59:03.852467 sshd[4387]: Accepted publickey for core from 10.0.0.1 port 58614 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:59:03.854089 sshd[4387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:59:03.858658 systemd-logind[1573]: New session 21 of user core. Oct 8 19:59:03.870540 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 8 19:59:04.042712 sshd[4387]: pam_unix(sshd:session): session closed for user core Oct 8 19:59:04.047402 systemd[1]: sshd@20-10.0.0.70:22-10.0.0.1:58614.service: Deactivated successfully. Oct 8 19:59:04.049859 systemd[1]: session-21.scope: Deactivated successfully. Oct 8 19:59:04.050780 systemd-logind[1573]: Session 21 logged out. Waiting for processes to exit. Oct 8 19:59:04.051897 systemd-logind[1573]: Removed session 21. Oct 8 19:59:07.171175 kubelet[2806]: E1008 19:59:07.170959 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:59:09.057667 systemd[1]: Started sshd@21-10.0.0.70:22-10.0.0.1:58626.service - OpenSSH per-connection server daemon (10.0.0.1:58626). Oct 8 19:59:09.091293 sshd[4407]: Accepted publickey for core from 10.0.0.1 port 58626 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:59:09.093058 sshd[4407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:59:09.097860 systemd-logind[1573]: New session 22 of user core. Oct 8 19:59:09.106641 systemd[1]: Started session-22.scope - Session 22 of User core. 
Oct 8 19:59:09.221104 sshd[4407]: pam_unix(sshd:session): session closed for user core Oct 8 19:59:09.225578 systemd[1]: sshd@21-10.0.0.70:22-10.0.0.1:58626.service: Deactivated successfully. Oct 8 19:59:09.227694 systemd-logind[1573]: Session 22 logged out. Waiting for processes to exit. Oct 8 19:59:09.227852 systemd[1]: session-22.scope: Deactivated successfully. Oct 8 19:59:09.229122 systemd-logind[1573]: Removed session 22. Oct 8 19:59:11.171026 kubelet[2806]: E1008 19:59:11.170970 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:59:12.171161 kubelet[2806]: E1008 19:59:12.171095 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:59:14.233644 systemd[1]: Started sshd@22-10.0.0.70:22-10.0.0.1:41722.service - OpenSSH per-connection server daemon (10.0.0.1:41722). Oct 8 19:59:14.264140 sshd[4425]: Accepted publickey for core from 10.0.0.1 port 41722 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:59:14.265885 sshd[4425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:59:14.270190 systemd-logind[1573]: New session 23 of user core. Oct 8 19:59:14.279551 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 8 19:59:14.390674 sshd[4425]: pam_unix(sshd:session): session closed for user core Oct 8 19:59:14.395739 systemd[1]: sshd@22-10.0.0.70:22-10.0.0.1:41722.service: Deactivated successfully. Oct 8 19:59:14.398209 systemd-logind[1573]: Session 23 logged out. Waiting for processes to exit. Oct 8 19:59:14.398307 systemd[1]: session-23.scope: Deactivated successfully. Oct 8 19:59:14.399321 systemd-logind[1573]: Removed session 23. 
Oct 8 19:59:19.400484 systemd[1]: Started sshd@23-10.0.0.70:22-10.0.0.1:41726.service - OpenSSH per-connection server daemon (10.0.0.1:41726). Oct 8 19:59:19.430980 sshd[4440]: Accepted publickey for core from 10.0.0.1 port 41726 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:59:19.432487 sshd[4440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:59:19.436591 systemd-logind[1573]: New session 24 of user core. Oct 8 19:59:19.445562 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 8 19:59:19.562082 sshd[4440]: pam_unix(sshd:session): session closed for user core Oct 8 19:59:19.566509 systemd[1]: sshd@23-10.0.0.70:22-10.0.0.1:41726.service: Deactivated successfully. Oct 8 19:59:19.569895 systemd[1]: session-24.scope: Deactivated successfully. Oct 8 19:59:19.570725 systemd-logind[1573]: Session 24 logged out. Waiting for processes to exit. Oct 8 19:59:19.571746 systemd-logind[1573]: Removed session 24. Oct 8 19:59:22.171324 kubelet[2806]: E1008 19:59:22.171230 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:59:24.587722 systemd[1]: Started sshd@24-10.0.0.70:22-10.0.0.1:48216.service - OpenSSH per-connection server daemon (10.0.0.1:48216). Oct 8 19:59:24.617599 sshd[4455]: Accepted publickey for core from 10.0.0.1 port 48216 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:59:24.619010 sshd[4455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:59:24.622838 systemd-logind[1573]: New session 25 of user core. Oct 8 19:59:24.637681 systemd[1]: Started session-25.scope - Session 25 of User core. 
Oct 8 19:59:24.746489 sshd[4455]: pam_unix(sshd:session): session closed for user core
Oct 8 19:59:24.756514 systemd[1]: Started sshd@25-10.0.0.70:22-10.0.0.1:48220.service - OpenSSH per-connection server daemon (10.0.0.1:48220).
Oct 8 19:59:24.757216 systemd[1]: sshd@24-10.0.0.70:22-10.0.0.1:48216.service: Deactivated successfully.
Oct 8 19:59:24.759736 systemd[1]: session-25.scope: Deactivated successfully.
Oct 8 19:59:24.761969 systemd-logind[1573]: Session 25 logged out. Waiting for processes to exit.
Oct 8 19:59:24.763252 systemd-logind[1573]: Removed session 25.
Oct 8 19:59:24.789923 sshd[4468]: Accepted publickey for core from 10.0.0.1 port 48220 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI
Oct 8 19:59:24.791583 sshd[4468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:59:24.795874 systemd-logind[1573]: New session 26 of user core.
Oct 8 19:59:24.803506 systemd[1]: Started session-26.scope - Session 26 of User core.
Oct 8 19:59:25.170835 kubelet[2806]: E1008 19:59:25.170785 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:59:26.256124 containerd[1592]: time="2024-10-08T19:59:26.256061488Z" level=info msg="StopContainer for \"26f48a2590b1d5bbf9456de957977a6c7f301571e806b695eabc5c33c50a1e48\" with timeout 30 (s)"
Oct 8 19:59:26.256808 containerd[1592]: time="2024-10-08T19:59:26.256525864Z" level=info msg="Stop container \"26f48a2590b1d5bbf9456de957977a6c7f301571e806b695eabc5c33c50a1e48\" with signal terminated"
Oct 8 19:59:26.303022 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26f48a2590b1d5bbf9456de957977a6c7f301571e806b695eabc5c33c50a1e48-rootfs.mount: Deactivated successfully.
Oct 8 19:59:26.304900 containerd[1592]: time="2024-10-08T19:59:26.304765706Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 8 19:59:26.307000 containerd[1592]: time="2024-10-08T19:59:26.306972940Z" level=info msg="StopContainer for \"0e1ff2bcd9a350688e7090c5b3907d48c403d9c97c0b3a90187f097f41c69cba\" with timeout 2 (s)"
Oct 8 19:59:26.307452 containerd[1592]: time="2024-10-08T19:59:26.307431925Z" level=info msg="Stop container \"0e1ff2bcd9a350688e7090c5b3907d48c403d9c97c0b3a90187f097f41c69cba\" with signal terminated"
Oct 8 19:59:26.310682 containerd[1592]: time="2024-10-08T19:59:26.310614550Z" level=info msg="shim disconnected" id=26f48a2590b1d5bbf9456de957977a6c7f301571e806b695eabc5c33c50a1e48 namespace=k8s.io
Oct 8 19:59:26.310682 containerd[1592]: time="2024-10-08T19:59:26.310672301Z" level=warning msg="cleaning up after shim disconnected" id=26f48a2590b1d5bbf9456de957977a6c7f301571e806b695eabc5c33c50a1e48 namespace=k8s.io
Oct 8 19:59:26.310682 containerd[1592]: time="2024-10-08T19:59:26.310680938Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:59:26.316429 systemd-networkd[1246]: lxc_health: Link DOWN
Oct 8 19:59:26.316440 systemd-networkd[1246]: lxc_health: Lost carrier
Oct 8 19:59:26.339154 containerd[1592]: time="2024-10-08T19:59:26.339091900Z" level=info msg="StopContainer for \"26f48a2590b1d5bbf9456de957977a6c7f301571e806b695eabc5c33c50a1e48\" returns successfully"
Oct 8 19:59:26.339970 containerd[1592]: time="2024-10-08T19:59:26.339909716Z" level=info msg="StopPodSandbox for \"fc4ba7822ff75cc288332409b8e8d2270cfde84123a2043f58537805fa81a14d\""
Oct 8 19:59:26.339970 containerd[1592]: time="2024-10-08T19:59:26.339971695Z" level=info msg="Container to stop \"26f48a2590b1d5bbf9456de957977a6c7f301571e806b695eabc5c33c50a1e48\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 8 19:59:26.342772 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fc4ba7822ff75cc288332409b8e8d2270cfde84123a2043f58537805fa81a14d-shm.mount: Deactivated successfully.
Oct 8 19:59:26.370486 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e1ff2bcd9a350688e7090c5b3907d48c403d9c97c0b3a90187f097f41c69cba-rootfs.mount: Deactivated successfully.
Oct 8 19:59:26.374072 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc4ba7822ff75cc288332409b8e8d2270cfde84123a2043f58537805fa81a14d-rootfs.mount: Deactivated successfully.
Oct 8 19:59:26.502782 containerd[1592]: time="2024-10-08T19:59:26.502699675Z" level=info msg="shim disconnected" id=0e1ff2bcd9a350688e7090c5b3907d48c403d9c97c0b3a90187f097f41c69cba namespace=k8s.io
Oct 8 19:59:26.502782 containerd[1592]: time="2024-10-08T19:59:26.502776673Z" level=warning msg="cleaning up after shim disconnected" id=0e1ff2bcd9a350688e7090c5b3907d48c403d9c97c0b3a90187f097f41c69cba namespace=k8s.io
Oct 8 19:59:26.502782 containerd[1592]: time="2024-10-08T19:59:26.502788826Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:59:26.533053 containerd[1592]: time="2024-10-08T19:59:26.531802720Z" level=info msg="shim disconnected" id=fc4ba7822ff75cc288332409b8e8d2270cfde84123a2043f58537805fa81a14d namespace=k8s.io
Oct 8 19:59:26.533053 containerd[1592]: time="2024-10-08T19:59:26.531878446Z" level=warning msg="cleaning up after shim disconnected" id=fc4ba7822ff75cc288332409b8e8d2270cfde84123a2043f58537805fa81a14d namespace=k8s.io
Oct 8 19:59:26.533053 containerd[1592]: time="2024-10-08T19:59:26.531892013Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:59:26.546981 containerd[1592]: time="2024-10-08T19:59:26.546930540Z" level=info msg="TearDown network for sandbox \"fc4ba7822ff75cc288332409b8e8d2270cfde84123a2043f58537805fa81a14d\" successfully"
Oct 8 19:59:26.546981 containerd[1592]: time="2024-10-08T19:59:26.546976217Z" level=info msg="StopPodSandbox for \"fc4ba7822ff75cc288332409b8e8d2270cfde84123a2043f58537805fa81a14d\" returns successfully"
Oct 8 19:59:26.549012 containerd[1592]: time="2024-10-08T19:59:26.548971343Z" level=info msg="StopContainer for \"0e1ff2bcd9a350688e7090c5b3907d48c403d9c97c0b3a90187f097f41c69cba\" returns successfully"
Oct 8 19:59:26.549377 containerd[1592]: time="2024-10-08T19:59:26.549323763Z" level=info msg="StopPodSandbox for \"cf247eab2b41c65ecb86d169bfa136f449f758138ffc618887a9596acefa30d3\""
Oct 8 19:59:26.549377 containerd[1592]: time="2024-10-08T19:59:26.549353109Z" level=info msg="Container to stop \"4acaa75509fd193433272580b74a2aecd3219e81673088ce2e42c75b5f1515a0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 8 19:59:26.549377 containerd[1592]: time="2024-10-08T19:59:26.549366064Z" level=info msg="Container to stop \"fdf66ccedd88137df6d82f260f0c614753b2ab89599441af9b51f98402f7e220\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 8 19:59:26.549377 containerd[1592]: time="2024-10-08T19:59:26.549374730Z" level=info msg="Container to stop \"635096e03c4874443199e1ff0b798bd6c8489dbd832b0289c87cfb05e4f246f3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 8 19:59:26.549548 containerd[1592]: time="2024-10-08T19:59:26.549384279Z" level=info msg="Container to stop \"667c2056f741118a2c9321c693308bd1f5776246a831d9cf678bc962f04f462f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 8 19:59:26.549548 containerd[1592]: time="2024-10-08T19:59:26.549393066Z" level=info msg="Container to stop \"0e1ff2bcd9a350688e7090c5b3907d48c403d9c97c0b3a90187f097f41c69cba\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 8 19:59:26.584189 containerd[1592]: time="2024-10-08T19:59:26.583979872Z" level=info msg="shim disconnected" id=cf247eab2b41c65ecb86d169bfa136f449f758138ffc618887a9596acefa30d3 namespace=k8s.io
Oct 8 19:59:26.584189 containerd[1592]: time="2024-10-08T19:59:26.584045569Z" level=warning msg="cleaning up after shim disconnected" id=cf247eab2b41c65ecb86d169bfa136f449f758138ffc618887a9596acefa30d3 namespace=k8s.io
Oct 8 19:59:26.584189 containerd[1592]: time="2024-10-08T19:59:26.584056490Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:59:26.599861 containerd[1592]: time="2024-10-08T19:59:26.599811046Z" level=info msg="TearDown network for sandbox \"cf247eab2b41c65ecb86d169bfa136f449f758138ffc618887a9596acefa30d3\" successfully"
Oct 8 19:59:26.599861 containerd[1592]: time="2024-10-08T19:59:26.599848970Z" level=info msg="StopPodSandbox for \"cf247eab2b41c65ecb86d169bfa136f449f758138ffc618887a9596acefa30d3\" returns successfully"
Oct 8 19:59:26.683371 kubelet[2806]: I1008 19:59:26.683325 2806 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ffaec8a5-b623-4d7d-a01f-77cccbad00fd-cilium-config-path\") pod \"ffaec8a5-b623-4d7d-a01f-77cccbad00fd\" (UID: \"ffaec8a5-b623-4d7d-a01f-77cccbad00fd\") "
Oct 8 19:59:26.683886 kubelet[2806]: I1008 19:59:26.683399 2806 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjq5r\" (UniqueName: \"kubernetes.io/projected/ffaec8a5-b623-4d7d-a01f-77cccbad00fd-kube-api-access-fjq5r\") pod \"ffaec8a5-b623-4d7d-a01f-77cccbad00fd\" (UID: \"ffaec8a5-b623-4d7d-a01f-77cccbad00fd\") "
Oct 8 19:59:26.686948 kubelet[2806]: I1008 19:59:26.686901 2806 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffaec8a5-b623-4d7d-a01f-77cccbad00fd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ffaec8a5-b623-4d7d-a01f-77cccbad00fd" (UID: "ffaec8a5-b623-4d7d-a01f-77cccbad00fd"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 8 19:59:26.687701 kubelet[2806]: I1008 19:59:26.687610 2806 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffaec8a5-b623-4d7d-a01f-77cccbad00fd-kube-api-access-fjq5r" (OuterVolumeSpecName: "kube-api-access-fjq5r") pod "ffaec8a5-b623-4d7d-a01f-77cccbad00fd" (UID: "ffaec8a5-b623-4d7d-a01f-77cccbad00fd"). InnerVolumeSpecName "kube-api-access-fjq5r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 8 19:59:26.783845 kubelet[2806]: I1008 19:59:26.783697 2806 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-host-proc-sys-net\") pod \"e5baa5af-a53c-484c-ae9d-a661f7bf40eb\" (UID: \"e5baa5af-a53c-484c-ae9d-a661f7bf40eb\") "
Oct 8 19:59:26.783845 kubelet[2806]: I1008 19:59:26.783760 2806 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-hubble-tls\") pod \"e5baa5af-a53c-484c-ae9d-a661f7bf40eb\" (UID: \"e5baa5af-a53c-484c-ae9d-a661f7bf40eb\") "
Oct 8 19:59:26.783845 kubelet[2806]: I1008 19:59:26.783792 2806 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-host-proc-sys-kernel\") pod \"e5baa5af-a53c-484c-ae9d-a661f7bf40eb\" (UID: \"e5baa5af-a53c-484c-ae9d-a661f7bf40eb\") "
Oct 8 19:59:26.783845 kubelet[2806]: I1008 19:59:26.783816 2806 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-bpf-maps\") pod \"e5baa5af-a53c-484c-ae9d-a661f7bf40eb\" (UID: \"e5baa5af-a53c-484c-ae9d-a661f7bf40eb\") "
Oct 8 19:59:26.783845 kubelet[2806]: I1008 19:59:26.783837 2806 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-cilium-config-path\") pod \"e5baa5af-a53c-484c-ae9d-a661f7bf40eb\" (UID: \"e5baa5af-a53c-484c-ae9d-a661f7bf40eb\") "
Oct 8 19:59:26.783845 kubelet[2806]: I1008 19:59:26.783854 2806 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-cilium-run\") pod \"e5baa5af-a53c-484c-ae9d-a661f7bf40eb\" (UID: \"e5baa5af-a53c-484c-ae9d-a661f7bf40eb\") "
Oct 8 19:59:26.784094 kubelet[2806]: I1008 19:59:26.783847 2806 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e5baa5af-a53c-484c-ae9d-a661f7bf40eb" (UID: "e5baa5af-a53c-484c-ae9d-a661f7bf40eb"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 8 19:59:26.784094 kubelet[2806]: I1008 19:59:26.783918 2806 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e5baa5af-a53c-484c-ae9d-a661f7bf40eb" (UID: "e5baa5af-a53c-484c-ae9d-a661f7bf40eb"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 8 19:59:26.784094 kubelet[2806]: I1008 19:59:26.783878 2806 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-etc-cni-netd\") pod \"e5baa5af-a53c-484c-ae9d-a661f7bf40eb\" (UID: \"e5baa5af-a53c-484c-ae9d-a661f7bf40eb\") "
Oct 8 19:59:26.784094 kubelet[2806]: I1008 19:59:26.783951 2806 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e5baa5af-a53c-484c-ae9d-a661f7bf40eb" (UID: "e5baa5af-a53c-484c-ae9d-a661f7bf40eb"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 8 19:59:26.784094 kubelet[2806]: I1008 19:59:26.783974 2806 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-hostproc\") pod \"e5baa5af-a53c-484c-ae9d-a661f7bf40eb\" (UID: \"e5baa5af-a53c-484c-ae9d-a661f7bf40eb\") "
Oct 8 19:59:26.784216 kubelet[2806]: I1008 19:59:26.784012 2806 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xqf2\" (UniqueName: \"kubernetes.io/projected/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-kube-api-access-9xqf2\") pod \"e5baa5af-a53c-484c-ae9d-a661f7bf40eb\" (UID: \"e5baa5af-a53c-484c-ae9d-a661f7bf40eb\") "
Oct 8 19:59:26.784216 kubelet[2806]: I1008 19:59:26.784043 2806 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-lib-modules\") pod \"e5baa5af-a53c-484c-ae9d-a661f7bf40eb\" (UID: \"e5baa5af-a53c-484c-ae9d-a661f7bf40eb\") "
Oct 8 19:59:26.784216 kubelet[2806]: I1008 19:59:26.784067 2806 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-cilium-cgroup\") pod \"e5baa5af-a53c-484c-ae9d-a661f7bf40eb\" (UID: \"e5baa5af-a53c-484c-ae9d-a661f7bf40eb\") "
Oct 8 19:59:26.784216 kubelet[2806]: I1008 19:59:26.784094 2806 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-clustermesh-secrets\") pod \"e5baa5af-a53c-484c-ae9d-a661f7bf40eb\" (UID: \"e5baa5af-a53c-484c-ae9d-a661f7bf40eb\") "
Oct 8 19:59:26.784216 kubelet[2806]: I1008 19:59:26.784120 2806 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-cni-path\") pod \"e5baa5af-a53c-484c-ae9d-a661f7bf40eb\" (UID: \"e5baa5af-a53c-484c-ae9d-a661f7bf40eb\") "
Oct 8 19:59:26.784216 kubelet[2806]: I1008 19:59:26.784142 2806 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-xtables-lock\") pod \"e5baa5af-a53c-484c-ae9d-a661f7bf40eb\" (UID: \"e5baa5af-a53c-484c-ae9d-a661f7bf40eb\") "
Oct 8 19:59:26.784381 kubelet[2806]: I1008 19:59:26.784189 2806 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Oct 8 19:59:26.784381 kubelet[2806]: I1008 19:59:26.784205 2806 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Oct 8 19:59:26.784381 kubelet[2806]: I1008 19:59:26.784220 2806 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ffaec8a5-b623-4d7d-a01f-77cccbad00fd-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Oct 8 19:59:26.784381 kubelet[2806]: I1008 19:59:26.784250 2806 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-fjq5r\" (UniqueName: \"kubernetes.io/projected/ffaec8a5-b623-4d7d-a01f-77cccbad00fd-kube-api-access-fjq5r\") on node \"localhost\" DevicePath \"\""
Oct 8 19:59:26.784381 kubelet[2806]: I1008 19:59:26.784314 2806 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Oct 8 19:59:26.784381 kubelet[2806]: I1008 19:59:26.784342 2806 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e5baa5af-a53c-484c-ae9d-a661f7bf40eb" (UID: "e5baa5af-a53c-484c-ae9d-a661f7bf40eb"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 8 19:59:26.785278 kubelet[2806]: I1008 19:59:26.784386 2806 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e5baa5af-a53c-484c-ae9d-a661f7bf40eb" (UID: "e5baa5af-a53c-484c-ae9d-a661f7bf40eb"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 8 19:59:26.785278 kubelet[2806]: I1008 19:59:26.784429 2806 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e5baa5af-a53c-484c-ae9d-a661f7bf40eb" (UID: "e5baa5af-a53c-484c-ae9d-a661f7bf40eb"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 8 19:59:26.785278 kubelet[2806]: I1008 19:59:26.783975 2806 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e5baa5af-a53c-484c-ae9d-a661f7bf40eb" (UID: "e5baa5af-a53c-484c-ae9d-a661f7bf40eb"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 8 19:59:26.785278 kubelet[2806]: I1008 19:59:26.784733 2806 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-hostproc" (OuterVolumeSpecName: "hostproc") pod "e5baa5af-a53c-484c-ae9d-a661f7bf40eb" (UID: "e5baa5af-a53c-484c-ae9d-a661f7bf40eb"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 8 19:59:26.785278 kubelet[2806]: I1008 19:59:26.784773 2806 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e5baa5af-a53c-484c-ae9d-a661f7bf40eb" (UID: "e5baa5af-a53c-484c-ae9d-a661f7bf40eb"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 8 19:59:26.787513 kubelet[2806]: I1008 19:59:26.787424 2806 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e5baa5af-a53c-484c-ae9d-a661f7bf40eb" (UID: "e5baa5af-a53c-484c-ae9d-a661f7bf40eb"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 8 19:59:26.787692 kubelet[2806]: I1008 19:59:26.787666 2806 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-cni-path" (OuterVolumeSpecName: "cni-path") pod "e5baa5af-a53c-484c-ae9d-a661f7bf40eb" (UID: "e5baa5af-a53c-484c-ae9d-a661f7bf40eb"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 8 19:59:26.788365 kubelet[2806]: I1008 19:59:26.788144 2806 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-kube-api-access-9xqf2" (OuterVolumeSpecName: "kube-api-access-9xqf2") pod "e5baa5af-a53c-484c-ae9d-a661f7bf40eb" (UID: "e5baa5af-a53c-484c-ae9d-a661f7bf40eb"). InnerVolumeSpecName "kube-api-access-9xqf2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 8 19:59:26.788365 kubelet[2806]: I1008 19:59:26.788241 2806 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e5baa5af-a53c-484c-ae9d-a661f7bf40eb" (UID: "e5baa5af-a53c-484c-ae9d-a661f7bf40eb"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 8 19:59:26.788365 kubelet[2806]: I1008 19:59:26.788310 2806 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e5baa5af-a53c-484c-ae9d-a661f7bf40eb" (UID: "e5baa5af-a53c-484c-ae9d-a661f7bf40eb"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 8 19:59:26.792331 kubelet[2806]: I1008 19:59:26.792303 2806 scope.go:117] "RemoveContainer" containerID="26f48a2590b1d5bbf9456de957977a6c7f301571e806b695eabc5c33c50a1e48"
Oct 8 19:59:26.793819 containerd[1592]: time="2024-10-08T19:59:26.793475245Z" level=info msg="RemoveContainer for \"26f48a2590b1d5bbf9456de957977a6c7f301571e806b695eabc5c33c50a1e48\""
Oct 8 19:59:26.802870 containerd[1592]: time="2024-10-08T19:59:26.802823413Z" level=info msg="RemoveContainer for \"26f48a2590b1d5bbf9456de957977a6c7f301571e806b695eabc5c33c50a1e48\" returns successfully"
Oct 8 19:59:26.803090 kubelet[2806]: I1008 19:59:26.803066 2806 scope.go:117] "RemoveContainer" containerID="26f48a2590b1d5bbf9456de957977a6c7f301571e806b695eabc5c33c50a1e48"
Oct 8 19:59:26.803866 containerd[1592]: time="2024-10-08T19:59:26.803822420Z" level=error msg="ContainerStatus for \"26f48a2590b1d5bbf9456de957977a6c7f301571e806b695eabc5c33c50a1e48\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"26f48a2590b1d5bbf9456de957977a6c7f301571e806b695eabc5c33c50a1e48\": not found"
Oct 8 19:59:26.807433 kubelet[2806]: E1008 19:59:26.806415 2806 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"26f48a2590b1d5bbf9456de957977a6c7f301571e806b695eabc5c33c50a1e48\": not found" containerID="26f48a2590b1d5bbf9456de957977a6c7f301571e806b695eabc5c33c50a1e48"
Oct 8 19:59:26.807433 kubelet[2806]: I1008 19:59:26.806554 2806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"26f48a2590b1d5bbf9456de957977a6c7f301571e806b695eabc5c33c50a1e48"} err="failed to get container status \"26f48a2590b1d5bbf9456de957977a6c7f301571e806b695eabc5c33c50a1e48\": rpc error: code = NotFound desc = an error occurred when try to find container \"26f48a2590b1d5bbf9456de957977a6c7f301571e806b695eabc5c33c50a1e48\": not found"
Oct 8 19:59:26.807433 kubelet[2806]: I1008 19:59:26.806577 2806 scope.go:117] "RemoveContainer" containerID="0e1ff2bcd9a350688e7090c5b3907d48c403d9c97c0b3a90187f097f41c69cba"
Oct 8 19:59:26.808021 containerd[1592]: time="2024-10-08T19:59:26.807987187Z" level=info msg="RemoveContainer for \"0e1ff2bcd9a350688e7090c5b3907d48c403d9c97c0b3a90187f097f41c69cba\""
Oct 8 19:59:26.812465 containerd[1592]: time="2024-10-08T19:59:26.812432946Z" level=info msg="RemoveContainer for \"0e1ff2bcd9a350688e7090c5b3907d48c403d9c97c0b3a90187f097f41c69cba\" returns successfully"
Oct 8 19:59:26.812695 kubelet[2806]: I1008 19:59:26.812640 2806 scope.go:117] "RemoveContainer" containerID="667c2056f741118a2c9321c693308bd1f5776246a831d9cf678bc962f04f462f"
Oct 8 19:59:26.813942 containerd[1592]: time="2024-10-08T19:59:26.813905704Z" level=info msg="RemoveContainer for \"667c2056f741118a2c9321c693308bd1f5776246a831d9cf678bc962f04f462f\""
Oct 8 19:59:26.818143 containerd[1592]: time="2024-10-08T19:59:26.818112794Z" level=info msg="RemoveContainer for \"667c2056f741118a2c9321c693308bd1f5776246a831d9cf678bc962f04f462f\" returns successfully"
Oct 8 19:59:26.818257 kubelet[2806]: I1008 19:59:26.818233 2806 scope.go:117] "RemoveContainer" containerID="635096e03c4874443199e1ff0b798bd6c8489dbd832b0289c87cfb05e4f246f3"
Oct 8 19:59:26.819297 containerd[1592]: time="2024-10-08T19:59:26.819225699Z" level=info msg="RemoveContainer for \"635096e03c4874443199e1ff0b798bd6c8489dbd832b0289c87cfb05e4f246f3\""
Oct 8 19:59:26.822346 containerd[1592]: time="2024-10-08T19:59:26.822316367Z" level=info msg="RemoveContainer for \"635096e03c4874443199e1ff0b798bd6c8489dbd832b0289c87cfb05e4f246f3\" returns successfully"
Oct 8 19:59:26.822473 kubelet[2806]: I1008 19:59:26.822440 2806 scope.go:117] "RemoveContainer" containerID="4acaa75509fd193433272580b74a2aecd3219e81673088ce2e42c75b5f1515a0"
Oct 8 19:59:26.823232 containerd[1592]: time="2024-10-08T19:59:26.823193918Z" level=info msg="RemoveContainer for \"4acaa75509fd193433272580b74a2aecd3219e81673088ce2e42c75b5f1515a0\""
Oct 8 19:59:26.837046 containerd[1592]: time="2024-10-08T19:59:26.836992665Z" level=info msg="RemoveContainer for \"4acaa75509fd193433272580b74a2aecd3219e81673088ce2e42c75b5f1515a0\" returns successfully"
Oct 8 19:59:26.837890 kubelet[2806]: I1008 19:59:26.837848 2806 scope.go:117] "RemoveContainer" containerID="fdf66ccedd88137df6d82f260f0c614753b2ab89599441af9b51f98402f7e220"
Oct 8 19:59:26.848390 containerd[1592]: time="2024-10-08T19:59:26.848355998Z" level=info msg="RemoveContainer for \"fdf66ccedd88137df6d82f260f0c614753b2ab89599441af9b51f98402f7e220\""
Oct 8 19:59:26.851829 containerd[1592]: time="2024-10-08T19:59:26.851790759Z" level=info msg="RemoveContainer for \"fdf66ccedd88137df6d82f260f0c614753b2ab89599441af9b51f98402f7e220\" returns successfully"
Oct 8 19:59:26.851965 kubelet[2806]: I1008 19:59:26.851942 2806 scope.go:117] "RemoveContainer" containerID="0e1ff2bcd9a350688e7090c5b3907d48c403d9c97c0b3a90187f097f41c69cba"
Oct 8 19:59:26.852126 containerd[1592]: time="2024-10-08T19:59:26.852090826Z" level=error msg="ContainerStatus for \"0e1ff2bcd9a350688e7090c5b3907d48c403d9c97c0b3a90187f097f41c69cba\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0e1ff2bcd9a350688e7090c5b3907d48c403d9c97c0b3a90187f097f41c69cba\": not found"
Oct 8 19:59:26.852252 kubelet[2806]: E1008 19:59:26.852226 2806 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0e1ff2bcd9a350688e7090c5b3907d48c403d9c97c0b3a90187f097f41c69cba\": not found" containerID="0e1ff2bcd9a350688e7090c5b3907d48c403d9c97c0b3a90187f097f41c69cba"
Oct 8 19:59:26.852302 kubelet[2806]: I1008 19:59:26.852292 2806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0e1ff2bcd9a350688e7090c5b3907d48c403d9c97c0b3a90187f097f41c69cba"} err="failed to get container status \"0e1ff2bcd9a350688e7090c5b3907d48c403d9c97c0b3a90187f097f41c69cba\": rpc error: code = NotFound desc = an error occurred when try to find container \"0e1ff2bcd9a350688e7090c5b3907d48c403d9c97c0b3a90187f097f41c69cba\": not found"
Oct 8 19:59:26.852329 kubelet[2806]: I1008 19:59:26.852308 2806 scope.go:117] "RemoveContainer" containerID="667c2056f741118a2c9321c693308bd1f5776246a831d9cf678bc962f04f462f"
Oct 8 19:59:26.852455 containerd[1592]: time="2024-10-08T19:59:26.852424170Z" level=error msg="ContainerStatus for \"667c2056f741118a2c9321c693308bd1f5776246a831d9cf678bc962f04f462f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"667c2056f741118a2c9321c693308bd1f5776246a831d9cf678bc962f04f462f\": not found"
Oct 8 19:59:26.852555 kubelet[2806]: E1008 19:59:26.852535 2806 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"667c2056f741118a2c9321c693308bd1f5776246a831d9cf678bc962f04f462f\": not found" containerID="667c2056f741118a2c9321c693308bd1f5776246a831d9cf678bc962f04f462f"
Oct 8 19:59:26.852595 kubelet[2806]: I1008 19:59:26.852574 2806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"667c2056f741118a2c9321c693308bd1f5776246a831d9cf678bc962f04f462f"} err="failed to get container status \"667c2056f741118a2c9321c693308bd1f5776246a831d9cf678bc962f04f462f\": rpc error: code = NotFound desc = an error occurred when try to find container \"667c2056f741118a2c9321c693308bd1f5776246a831d9cf678bc962f04f462f\": not found"
Oct 8 19:59:26.852595 kubelet[2806]: I1008 19:59:26.852586 2806 scope.go:117] "RemoveContainer" containerID="635096e03c4874443199e1ff0b798bd6c8489dbd832b0289c87cfb05e4f246f3"
Oct 8 19:59:26.852791 containerd[1592]: time="2024-10-08T19:59:26.852738715Z" level=error msg="ContainerStatus for \"635096e03c4874443199e1ff0b798bd6c8489dbd832b0289c87cfb05e4f246f3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"635096e03c4874443199e1ff0b798bd6c8489dbd832b0289c87cfb05e4f246f3\": not found"
Oct 8 19:59:26.852865 kubelet[2806]: E1008 19:59:26.852850 2806 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"635096e03c4874443199e1ff0b798bd6c8489dbd832b0289c87cfb05e4f246f3\": not found" containerID="635096e03c4874443199e1ff0b798bd6c8489dbd832b0289c87cfb05e4f246f3"
Oct 8 19:59:26.852893 kubelet[2806]: I1008 19:59:26.852875 2806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"635096e03c4874443199e1ff0b798bd6c8489dbd832b0289c87cfb05e4f246f3"} err="failed to get container status \"635096e03c4874443199e1ff0b798bd6c8489dbd832b0289c87cfb05e4f246f3\": rpc error: code = NotFound desc = an error occurred when try to find container \"635096e03c4874443199e1ff0b798bd6c8489dbd832b0289c87cfb05e4f246f3\": not found"
Oct 8 19:59:26.852893 kubelet[2806]: I1008 19:59:26.852888 2806 scope.go:117] "RemoveContainer" containerID="4acaa75509fd193433272580b74a2aecd3219e81673088ce2e42c75b5f1515a0"
Oct 8 19:59:26.853051 containerd[1592]: time="2024-10-08T19:59:26.853022633Z" level=error msg="ContainerStatus for \"4acaa75509fd193433272580b74a2aecd3219e81673088ce2e42c75b5f1515a0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4acaa75509fd193433272580b74a2aecd3219e81673088ce2e42c75b5f1515a0\": not found"
Oct 8 19:59:26.853212 kubelet[2806]: E1008 19:59:26.853190 2806 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4acaa75509fd193433272580b74a2aecd3219e81673088ce2e42c75b5f1515a0\": not found" containerID="4acaa75509fd193433272580b74a2aecd3219e81673088ce2e42c75b5f1515a0"
Oct 8 19:59:26.853251 kubelet[2806]: I1008 19:59:26.853234 2806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4acaa75509fd193433272580b74a2aecd3219e81673088ce2e42c75b5f1515a0"} err="failed to get container status \"4acaa75509fd193433272580b74a2aecd3219e81673088ce2e42c75b5f1515a0\": rpc error: code = NotFound desc = an error occurred when try to find container \"4acaa75509fd193433272580b74a2aecd3219e81673088ce2e42c75b5f1515a0\": not found"
Oct 8 19:59:26.853251 kubelet[2806]: I1008 19:59:26.853250 2806 scope.go:117] "RemoveContainer" containerID="fdf66ccedd88137df6d82f260f0c614753b2ab89599441af9b51f98402f7e220"
Oct 8 19:59:26.853478 containerd[1592]: time="2024-10-08T19:59:26.853449315Z" level=error msg="ContainerStatus for \"fdf66ccedd88137df6d82f260f0c614753b2ab89599441af9b51f98402f7e220\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fdf66ccedd88137df6d82f260f0c614753b2ab89599441af9b51f98402f7e220\": not found"
Oct 8 19:59:26.853568 kubelet[2806]: E1008 19:59:26.853553 2806 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fdf66ccedd88137df6d82f260f0c614753b2ab89599441af9b51f98402f7e220\": not found" containerID="fdf66ccedd88137df6d82f260f0c614753b2ab89599441af9b51f98402f7e220"
Oct 8 19:59:26.853597 kubelet[2806]: I1008 19:59:26.853575 2806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fdf66ccedd88137df6d82f260f0c614753b2ab89599441af9b51f98402f7e220"} err="failed to get container status \"fdf66ccedd88137df6d82f260f0c614753b2ab89599441af9b51f98402f7e220\": rpc error: code = NotFound desc = an error occurred when try to find container \"fdf66ccedd88137df6d82f260f0c614753b2ab89599441af9b51f98402f7e220\": not found"
Oct 8 19:59:26.884829 kubelet[2806]: I1008 19:59:26.884792 2806 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Oct 8 19:59:26.884829 kubelet[2806]: I1008 19:59:26.884816 2806 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Oct 8 19:59:26.884829 kubelet[2806]: I1008 19:59:26.884826 2806 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-cni-path\") on node \"localhost\" DevicePath \"\""
Oct 8 19:59:26.884829 kubelet[2806]: I1008 19:59:26.884836 2806 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-xtables-lock\") on node \"localhost\" DevicePath \"\""
Oct 8 19:59:26.884829 kubelet[2806]: I1008 19:59:26.884845 2806 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-hubble-tls\") on node \"localhost\" DevicePath \"\""
Oct 8 19:59:26.885065 kubelet[2806]: I1008 19:59:26.884855 2806 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Oct 8 19:59:26.885065 kubelet[2806]: I1008 19:59:26.884864 2806 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-bpf-maps\") on node \"localhost\" DevicePath \"\""
Oct 8 19:59:26.885065 kubelet[2806]: I1008
19:59:26.884874 2806 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-cilium-run\") on node \"localhost\" DevicePath \"\"" Oct 8 19:59:26.885065 kubelet[2806]: I1008 19:59:26.884882 2806 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-hostproc\") on node \"localhost\" DevicePath \"\"" Oct 8 19:59:26.885065 kubelet[2806]: I1008 19:59:26.884892 2806 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-9xqf2\" (UniqueName: \"kubernetes.io/projected/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-kube-api-access-9xqf2\") on node \"localhost\" DevicePath \"\"" Oct 8 19:59:26.885065 kubelet[2806]: I1008 19:59:26.884901 2806 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e5baa5af-a53c-484c-ae9d-a661f7bf40eb-lib-modules\") on node \"localhost\" DevicePath \"\"" Oct 8 19:59:27.277967 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cf247eab2b41c65ecb86d169bfa136f449f758138ffc618887a9596acefa30d3-rootfs.mount: Deactivated successfully. Oct 8 19:59:27.278181 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cf247eab2b41c65ecb86d169bfa136f449f758138ffc618887a9596acefa30d3-shm.mount: Deactivated successfully. Oct 8 19:59:27.278343 systemd[1]: var-lib-kubelet-pods-ffaec8a5\x2db623\x2d4d7d\x2da01f\x2d77cccbad00fd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfjq5r.mount: Deactivated successfully. Oct 8 19:59:27.278491 systemd[1]: var-lib-kubelet-pods-e5baa5af\x2da53c\x2d484c\x2dae9d\x2da661f7bf40eb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9xqf2.mount: Deactivated successfully. Oct 8 19:59:27.278637 systemd[1]: var-lib-kubelet-pods-e5baa5af\x2da53c\x2d484c\x2dae9d\x2da661f7bf40eb-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Oct 8 19:59:27.278786 systemd[1]: var-lib-kubelet-pods-e5baa5af\x2da53c\x2d484c\x2dae9d\x2da661f7bf40eb-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 8 19:59:28.173496 kubelet[2806]: I1008 19:59:28.173452 2806 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="e5baa5af-a53c-484c-ae9d-a661f7bf40eb" path="/var/lib/kubelet/pods/e5baa5af-a53c-484c-ae9d-a661f7bf40eb/volumes" Oct 8 19:59:28.174428 kubelet[2806]: I1008 19:59:28.174409 2806 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ffaec8a5-b623-4d7d-a01f-77cccbad00fd" path="/var/lib/kubelet/pods/ffaec8a5-b623-4d7d-a01f-77cccbad00fd/volumes" Oct 8 19:59:28.212941 sshd[4468]: pam_unix(sshd:session): session closed for user core Oct 8 19:59:28.220636 systemd[1]: Started sshd@26-10.0.0.70:22-10.0.0.1:48226.service - OpenSSH per-connection server daemon (10.0.0.1:48226). Oct 8 19:59:28.221358 systemd[1]: sshd@25-10.0.0.70:22-10.0.0.1:48220.service: Deactivated successfully. Oct 8 19:59:28.225118 systemd[1]: session-26.scope: Deactivated successfully. Oct 8 19:59:28.228138 systemd-logind[1573]: Session 26 logged out. Waiting for processes to exit. Oct 8 19:59:28.230742 systemd-logind[1573]: Removed session 26. Oct 8 19:59:28.263076 sshd[4633]: Accepted publickey for core from 10.0.0.1 port 48226 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:59:28.265321 sshd[4633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:59:28.270850 systemd-logind[1573]: New session 27 of user core. Oct 8 19:59:28.280706 systemd[1]: Started session-27.scope - Session 27 of User core. 
Oct 8 19:59:29.248406 kubelet[2806]: E1008 19:59:29.248342 2806 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 8 19:59:29.429395 sshd[4633]: pam_unix(sshd:session): session closed for user core Oct 8 19:59:29.439592 systemd[1]: Started sshd@27-10.0.0.70:22-10.0.0.1:48228.service - OpenSSH per-connection server daemon (10.0.0.1:48228). Oct 8 19:59:29.440207 systemd[1]: sshd@26-10.0.0.70:22-10.0.0.1:48226.service: Deactivated successfully. Oct 8 19:59:29.442857 systemd[1]: session-27.scope: Deactivated successfully. Oct 8 19:59:29.445123 systemd-logind[1573]: Session 27 logged out. Waiting for processes to exit. Oct 8 19:59:29.446436 systemd-logind[1573]: Removed session 27. Oct 8 19:59:29.472668 sshd[4647]: Accepted publickey for core from 10.0.0.1 port 48228 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:59:29.474654 sshd[4647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:59:29.479422 systemd-logind[1573]: New session 28 of user core. Oct 8 19:59:29.494623 systemd[1]: Started session-28.scope - Session 28 of User core. Oct 8 19:59:29.548239 sshd[4647]: pam_unix(sshd:session): session closed for user core Oct 8 19:59:29.558144 systemd[1]: Started sshd@28-10.0.0.70:22-10.0.0.1:48234.service - OpenSSH per-connection server daemon (10.0.0.1:48234). Oct 8 19:59:29.558742 systemd[1]: sshd@27-10.0.0.70:22-10.0.0.1:48228.service: Deactivated successfully. Oct 8 19:59:29.560952 systemd[1]: session-28.scope: Deactivated successfully. Oct 8 19:59:29.563013 systemd-logind[1573]: Session 28 logged out. Waiting for processes to exit. Oct 8 19:59:29.564430 systemd-logind[1573]: Removed session 28. 
Oct 8 19:59:29.590602 sshd[4658]: Accepted publickey for core from 10.0.0.1 port 48234 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:59:29.592561 sshd[4658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:59:29.597693 systemd-logind[1573]: New session 29 of user core. Oct 8 19:59:29.604690 systemd[1]: Started session-29.scope - Session 29 of User core. Oct 8 19:59:29.697029 kubelet[2806]: I1008 19:59:29.696968 2806 topology_manager.go:215] "Topology Admit Handler" podUID="86b77fec-dc7b-4527-90aa-e02b9152eac9" podNamespace="kube-system" podName="cilium-hkqzg" Oct 8 19:59:29.697203 kubelet[2806]: E1008 19:59:29.697058 2806 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e5baa5af-a53c-484c-ae9d-a661f7bf40eb" containerName="apply-sysctl-overwrites" Oct 8 19:59:29.697203 kubelet[2806]: E1008 19:59:29.697071 2806 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ffaec8a5-b623-4d7d-a01f-77cccbad00fd" containerName="cilium-operator" Oct 8 19:59:29.697203 kubelet[2806]: E1008 19:59:29.697081 2806 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e5baa5af-a53c-484c-ae9d-a661f7bf40eb" containerName="mount-bpf-fs" Oct 8 19:59:29.697203 kubelet[2806]: E1008 19:59:29.697091 2806 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e5baa5af-a53c-484c-ae9d-a661f7bf40eb" containerName="mount-cgroup" Oct 8 19:59:29.697203 kubelet[2806]: E1008 19:59:29.697101 2806 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e5baa5af-a53c-484c-ae9d-a661f7bf40eb" containerName="clean-cilium-state" Oct 8 19:59:29.697203 kubelet[2806]: E1008 19:59:29.697111 2806 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e5baa5af-a53c-484c-ae9d-a661f7bf40eb" containerName="cilium-agent" Oct 8 19:59:29.697203 kubelet[2806]: I1008 19:59:29.697140 2806 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ffaec8a5-b623-4d7d-a01f-77cccbad00fd" containerName="cilium-operator" Oct 8 19:59:29.697203 kubelet[2806]: I1008 19:59:29.697149 2806 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5baa5af-a53c-484c-ae9d-a661f7bf40eb" containerName="cilium-agent" Oct 8 19:59:29.801021 kubelet[2806]: I1008 19:59:29.800661 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/86b77fec-dc7b-4527-90aa-e02b9152eac9-host-proc-sys-kernel\") pod \"cilium-hkqzg\" (UID: \"86b77fec-dc7b-4527-90aa-e02b9152eac9\") " pod="kube-system/cilium-hkqzg" Oct 8 19:59:29.801021 kubelet[2806]: I1008 19:59:29.800755 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/86b77fec-dc7b-4527-90aa-e02b9152eac9-cilium-run\") pod \"cilium-hkqzg\" (UID: \"86b77fec-dc7b-4527-90aa-e02b9152eac9\") " pod="kube-system/cilium-hkqzg" Oct 8 19:59:29.801021 kubelet[2806]: I1008 19:59:29.800802 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/86b77fec-dc7b-4527-90aa-e02b9152eac9-bpf-maps\") pod \"cilium-hkqzg\" (UID: \"86b77fec-dc7b-4527-90aa-e02b9152eac9\") " pod="kube-system/cilium-hkqzg" Oct 8 19:59:29.801021 kubelet[2806]: I1008 19:59:29.800825 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/86b77fec-dc7b-4527-90aa-e02b9152eac9-clustermesh-secrets\") pod \"cilium-hkqzg\" (UID: \"86b77fec-dc7b-4527-90aa-e02b9152eac9\") " pod="kube-system/cilium-hkqzg" Oct 8 19:59:29.801021 kubelet[2806]: I1008 19:59:29.800896 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/86b77fec-dc7b-4527-90aa-e02b9152eac9-host-proc-sys-net\") pod \"cilium-hkqzg\" (UID: \"86b77fec-dc7b-4527-90aa-e02b9152eac9\") " pod="kube-system/cilium-hkqzg" Oct 8 19:59:29.801021 kubelet[2806]: I1008 19:59:29.800917 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/86b77fec-dc7b-4527-90aa-e02b9152eac9-cilium-cgroup\") pod \"cilium-hkqzg\" (UID: \"86b77fec-dc7b-4527-90aa-e02b9152eac9\") " pod="kube-system/cilium-hkqzg" Oct 8 19:59:29.801245 kubelet[2806]: I1008 19:59:29.800939 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/86b77fec-dc7b-4527-90aa-e02b9152eac9-hostproc\") pod \"cilium-hkqzg\" (UID: \"86b77fec-dc7b-4527-90aa-e02b9152eac9\") " pod="kube-system/cilium-hkqzg" Oct 8 19:59:29.801245 kubelet[2806]: I1008 19:59:29.801047 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/86b77fec-dc7b-4527-90aa-e02b9152eac9-cilium-ipsec-secrets\") pod \"cilium-hkqzg\" (UID: \"86b77fec-dc7b-4527-90aa-e02b9152eac9\") " pod="kube-system/cilium-hkqzg" Oct 8 19:59:29.801245 kubelet[2806]: I1008 19:59:29.801095 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq7x5\" (UniqueName: \"kubernetes.io/projected/86b77fec-dc7b-4527-90aa-e02b9152eac9-kube-api-access-tq7x5\") pod \"cilium-hkqzg\" (UID: \"86b77fec-dc7b-4527-90aa-e02b9152eac9\") " pod="kube-system/cilium-hkqzg" Oct 8 19:59:29.801245 kubelet[2806]: I1008 19:59:29.801129 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86b77fec-dc7b-4527-90aa-e02b9152eac9-xtables-lock\") pod \"cilium-hkqzg\" (UID: 
\"86b77fec-dc7b-4527-90aa-e02b9152eac9\") " pod="kube-system/cilium-hkqzg" Oct 8 19:59:29.801245 kubelet[2806]: I1008 19:59:29.801157 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/86b77fec-dc7b-4527-90aa-e02b9152eac9-hubble-tls\") pod \"cilium-hkqzg\" (UID: \"86b77fec-dc7b-4527-90aa-e02b9152eac9\") " pod="kube-system/cilium-hkqzg" Oct 8 19:59:29.801245 kubelet[2806]: I1008 19:59:29.801184 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/86b77fec-dc7b-4527-90aa-e02b9152eac9-cni-path\") pod \"cilium-hkqzg\" (UID: \"86b77fec-dc7b-4527-90aa-e02b9152eac9\") " pod="kube-system/cilium-hkqzg" Oct 8 19:59:29.801393 kubelet[2806]: I1008 19:59:29.801219 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/86b77fec-dc7b-4527-90aa-e02b9152eac9-etc-cni-netd\") pod \"cilium-hkqzg\" (UID: \"86b77fec-dc7b-4527-90aa-e02b9152eac9\") " pod="kube-system/cilium-hkqzg" Oct 8 19:59:29.801393 kubelet[2806]: I1008 19:59:29.801238 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86b77fec-dc7b-4527-90aa-e02b9152eac9-lib-modules\") pod \"cilium-hkqzg\" (UID: \"86b77fec-dc7b-4527-90aa-e02b9152eac9\") " pod="kube-system/cilium-hkqzg" Oct 8 19:59:29.801393 kubelet[2806]: I1008 19:59:29.801256 2806 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/86b77fec-dc7b-4527-90aa-e02b9152eac9-cilium-config-path\") pod \"cilium-hkqzg\" (UID: \"86b77fec-dc7b-4527-90aa-e02b9152eac9\") " pod="kube-system/cilium-hkqzg" Oct 8 19:59:30.002957 kubelet[2806]: E1008 19:59:30.002878 2806 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:59:30.003649 containerd[1592]: time="2024-10-08T19:59:30.003541277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hkqzg,Uid:86b77fec-dc7b-4527-90aa-e02b9152eac9,Namespace:kube-system,Attempt:0,}" Oct 8 19:59:30.027674 containerd[1592]: time="2024-10-08T19:59:30.027528667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:59:30.027674 containerd[1592]: time="2024-10-08T19:59:30.027652065Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:59:30.027926 containerd[1592]: time="2024-10-08T19:59:30.027678296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:59:30.027926 containerd[1592]: time="2024-10-08T19:59:30.027793438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:59:30.068776 containerd[1592]: time="2024-10-08T19:59:30.068644482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hkqzg,Uid:86b77fec-dc7b-4527-90aa-e02b9152eac9,Namespace:kube-system,Attempt:0,} returns sandbox id \"63b062a6e1d81e43416b1590445a6edbe2e20344cc5c3af327ede4ae92df97aa\"" Oct 8 19:59:30.069332 kubelet[2806]: E1008 19:59:30.069303 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:59:30.071399 containerd[1592]: time="2024-10-08T19:59:30.071364478Z" level=info msg="CreateContainer within sandbox \"63b062a6e1d81e43416b1590445a6edbe2e20344cc5c3af327ede4ae92df97aa\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 8 19:59:30.088624 containerd[1592]: time="2024-10-08T19:59:30.088557629Z" level=info msg="CreateContainer within sandbox \"63b062a6e1d81e43416b1590445a6edbe2e20344cc5c3af327ede4ae92df97aa\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8a41b9d5f4da774c18de728bc1273138317be8c123356c66f827a950bbed307f\"" Oct 8 19:59:30.089401 containerd[1592]: time="2024-10-08T19:59:30.089362615Z" level=info msg="StartContainer for \"8a41b9d5f4da774c18de728bc1273138317be8c123356c66f827a950bbed307f\"" Oct 8 19:59:30.154538 containerd[1592]: time="2024-10-08T19:59:30.154479857Z" level=info msg="StartContainer for \"8a41b9d5f4da774c18de728bc1273138317be8c123356c66f827a950bbed307f\" returns successfully" Oct 8 19:59:30.206672 containerd[1592]: time="2024-10-08T19:59:30.206597320Z" level=info msg="shim disconnected" id=8a41b9d5f4da774c18de728bc1273138317be8c123356c66f827a950bbed307f namespace=k8s.io Oct 8 19:59:30.206672 containerd[1592]: time="2024-10-08T19:59:30.206661774Z" level=warning msg="cleaning up after shim disconnected" id=8a41b9d5f4da774c18de728bc1273138317be8c123356c66f827a950bbed307f 
namespace=k8s.io Oct 8 19:59:30.206672 containerd[1592]: time="2024-10-08T19:59:30.206674078Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:59:30.805619 kubelet[2806]: E1008 19:59:30.805580 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:59:30.807743 containerd[1592]: time="2024-10-08T19:59:30.807698721Z" level=info msg="CreateContainer within sandbox \"63b062a6e1d81e43416b1590445a6edbe2e20344cc5c3af327ede4ae92df97aa\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 8 19:59:30.821041 containerd[1592]: time="2024-10-08T19:59:30.820988970Z" level=info msg="CreateContainer within sandbox \"63b062a6e1d81e43416b1590445a6edbe2e20344cc5c3af327ede4ae92df97aa\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d5d3d2eb4e0deed5fcb096bbc4e42959b550537f9ae35d3c2c6e815bcd100288\"" Oct 8 19:59:30.821579 containerd[1592]: time="2024-10-08T19:59:30.821550896Z" level=info msg="StartContainer for \"d5d3d2eb4e0deed5fcb096bbc4e42959b550537f9ae35d3c2c6e815bcd100288\"" Oct 8 19:59:30.884006 containerd[1592]: time="2024-10-08T19:59:30.883960406Z" level=info msg="StartContainer for \"d5d3d2eb4e0deed5fcb096bbc4e42959b550537f9ae35d3c2c6e815bcd100288\" returns successfully" Oct 8 19:59:30.915908 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d5d3d2eb4e0deed5fcb096bbc4e42959b550537f9ae35d3c2c6e815bcd100288-rootfs.mount: Deactivated successfully. 
Oct 8 19:59:30.921762 containerd[1592]: time="2024-10-08T19:59:30.921661965Z" level=info msg="shim disconnected" id=d5d3d2eb4e0deed5fcb096bbc4e42959b550537f9ae35d3c2c6e815bcd100288 namespace=k8s.io Oct 8 19:59:30.921762 containerd[1592]: time="2024-10-08T19:59:30.921729485Z" level=warning msg="cleaning up after shim disconnected" id=d5d3d2eb4e0deed5fcb096bbc4e42959b550537f9ae35d3c2c6e815bcd100288 namespace=k8s.io Oct 8 19:59:30.921762 containerd[1592]: time="2024-10-08T19:59:30.921740467Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:59:31.818001 kubelet[2806]: E1008 19:59:31.817820 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:59:31.819541 containerd[1592]: time="2024-10-08T19:59:31.819482502Z" level=info msg="CreateContainer within sandbox \"63b062a6e1d81e43416b1590445a6edbe2e20344cc5c3af327ede4ae92df97aa\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 8 19:59:31.878299 containerd[1592]: time="2024-10-08T19:59:31.878229830Z" level=info msg="CreateContainer within sandbox \"63b062a6e1d81e43416b1590445a6edbe2e20344cc5c3af327ede4ae92df97aa\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0b6a1cd482fdb6b81abbd25ee4f9b811962d66ca955d8b6fa422b11b6fd8f9ca\"" Oct 8 19:59:31.879004 containerd[1592]: time="2024-10-08T19:59:31.878838056Z" level=info msg="StartContainer for \"0b6a1cd482fdb6b81abbd25ee4f9b811962d66ca955d8b6fa422b11b6fd8f9ca\"" Oct 8 19:59:31.942979 containerd[1592]: time="2024-10-08T19:59:31.942939856Z" level=info msg="StartContainer for \"0b6a1cd482fdb6b81abbd25ee4f9b811962d66ca955d8b6fa422b11b6fd8f9ca\" returns successfully" Oct 8 19:59:31.965794 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0b6a1cd482fdb6b81abbd25ee4f9b811962d66ca955d8b6fa422b11b6fd8f9ca-rootfs.mount: Deactivated successfully. 
Oct 8 19:59:31.968940 containerd[1592]: time="2024-10-08T19:59:31.968879095Z" level=info msg="shim disconnected" id=0b6a1cd482fdb6b81abbd25ee4f9b811962d66ca955d8b6fa422b11b6fd8f9ca namespace=k8s.io Oct 8 19:59:31.969045 containerd[1592]: time="2024-10-08T19:59:31.968938410Z" level=warning msg="cleaning up after shim disconnected" id=0b6a1cd482fdb6b81abbd25ee4f9b811962d66ca955d8b6fa422b11b6fd8f9ca namespace=k8s.io Oct 8 19:59:31.969045 containerd[1592]: time="2024-10-08T19:59:31.968949331Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:59:32.822129 kubelet[2806]: E1008 19:59:32.822095 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:59:32.826053 containerd[1592]: time="2024-10-08T19:59:32.826004975Z" level=info msg="CreateContainer within sandbox \"63b062a6e1d81e43416b1590445a6edbe2e20344cc5c3af327ede4ae92df97aa\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Oct 8 19:59:33.013353 containerd[1592]: time="2024-10-08T19:59:33.013288747Z" level=info msg="CreateContainer within sandbox \"63b062a6e1d81e43416b1590445a6edbe2e20344cc5c3af327ede4ae92df97aa\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d4605e0473eea22149fe7b0ea0e57b8266901570126c5195f3dea4e584d4e56a\"" Oct 8 19:59:33.013909 containerd[1592]: time="2024-10-08T19:59:33.013863230Z" level=info msg="StartContainer for \"d4605e0473eea22149fe7b0ea0e57b8266901570126c5195f3dea4e584d4e56a\"" Oct 8 19:59:33.094062 containerd[1592]: time="2024-10-08T19:59:33.093910233Z" level=info msg="StartContainer for \"d4605e0473eea22149fe7b0ea0e57b8266901570126c5195f3dea4e584d4e56a\" returns successfully" Oct 8 19:59:33.114569 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d4605e0473eea22149fe7b0ea0e57b8266901570126c5195f3dea4e584d4e56a-rootfs.mount: Deactivated successfully. 
Oct 8 19:59:33.237132 containerd[1592]: time="2024-10-08T19:59:33.237053305Z" level=info msg="shim disconnected" id=d4605e0473eea22149fe7b0ea0e57b8266901570126c5195f3dea4e584d4e56a namespace=k8s.io Oct 8 19:59:33.237132 containerd[1592]: time="2024-10-08T19:59:33.237126056Z" level=warning msg="cleaning up after shim disconnected" id=d4605e0473eea22149fe7b0ea0e57b8266901570126c5195f3dea4e584d4e56a namespace=k8s.io Oct 8 19:59:33.237132 containerd[1592]: time="2024-10-08T19:59:33.237137437Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:59:33.826734 kubelet[2806]: E1008 19:59:33.826702 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:59:33.829219 containerd[1592]: time="2024-10-08T19:59:33.829165457Z" level=info msg="CreateContainer within sandbox \"63b062a6e1d81e43416b1590445a6edbe2e20344cc5c3af327ede4ae92df97aa\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Oct 8 19:59:34.089913 containerd[1592]: time="2024-10-08T19:59:34.089677363Z" level=info msg="CreateContainer within sandbox \"63b062a6e1d81e43416b1590445a6edbe2e20344cc5c3af327ede4ae92df97aa\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f599cf51c8e93b72b7024792180a25fc311cc9a7726fcc694bbb07bf35783513\"" Oct 8 19:59:34.090793 containerd[1592]: time="2024-10-08T19:59:34.090751643Z" level=info msg="StartContainer for \"f599cf51c8e93b72b7024792180a25fc311cc9a7726fcc694bbb07bf35783513\"" Oct 8 19:59:34.249990 kubelet[2806]: E1008 19:59:34.249949 2806 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 8 19:59:34.252134 containerd[1592]: time="2024-10-08T19:59:34.252067737Z" level=info msg="StartContainer for \"f599cf51c8e93b72b7024792180a25fc311cc9a7726fcc694bbb07bf35783513\" 
returns successfully" Oct 8 19:59:34.638297 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Oct 8 19:59:34.832558 kubelet[2806]: E1008 19:59:34.832519 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:59:34.892352 kubelet[2806]: I1008 19:59:34.892220 2806 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-hkqzg" podStartSLOduration=5.892166199 podStartE2EDuration="5.892166199s" podCreationTimestamp="2024-10-08 19:59:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:59:34.891819868 +0000 UTC m=+100.828583334" watchObservedRunningTime="2024-10-08 19:59:34.892166199 +0000 UTC m=+100.828929665" Oct 8 19:59:36.004746 kubelet[2806]: E1008 19:59:36.004694 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:59:36.580422 kubelet[2806]: I1008 19:59:36.580379 2806 setters.go:568] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-10-08T19:59:36Z","lastTransitionTime":"2024-10-08T19:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Oct 8 19:59:38.052561 systemd-networkd[1246]: lxc_health: Link UP Oct 8 19:59:38.063321 systemd-networkd[1246]: lxc_health: Gained carrier Oct 8 19:59:38.625313 systemd[1]: run-containerd-runc-k8s.io-f599cf51c8e93b72b7024792180a25fc311cc9a7726fcc694bbb07bf35783513-runc.8S3gxe.mount: Deactivated successfully. 
Oct 8 19:59:39.853462 systemd-networkd[1246]: lxc_health: Gained IPv6LL Oct 8 19:59:40.004999 kubelet[2806]: E1008 19:59:40.004959 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:59:40.741129 systemd[1]: run-containerd-runc-k8s.io-f599cf51c8e93b72b7024792180a25fc311cc9a7726fcc694bbb07bf35783513-runc.a9kGIk.mount: Deactivated successfully. Oct 8 19:59:40.843752 kubelet[2806]: E1008 19:59:40.843719 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:59:45.055216 sshd[4658]: pam_unix(sshd:session): session closed for user core Oct 8 19:59:45.060702 systemd[1]: sshd@28-10.0.0.70:22-10.0.0.1:48234.service: Deactivated successfully. Oct 8 19:59:45.063853 systemd[1]: session-29.scope: Deactivated successfully. Oct 8 19:59:45.065060 systemd-logind[1573]: Session 29 logged out. Waiting for processes to exit. Oct 8 19:59:45.066480 systemd-logind[1573]: Removed session 29.