Sep 4 17:30:10.889384 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 4 15:49:08 -00 2024
Sep 4 17:30:10.889407 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf
Sep 4 17:30:10.889419 kernel: BIOS-provided physical RAM map:
Sep 4 17:30:10.889426 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 4 17:30:10.889432 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 4 17:30:10.889438 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 4 17:30:10.889446 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdcfff] usable
Sep 4 17:30:10.889452 kernel: BIOS-e820: [mem 0x000000009cfdd000-0x000000009cffffff] reserved
Sep 4 17:30:10.889458 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 4 17:30:10.889467 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 4 17:30:10.889473 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 4 17:30:10.889480 kernel: NX (Execute Disable) protection: active
Sep 4 17:30:10.889486 kernel: APIC: Static calls initialized
Sep 4 17:30:10.889493 kernel: SMBIOS 2.8 present.
Sep 4 17:30:10.889501 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 4 17:30:10.889511 kernel: Hypervisor detected: KVM
Sep 4 17:30:10.889517 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 4 17:30:10.889528 kernel: kvm-clock: using sched offset of 2872509440 cycles
Sep 4 17:30:10.889535 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 4 17:30:10.889543 kernel: tsc: Detected 2794.744 MHz processor
Sep 4 17:30:10.889553 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 4 17:30:10.889560 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 4 17:30:10.889568 kernel: last_pfn = 0x9cfdd max_arch_pfn = 0x400000000
Sep 4 17:30:10.889575 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 4 17:30:10.889585 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 4 17:30:10.889592 kernel: Using GB pages for direct mapping
Sep 4 17:30:10.889599 kernel: ACPI: Early table checksum verification disabled
Sep 4 17:30:10.889606 kernel: ACPI: RSDP 0x00000000000F59C0 000014 (v00 BOCHS )
Sep 4 17:30:10.889613 kernel: ACPI: RSDT 0x000000009CFE1BDD 000034 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:30:10.889620 kernel: ACPI: FACP 0x000000009CFE1A79 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:30:10.889627 kernel: ACPI: DSDT 0x000000009CFE0040 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:30:10.889634 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 4 17:30:10.889641 kernel: ACPI: APIC 0x000000009CFE1AED 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:30:10.889651 kernel: ACPI: HPET 0x000000009CFE1B7D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:30:10.889658 kernel: ACPI: WAET 0x000000009CFE1BB5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:30:10.889665 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe1a79-0x9cfe1aec]
Sep 4 17:30:10.889672 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe1a78]
Sep 4 17:30:10.889679 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 4 17:30:10.889686 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe1aed-0x9cfe1b7c]
Sep 4 17:30:10.889694 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe1b7d-0x9cfe1bb4]
Sep 4 17:30:10.889711 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe1bb5-0x9cfe1bdc]
Sep 4 17:30:10.889721 kernel: No NUMA configuration found
Sep 4 17:30:10.889728 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdcfff]
Sep 4 17:30:10.889736 kernel: NODE_DATA(0) allocated [mem 0x9cfd7000-0x9cfdcfff]
Sep 4 17:30:10.889743 kernel: Zone ranges:
Sep 4 17:30:10.889750 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 4 17:30:10.889757 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdcfff]
Sep 4 17:30:10.889787 kernel: Normal empty
Sep 4 17:30:10.889794 kernel: Movable zone start for each node
Sep 4 17:30:10.889801 kernel: Early memory node ranges
Sep 4 17:30:10.889808 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 4 17:30:10.889816 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdcfff]
Sep 4 17:30:10.889823 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdcfff]
Sep 4 17:30:10.889830 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 4 17:30:10.889837 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 4 17:30:10.889845 kernel: On node 0, zone DMA32: 12323 pages in unavailable ranges
Sep 4 17:30:10.889854 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 4 17:30:10.889862 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 4 17:30:10.889869 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 4 17:30:10.889876 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 4 17:30:10.889883 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 4 17:30:10.889891 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 4 17:30:10.889898 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 4 17:30:10.889905 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 4 17:30:10.889912 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 4 17:30:10.889922 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 4 17:30:10.889929 kernel: TSC deadline timer available
Sep 4 17:30:10.889939 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Sep 4 17:30:10.889947 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 4 17:30:10.889954 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 4 17:30:10.889961 kernel: kvm-guest: setup PV sched yield
Sep 4 17:30:10.889968 kernel: [mem 0x9d000000-0xfeffbfff] available for PCI devices
Sep 4 17:30:10.889976 kernel: Booting paravirtualized kernel on KVM
Sep 4 17:30:10.889983 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 4 17:30:10.889993 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 4 17:30:10.890002 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u524288
Sep 4 17:30:10.890010 kernel: pcpu-alloc: s196904 r8192 d32472 u524288 alloc=1*2097152
Sep 4 17:30:10.890017 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 4 17:30:10.890024 kernel: kvm-guest: PV spinlocks enabled
Sep 4 17:30:10.890031 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 4 17:30:10.890040 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf
Sep 4 17:30:10.890047 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 17:30:10.890055 kernel: random: crng init done
Sep 4 17:30:10.890065 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 4 17:30:10.890072 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 4 17:30:10.890079 kernel: Fallback order for Node 0: 0
Sep 4 17:30:10.890087 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632733
Sep 4 17:30:10.890094 kernel: Policy zone: DMA32
Sep 4 17:30:10.890101 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 17:30:10.890109 kernel: Memory: 2428456K/2571756K available (12288K kernel code, 2303K rwdata, 22640K rodata, 49336K init, 2008K bss, 143040K reserved, 0K cma-reserved)
Sep 4 17:30:10.890117 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 4 17:30:10.890126 kernel: ftrace: allocating 37670 entries in 148 pages
Sep 4 17:30:10.890134 kernel: ftrace: allocated 148 pages with 3 groups
Sep 4 17:30:10.890141 kernel: Dynamic Preempt: voluntary
Sep 4 17:30:10.890148 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 17:30:10.890156 kernel: rcu: RCU event tracing is enabled.
Sep 4 17:30:10.890163 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 4 17:30:10.890171 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 17:30:10.890178 kernel: Rude variant of Tasks RCU enabled.
Sep 4 17:30:10.890185 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 17:30:10.890193 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 17:30:10.890203 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 4 17:30:10.890210 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 4 17:30:10.890218 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 4 17:30:10.890225 kernel: Console: colour VGA+ 80x25
Sep 4 17:30:10.890232 kernel: printk: console [ttyS0] enabled
Sep 4 17:30:10.890239 kernel: ACPI: Core revision 20230628
Sep 4 17:30:10.890247 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 4 17:30:10.890254 kernel: APIC: Switch to symmetric I/O mode setup
Sep 4 17:30:10.890261 kernel: x2apic enabled
Sep 4 17:30:10.890271 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 4 17:30:10.890278 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 4 17:30:10.890286 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 4 17:30:10.890295 kernel: kvm-guest: setup PV IPIs
Sep 4 17:30:10.890303 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 4 17:30:10.890310 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 4 17:30:10.890317 kernel: Calibrating delay loop (skipped) preset value.. 5589.48 BogoMIPS (lpj=2794744)
Sep 4 17:30:10.890325 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 4 17:30:10.890343 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 4 17:30:10.890350 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 4 17:30:10.890358 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 4 17:30:10.890366 kernel: Spectre V2 : Mitigation: Retpolines
Sep 4 17:30:10.890376 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Sep 4 17:30:10.890383 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Sep 4 17:30:10.890391 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 4 17:30:10.890399 kernel: RETBleed: Mitigation: untrained return thunk
Sep 4 17:30:10.890406 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 4 17:30:10.890417 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 4 17:30:10.890424 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 4 17:30:10.890435 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 4 17:30:10.890443 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 4 17:30:10.890450 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 4 17:30:10.890458 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 4 17:30:10.890466 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 4 17:30:10.890473 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 4 17:30:10.890484 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 4 17:30:10.890504 kernel: Freeing SMP alternatives memory: 32K
Sep 4 17:30:10.890512 kernel: pid_max: default: 32768 minimum: 301
Sep 4 17:30:10.890520 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Sep 4 17:30:10.890528 kernel: SELinux: Initializing.
Sep 4 17:30:10.890536 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 17:30:10.890543 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 17:30:10.890551 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 4 17:30:10.890561 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:30:10.890569 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:30:10.890577 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:30:10.890585 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 4 17:30:10.890592 kernel: ... version: 0
Sep 4 17:30:10.890600 kernel: ... bit width: 48
Sep 4 17:30:10.890607 kernel: ... generic registers: 6
Sep 4 17:30:10.890615 kernel: ... value mask: 0000ffffffffffff
Sep 4 17:30:10.890623 kernel: ... max period: 00007fffffffffff
Sep 4 17:30:10.890630 kernel: ... fixed-purpose events: 0
Sep 4 17:30:10.890640 kernel: ... event mask: 000000000000003f
Sep 4 17:30:10.890648 kernel: signal: max sigframe size: 1776
Sep 4 17:30:10.890656 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 17:30:10.890663 kernel: rcu: Max phase no-delay instances is 400.
Sep 4 17:30:10.890671 kernel: smp: Bringing up secondary CPUs ...
Sep 4 17:30:10.890679 kernel: smpboot: x86: Booting SMP configuration:
Sep 4 17:30:10.890686 kernel: .... node #0, CPUs: #1 #2 #3
Sep 4 17:30:10.890697 kernel: smp: Brought up 1 node, 4 CPUs
Sep 4 17:30:10.890721 kernel: smpboot: Max logical packages: 1
Sep 4 17:30:10.890731 kernel: smpboot: Total of 4 processors activated (22357.95 BogoMIPS)
Sep 4 17:30:10.890738 kernel: devtmpfs: initialized
Sep 4 17:30:10.890746 kernel: x86/mm: Memory block size: 128MB
Sep 4 17:30:10.890754 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 17:30:10.890884 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 4 17:30:10.890892 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 17:30:10.890899 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 17:30:10.890907 kernel: audit: initializing netlink subsys (disabled)
Sep 4 17:30:10.890915 kernel: audit: type=2000 audit(1725471010.807:1): state=initialized audit_enabled=0 res=1
Sep 4 17:30:10.890926 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 17:30:10.890934 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 4 17:30:10.890941 kernel: cpuidle: using governor menu
Sep 4 17:30:10.890949 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 17:30:10.890956 kernel: dca service started, version 1.12.1
Sep 4 17:30:10.890964 kernel: PCI: Using configuration type 1 for base access
Sep 4 17:30:10.890972 kernel: PCI: Using configuration type 1 for extended access
Sep 4 17:30:10.890979 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 4 17:30:10.890987 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 4 17:30:10.890997 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 4 17:30:10.891005 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 17:30:10.891012 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 17:30:10.891020 kernel: ACPI: Added _OSI(Module Device)
Sep 4 17:30:10.891027 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 17:30:10.891035 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Sep 4 17:30:10.891042 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 17:30:10.891050 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 4 17:30:10.891057 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 4 17:30:10.891068 kernel: ACPI: Interpreter enabled
Sep 4 17:30:10.891075 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 4 17:30:10.891083 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 4 17:30:10.891090 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 4 17:30:10.891098 kernel: PCI: Using E820 reservations for host bridge windows
Sep 4 17:30:10.891105 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep 4 17:30:10.891113 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 4 17:30:10.891336 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 4 17:30:10.891353 kernel: acpiphp: Slot [3] registered
Sep 4 17:30:10.891361 kernel: acpiphp: Slot [4] registered
Sep 4 17:30:10.891368 kernel: acpiphp: Slot [5] registered
Sep 4 17:30:10.891376 kernel: acpiphp: Slot [6] registered
Sep 4 17:30:10.891383 kernel: acpiphp: Slot [7] registered
Sep 4 17:30:10.891391 kernel: acpiphp: Slot [8] registered
Sep 4 17:30:10.891398 kernel: acpiphp: Slot [9] registered
Sep 4 17:30:10.891406 kernel: acpiphp: Slot [10] registered
Sep 4 17:30:10.891413 kernel: acpiphp: Slot [11] registered
Sep 4 17:30:10.891423 kernel: acpiphp: Slot [12] registered
Sep 4 17:30:10.891431 kernel: acpiphp: Slot [13] registered
Sep 4 17:30:10.891438 kernel: acpiphp: Slot [14] registered
Sep 4 17:30:10.891446 kernel: acpiphp: Slot [15] registered
Sep 4 17:30:10.891453 kernel: acpiphp: Slot [16] registered
Sep 4 17:30:10.891461 kernel: acpiphp: Slot [17] registered
Sep 4 17:30:10.891468 kernel: acpiphp: Slot [18] registered
Sep 4 17:30:10.891476 kernel: acpiphp: Slot [19] registered
Sep 4 17:30:10.891483 kernel: acpiphp: Slot [20] registered
Sep 4 17:30:10.891491 kernel: acpiphp: Slot [21] registered
Sep 4 17:30:10.891501 kernel: acpiphp: Slot [22] registered
Sep 4 17:30:10.891508 kernel: acpiphp: Slot [23] registered
Sep 4 17:30:10.891516 kernel: acpiphp: Slot [24] registered
Sep 4 17:30:10.891523 kernel: acpiphp: Slot [25] registered
Sep 4 17:30:10.891531 kernel: acpiphp: Slot [26] registered
Sep 4 17:30:10.891538 kernel: acpiphp: Slot [27] registered
Sep 4 17:30:10.891546 kernel: acpiphp: Slot [28] registered
Sep 4 17:30:10.891553 kernel: acpiphp: Slot [29] registered
Sep 4 17:30:10.891561 kernel: acpiphp: Slot [30] registered
Sep 4 17:30:10.891570 kernel: acpiphp: Slot [31] registered
Sep 4 17:30:10.891578 kernel: PCI host bridge to bus 0000:00
Sep 4 17:30:10.891738 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 4 17:30:10.891879 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 4 17:30:10.891996 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 4 17:30:10.892112 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window]
Sep 4 17:30:10.892227 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Sep 4 17:30:10.892342 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 4 17:30:10.892505 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Sep 4 17:30:10.892650 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Sep 4 17:30:10.892818 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Sep 4 17:30:10.892971 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf]
Sep 4 17:30:10.893096 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Sep 4 17:30:10.893229 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Sep 4 17:30:10.893361 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Sep 4 17:30:10.893487 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Sep 4 17:30:10.893648 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Sep 4 17:30:10.893805 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Sep 4 17:30:10.893935 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Sep 4 17:30:10.894079 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000
Sep 4 17:30:10.894212 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Sep 4 17:30:10.894337 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Sep 4 17:30:10.894462 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Sep 4 17:30:10.894586 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 4 17:30:10.894740 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00
Sep 4 17:30:10.894891 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc09f]
Sep 4 17:30:10.895022 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Sep 4 17:30:10.895155 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 4 17:30:10.895302 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Sep 4 17:30:10.895430 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Sep 4 17:30:10.895556 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Sep 4 17:30:10.895681 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 4 17:30:10.895858 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Sep 4 17:30:10.895988 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0a0-0xc0bf]
Sep 4 17:30:10.896121 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Sep 4 17:30:10.896247 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 4 17:30:10.896372 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Sep 4 17:30:10.896382 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 4 17:30:10.896391 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 4 17:30:10.896399 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 4 17:30:10.896406 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 4 17:30:10.896414 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 4 17:30:10.896426 kernel: iommu: Default domain type: Translated
Sep 4 17:30:10.896433 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 4 17:30:10.896441 kernel: PCI: Using ACPI for IRQ routing
Sep 4 17:30:10.896449 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 4 17:30:10.896456 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 4 17:30:10.896464 kernel: e820: reserve RAM buffer [mem 0x9cfdd000-0x9fffffff]
Sep 4 17:30:10.896589 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Sep 4 17:30:10.896723 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Sep 4 17:30:10.896910 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 4 17:30:10.896926 kernel: vgaarb: loaded
Sep 4 17:30:10.896934 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 4 17:30:10.896942 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 4 17:30:10.896949 kernel: clocksource: Switched to clocksource kvm-clock
Sep 4 17:30:10.896957 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 17:30:10.896965 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 17:30:10.896972 kernel: pnp: PnP ACPI init
Sep 4 17:30:10.897118 kernel: pnp 00:02: [dma 2]
Sep 4 17:30:10.897133 kernel: pnp: PnP ACPI: found 6 devices
Sep 4 17:30:10.897141 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 4 17:30:10.897149 kernel: NET: Registered PF_INET protocol family
Sep 4 17:30:10.897157 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 4 17:30:10.897165 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 4 17:30:10.897173 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 17:30:10.897180 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 4 17:30:10.897188 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 4 17:30:10.897196 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 4 17:30:10.897206 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 17:30:10.897215 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 17:30:10.897224 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 17:30:10.897233 kernel: NET: Registered PF_XDP protocol family
Sep 4 17:30:10.897352 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 4 17:30:10.897467 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 4 17:30:10.897581 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 4 17:30:10.897696 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window]
Sep 4 17:30:10.897839 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Sep 4 17:30:10.897966 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Sep 4 17:30:10.898092 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 4 17:30:10.898103 kernel: PCI: CLS 0 bytes, default 64
Sep 4 17:30:10.898111 kernel: Initialise system trusted keyrings
Sep 4 17:30:10.898119 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 4 17:30:10.898126 kernel: Key type asymmetric registered
Sep 4 17:30:10.898134 kernel: Asymmetric key parser 'x509' registered
Sep 4 17:30:10.898142 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 4 17:30:10.898155 kernel: io scheduler mq-deadline registered
Sep 4 17:30:10.898162 kernel: io scheduler kyber registered
Sep 4 17:30:10.898170 kernel: io scheduler bfq registered
Sep 4 17:30:10.898178 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 4 17:30:10.898186 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Sep 4 17:30:10.898194 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Sep 4 17:30:10.898201 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Sep 4 17:30:10.898209 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 4 17:30:10.898217 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 4 17:30:10.898227 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 4 17:30:10.898235 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 4 17:30:10.898243 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 4 17:30:10.898392 kernel: rtc_cmos 00:05: RTC can wake from S4
Sep 4 17:30:10.898404 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 4 17:30:10.898519 kernel: rtc_cmos 00:05: registered as rtc0
Sep 4 17:30:10.898636 kernel: rtc_cmos 00:05: setting system clock to 2024-09-04T17:30:10 UTC (1725471010)
Sep 4 17:30:10.898813 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 4 17:30:10.898829 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 4 17:30:10.898837 kernel: NET: Registered PF_INET6 protocol family
Sep 4 17:30:10.898845 kernel: Segment Routing with IPv6
Sep 4 17:30:10.898852 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 17:30:10.898860 kernel: NET: Registered PF_PACKET protocol family
Sep 4 17:30:10.898868 kernel: Key type dns_resolver registered
Sep 4 17:30:10.898875 kernel: IPI shorthand broadcast: enabled
Sep 4 17:30:10.898883 kernel: sched_clock: Marking stable (806002483, 149499629)->(979164011, -23661899)
Sep 4 17:30:10.898890 kernel: registered taskstats version 1
Sep 4 17:30:10.898901 kernel: Loading compiled-in X.509 certificates
Sep 4 17:30:10.898908 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: a53bb4e7e3319f75620f709d8a6c7aef0adb3b02'
Sep 4 17:30:10.898916 kernel: Key type .fscrypt registered
Sep 4 17:30:10.898924 kernel: Key type fscrypt-provisioning registered
Sep 4 17:30:10.898931 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 4 17:30:10.898939 kernel: ima: Allocated hash algorithm: sha1
Sep 4 17:30:10.898947 kernel: ima: No architecture policies found
Sep 4 17:30:10.898954 kernel: clk: Disabling unused clocks
Sep 4 17:30:10.898965 kernel: Freeing unused kernel image (initmem) memory: 49336K
Sep 4 17:30:10.898972 kernel: Write protecting the kernel read-only data: 36864k
Sep 4 17:30:10.898980 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K
Sep 4 17:30:10.898988 kernel: Run /init as init process
Sep 4 17:30:10.898995 kernel: with arguments:
Sep 4 17:30:10.899002 kernel: /init
Sep 4 17:30:10.899010 kernel: with environment:
Sep 4 17:30:10.899017 kernel: HOME=/
Sep 4 17:30:10.899041 kernel: TERM=linux
Sep 4 17:30:10.899051 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 4 17:30:10.899064 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 4 17:30:10.899074 systemd[1]: Detected virtualization kvm.
Sep 4 17:30:10.899083 systemd[1]: Detected architecture x86-64.
Sep 4 17:30:10.899091 systemd[1]: Running in initrd.
Sep 4 17:30:10.899099 systemd[1]: No hostname configured, using default hostname.
Sep 4 17:30:10.899108 systemd[1]: Hostname set to .
Sep 4 17:30:10.899119 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 17:30:10.899127 systemd[1]: Queued start job for default target initrd.target.
Sep 4 17:30:10.899136 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:30:10.899144 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:30:10.899153 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 4 17:30:10.899162 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 17:30:10.899170 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 4 17:30:10.899179 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 4 17:30:10.899192 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 4 17:30:10.899203 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 4 17:30:10.899212 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:30:10.899222 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:30:10.899231 systemd[1]: Reached target paths.target - Path Units.
Sep 4 17:30:10.899239 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 17:30:10.899248 systemd[1]: Reached target swap.target - Swaps.
Sep 4 17:30:10.899258 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 17:30:10.899267 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 17:30:10.899276 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 17:30:10.899284 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 4 17:30:10.899293 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 4 17:30:10.899301 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:30:10.899310 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:30:10.899318 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:30:10.899327 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 17:30:10.899338 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 4 17:30:10.899346 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 17:30:10.899355 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 4 17:30:10.899363 systemd[1]: Starting systemd-fsck-usr.service...
Sep 4 17:30:10.899372 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 17:30:10.899385 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 17:30:10.899394 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:30:10.899402 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 4 17:30:10.899411 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:30:10.899419 systemd[1]: Finished systemd-fsck-usr.service.
Sep 4 17:30:10.899429 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 17:30:10.899458 systemd-journald[193]: Collecting audit messages is disabled.
Sep 4 17:30:10.899477 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 17:30:10.899489 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 17:30:10.899498 systemd-journald[193]: Journal started
Sep 4 17:30:10.899517 systemd-journald[193]: Runtime Journal (/run/log/journal/705077f13db74bc88f7501ea593ccc39) is 6.0M, max 48.4M, 42.3M free.
Sep 4 17:30:10.902565 systemd-modules-load[194]: Inserted module 'overlay'
Sep 4 17:30:10.928855 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 17:30:10.937783 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 4 17:30:10.940529 systemd-modules-load[194]: Inserted module 'br_netfilter'
Sep 4 17:30:10.941513 kernel: Bridge firewalling registered
Sep 4 17:30:10.943217 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:30:10.944678 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:30:10.961061 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:30:10.965092 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 17:30:10.968098 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Sep 4 17:30:10.969722 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:30:10.980952 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:30:10.997908 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 4 17:30:10.998685 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:30:11.000542 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep 4 17:30:11.007553 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 17:30:11.010244 dracut-cmdline[225]: dracut-dracut-053
Sep 4 17:30:11.013517 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf
Sep 4 17:30:11.046110 systemd-resolved[235]: Positive Trust Anchors:
Sep 4 17:30:11.046131 systemd-resolved[235]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 17:30:11.046162 systemd-resolved[235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Sep 4 17:30:11.048704 systemd-resolved[235]: Defaulting to hostname 'linux'.
Sep 4 17:30:11.049994 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 17:30:11.055873 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:30:11.124823 kernel: SCSI subsystem initialized
Sep 4 17:30:11.135789 kernel: Loading iSCSI transport class v2.0-870.
Sep 4 17:30:11.148798 kernel: iscsi: registered transport (tcp)
Sep 4 17:30:11.174103 kernel: iscsi: registered transport (qla4xxx)
Sep 4 17:30:11.174142 kernel: QLogic iSCSI HBA Driver
Sep 4 17:30:11.230283 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 4 17:30:11.239007 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 4 17:30:11.265995 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 4 17:30:11.266023 kernel: device-mapper: uevent: version 1.0.3
Sep 4 17:30:11.267039 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 4 17:30:11.313788 kernel: raid6: avx2x4 gen() 30292 MB/s
Sep 4 17:30:11.330783 kernel: raid6: avx2x2 gen() 30849 MB/s
Sep 4 17:30:11.347874 kernel: raid6: avx2x1 gen() 25914 MB/s
Sep 4 17:30:11.347891 kernel: raid6: using algorithm avx2x2 gen() 30849 MB/s
Sep 4 17:30:11.365868 kernel: raid6: .... xor() 19972 MB/s, rmw enabled
Sep 4 17:30:11.365883 kernel: raid6: using avx2x2 recovery algorithm
Sep 4 17:30:11.390789 kernel: xor: automatically using best checksumming function   avx
Sep 4 17:30:11.564790 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 4 17:30:11.579972 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 17:30:11.588022 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:30:11.600282 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Sep 4 17:30:11.604865 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:30:11.616933 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 4 17:30:11.632472 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation
Sep 4 17:30:11.667031 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 17:30:11.682947 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 17:30:11.749884 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:30:11.759953 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 4 17:30:11.773411 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 4 17:30:11.775360 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 17:30:11.778284 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:30:11.779528 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 17:30:11.788893 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 4 17:30:11.794849 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 4 17:30:11.796871 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 4 17:30:11.802808 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 17:30:11.807781 kernel: cryptd: max_cpu_qlen set to 1000
Sep 4 17:30:11.812787 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 4 17:30:11.812821 kernel: libata version 3.00 loaded.
Sep 4 17:30:11.812834 kernel: GPT:9289727 != 19775487
Sep 4 17:30:11.812973 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 4 17:30:11.814026 kernel: GPT:9289727 != 19775487
Sep 4 17:30:11.814051 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 4 17:30:11.815780 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 17:30:11.817860 kernel: ata_piix 0000:00:01.1: version 2.13
Sep 4 17:30:11.818544 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 17:30:11.819138 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:30:11.823544 kernel: scsi host0: ata_piix
Sep 4 17:30:11.823757 kernel: scsi host1: ata_piix
Sep 4 17:30:11.823955 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14
Sep 4 17:30:11.823970 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15
Sep 4 17:30:11.825241 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:30:11.826635 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:30:11.827264 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:30:11.836288 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 4 17:30:11.836748 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:30:11.839793 kernel: AES CTR mode by8 optimization enabled
Sep 4 17:30:11.844787 kernel: BTRFS: device fsid d110be6f-93a3-451a-b365-11b5d04e0602 devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (467)
Sep 4 17:30:11.847843 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:30:11.854928 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (477)
Sep 4 17:30:11.870236 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 4 17:30:11.898416 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:30:11.904935 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 4 17:30:11.909313 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 4 17:30:11.910576 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 4 17:30:11.917568 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 4 17:30:11.931972 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 4 17:30:11.935157 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:30:11.959142 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:30:11.980796 kernel: ata2: found unknown device (class 0)
Sep 4 17:30:11.981836 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 4 17:30:11.983792 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 4 17:30:12.027805 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 4 17:30:12.028025 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 4 17:30:12.040932 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Sep 4 17:30:12.055494 disk-uuid[542]: Primary Header is updated.
Sep 4 17:30:12.055494 disk-uuid[542]: Secondary Entries is updated.
Sep 4 17:30:12.055494 disk-uuid[542]: Secondary Header is updated.
Sep 4 17:30:12.059067 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 17:30:12.062798 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 17:30:13.090632 disk-uuid[564]: The operation has completed successfully.
Sep 4 17:30:13.091979 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 17:30:13.120986 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 4 17:30:13.121116 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 4 17:30:13.146903 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 4 17:30:13.150609 sh[579]: Success
Sep 4 17:30:13.164839 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Sep 4 17:30:13.200739 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 4 17:30:13.219414 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 4 17:30:13.222546 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 4 17:30:13.235935 kernel: BTRFS info (device dm-0): first mount of filesystem d110be6f-93a3-451a-b365-11b5d04e0602
Sep 4 17:30:13.235969 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:30:13.235980 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 4 17:30:13.237956 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 4 17:30:13.237970 kernel: BTRFS info (device dm-0): using free space tree
Sep 4 17:30:13.242907 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 4 17:30:13.244439 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 4 17:30:13.252901 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 4 17:30:13.255352 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 4 17:30:13.264362 kernel: BTRFS info (device vda6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep 4 17:30:13.264396 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:30:13.264411 kernel: BTRFS info (device vda6): using free space tree
Sep 4 17:30:13.266795 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 4 17:30:13.277839 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 4 17:30:13.279553 kernel: BTRFS info (device vda6): last unmount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep 4 17:30:13.374378 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 17:30:13.388890 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 17:30:13.410399 systemd-networkd[757]: lo: Link UP
Sep 4 17:30:13.410410 systemd-networkd[757]: lo: Gained carrier
Sep 4 17:30:13.412243 systemd-networkd[757]: Enumeration completed
Sep 4 17:30:13.412658 systemd-networkd[757]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:30:13.412662 systemd-networkd[757]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 17:30:13.413030 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 17:30:13.413503 systemd-networkd[757]: eth0: Link UP
Sep 4 17:30:13.413507 systemd-networkd[757]: eth0: Gained carrier
Sep 4 17:30:13.413514 systemd-networkd[757]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:30:13.423711 systemd[1]: Reached target network.target - Network.
Sep 4 17:30:13.432808 systemd-networkd[757]: eth0: DHCPv4 address 10.0.0.161/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 4 17:30:13.469639 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 4 17:30:13.480903 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 4 17:30:13.528820 ignition[762]: Ignition 2.18.0
Sep 4 17:30:13.528833 ignition[762]: Stage: fetch-offline
Sep 4 17:30:13.528885 ignition[762]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:30:13.528897 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:30:13.529110 ignition[762]: parsed url from cmdline: ""
Sep 4 17:30:13.529114 ignition[762]: no config URL provided
Sep 4 17:30:13.529120 ignition[762]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 17:30:13.529130 ignition[762]: no config at "/usr/lib/ignition/user.ign"
Sep 4 17:30:13.529162 ignition[762]: op(1): [started] loading QEMU firmware config module
Sep 4 17:30:13.529169 ignition[762]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 4 17:30:13.537506 ignition[762]: op(1): [finished] loading QEMU firmware config module
Sep 4 17:30:13.576132 ignition[762]: parsing config with SHA512: a878867dbfe97e5491301c8f59a0547bfb7401248f5074465cd3c7fc59323173d281981ffaf5eec2b14e0620c0d90b491c18616368eb76cbcb989d0d323e90a7
Sep 4 17:30:13.579652 unknown[762]: fetched base config from "system"
Sep 4 17:30:13.579664 unknown[762]: fetched user config from "qemu"
Sep 4 17:30:13.580065 ignition[762]: fetch-offline: fetch-offline passed
Sep 4 17:30:13.580118 ignition[762]: Ignition finished successfully
Sep 4 17:30:13.585148 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 17:30:13.585784 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 4 17:30:13.596893 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 4 17:30:13.612732 ignition[773]: Ignition 2.18.0
Sep 4 17:30:13.612744 ignition[773]: Stage: kargs
Sep 4 17:30:13.612903 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:30:13.612915 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:30:13.616674 ignition[773]: kargs: kargs passed
Sep 4 17:30:13.616732 ignition[773]: Ignition finished successfully
Sep 4 17:30:13.621952 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 4 17:30:13.630964 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 4 17:30:13.643934 ignition[781]: Ignition 2.18.0
Sep 4 17:30:13.643946 ignition[781]: Stage: disks
Sep 4 17:30:13.644112 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:30:13.644124 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:30:13.645031 ignition[781]: disks: disks passed
Sep 4 17:30:13.645076 ignition[781]: Ignition finished successfully
Sep 4 17:30:13.651011 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 4 17:30:13.652263 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 4 17:30:13.654051 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 4 17:30:13.655263 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 17:30:13.657270 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 17:30:13.658289 systemd[1]: Reached target basic.target - Basic System.
Sep 4 17:30:13.667902 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 4 17:30:13.687023 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 4 17:30:13.784413 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 4 17:30:13.793892 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 4 17:30:13.903786 kernel: EXT4-fs (vda9): mounted filesystem 84a5cefa-c3c7-47d7-9305-7e6877f73628 r/w with ordered data mode. Quota mode: none.
Sep 4 17:30:13.904550 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 4 17:30:13.905742 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 4 17:30:13.917863 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 17:30:13.919682 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 4 17:30:13.920958 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 4 17:30:13.920998 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 4 17:30:13.932939 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (801)
Sep 4 17:30:13.932959 kernel: BTRFS info (device vda6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep 4 17:30:13.932971 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:30:13.932981 kernel: BTRFS info (device vda6): using free space tree
Sep 4 17:30:13.932992 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 4 17:30:13.921020 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 17:30:13.928225 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 4 17:30:13.934185 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 17:30:13.938322 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 4 17:30:13.978555 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory
Sep 4 17:30:13.983563 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory
Sep 4 17:30:13.988428 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory
Sep 4 17:30:13.993328 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 4 17:30:14.079207 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 4 17:30:14.088932 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 4 17:30:14.090664 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 4 17:30:14.097786 kernel: BTRFS info (device vda6): last unmount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep 4 17:30:14.116055 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 4 17:30:14.164197 ignition[919]: INFO : Ignition 2.18.0
Sep 4 17:30:14.164197 ignition[919]: INFO : Stage: mount
Sep 4 17:30:14.165923 ignition[919]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:30:14.165923 ignition[919]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:30:14.165923 ignition[919]: INFO : mount: mount passed
Sep 4 17:30:14.165923 ignition[919]: INFO : Ignition finished successfully
Sep 4 17:30:14.167459 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 4 17:30:14.179836 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 4 17:30:14.234911 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 4 17:30:14.247903 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 17:30:14.261783 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (929)
Sep 4 17:30:14.263869 kernel: BTRFS info (device vda6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep 4 17:30:14.263884 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:30:14.263895 kernel: BTRFS info (device vda6): using free space tree
Sep 4 17:30:14.266782 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 4 17:30:14.268254 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 17:30:14.289091 ignition[946]: INFO : Ignition 2.18.0
Sep 4 17:30:14.289091 ignition[946]: INFO : Stage: files
Sep 4 17:30:14.290797 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:30:14.290797 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:30:14.290797 ignition[946]: DEBUG : files: compiled without relabeling support, skipping
Sep 4 17:30:14.308776 ignition[946]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 4 17:30:14.308776 ignition[946]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 4 17:30:14.308776 ignition[946]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 4 17:30:14.308776 ignition[946]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 4 17:30:14.314371 ignition[946]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 4 17:30:14.314371 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 4 17:30:14.314371 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 4 17:30:14.308983 unknown[946]: wrote ssh authorized keys file for user: core
Sep 4 17:30:14.374773 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 4 17:30:14.499899 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 4 17:30:14.501852 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 4 17:30:14.501852 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 4 17:30:14.843045 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 4 17:30:14.981689 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 4 17:30:14.983777 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 4 17:30:14.983777 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 4 17:30:14.983777 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 17:30:14.983777 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 17:30:14.983777 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 17:30:14.983777 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 17:30:14.983777 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 17:30:14.983777 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 17:30:14.983777 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 17:30:14.983777 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 17:30:14.983777 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Sep 4 17:30:14.983777 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Sep 4 17:30:14.983777 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Sep 4 17:30:14.983777 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Sep 4 17:30:14.987874 systemd-networkd[757]: eth0: Gained IPv6LL
Sep 4 17:30:15.193994 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 4 17:30:15.577163 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Sep 4 17:30:15.577163 ignition[946]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 4 17:30:15.581413 ignition[946]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 17:30:15.581413 ignition[946]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 17:30:15.581413 ignition[946]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 4 17:30:15.581413 ignition[946]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 4 17:30:15.581413 ignition[946]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 4 17:30:15.581413 ignition[946]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 4 17:30:15.581413 ignition[946]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 4 17:30:15.581413 ignition[946]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 4 17:30:15.604953 ignition[946]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 4 17:30:15.609929 ignition[946]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 4 17:30:15.611590 ignition[946]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 4 17:30:15.611590 ignition[946]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 4 17:30:15.611590 ignition[946]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 4 17:30:15.611590 ignition[946]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 17:30:15.611590 ignition[946]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 17:30:15.611590 ignition[946]: INFO : files: files passed
Sep 4 17:30:15.611590 ignition[946]: INFO : Ignition finished successfully
Sep 4 17:30:15.613424 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 4 17:30:15.629920 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 4 17:30:15.632736 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 4 17:30:15.634710 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 4 17:30:15.634841 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 4 17:30:15.644427 initrd-setup-root-after-ignition[975]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 4 17:30:15.647542 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:30:15.647542 initrd-setup-root-after-ignition[977]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:30:15.650899 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:30:15.650736 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 17:30:15.652299 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 4 17:30:15.665899 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 4 17:30:15.691778 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 4 17:30:15.691922 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 4 17:30:15.694258 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 4 17:30:15.696367 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 4 17:30:15.698401 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 4 17:30:15.704971 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 4 17:30:15.718045 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 17:30:15.735944 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 4 17:30:15.746209 systemd[1]: Stopped target network.target - Network.
Sep 4 17:30:15.748391 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:30:15.750701 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:30:15.753137 systemd[1]: Stopped target timers.target - Timer Units.
Sep 4 17:30:15.754990 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 4 17:30:15.756019 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 17:30:15.758744 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 4 17:30:15.761029 systemd[1]: Stopped target basic.target - Basic System.
Sep 4 17:30:15.762960 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 4 17:30:15.765354 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 17:30:15.767655 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 4 17:30:15.770069 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 4 17:30:15.772525 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 17:30:15.775458 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 4 17:30:15.777722 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 4 17:30:15.779782 systemd[1]: Stopped target swap.target - Swaps.
Sep 4 17:30:15.781676 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 4 17:30:15.782926 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 17:30:15.785432 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:30:15.787890 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:30:15.790630 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 4 17:30:15.791742 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:30:15.794350 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 4 17:30:15.795442 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 4 17:30:15.797723 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 4 17:30:15.798812 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 17:30:15.801176 systemd[1]: Stopped target paths.target - Path Units.
Sep 4 17:30:15.802946 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 4 17:30:15.804056 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:30:15.806790 systemd[1]: Stopped target slices.target - Slice Units.
Sep 4 17:30:15.808608 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 4 17:30:15.810466 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 4 17:30:15.811374 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 17:30:15.813346 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 4 17:30:15.814250 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 17:30:15.816287 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 4 17:30:15.817461 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 17:30:15.820114 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 4 17:30:15.821127 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 4 17:30:15.839917 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 4 17:30:15.841797 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 4 17:30:15.842827 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:30:15.845963 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 4 17:30:15.848028 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 4 17:30:15.850413 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 4 17:30:15.852559 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 4 17:30:15.853050 systemd-networkd[757]: eth0: DHCPv6 lease lost
Sep 4 17:30:15.854467 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:30:15.857055 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 4 17:30:15.858124 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 17:30:15.860430 ignition[1001]: INFO : Ignition 2.18.0
Sep 4 17:30:15.860430 ignition[1001]: INFO : Stage: umount
Sep 4 17:30:15.860430 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:30:15.860430 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:30:15.865110 ignition[1001]: INFO : umount: umount passed
Sep 4 17:30:15.865110 ignition[1001]: INFO : Ignition finished successfully
Sep 4 17:30:15.868072 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 4 17:30:15.868256 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 4 17:30:15.872070 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 4 17:30:15.872253 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 4 17:30:15.874586 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 4 17:30:15.875449 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 4 17:30:15.875598 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 4 17:30:15.878900 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 4 17:30:15.879072 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 4 17:30:15.883187 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 4 17:30:15.883256 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:30:15.885727 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 4 17:30:15.885812 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 4 17:30:15.888229 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 4 17:30:15.888297 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 4 17:30:15.890301 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 4 17:30:15.890366 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 4 17:30:15.892408 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 4 17:30:15.892473 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 4 17:30:15.905868 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 4 17:30:15.906907 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 4 17:30:15.906978 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 17:30:15.909240 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 17:30:15.909312 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:30:15.911797 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 4 17:30:15.911866 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:30:15.914418 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 4 17:30:15.914486 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep 4 17:30:15.916878 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:30:15.926899 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 4 17:30:15.927069 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 4 17:30:15.937728 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 4 17:30:15.938008 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:30:15.938755 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 4 17:30:15.938823 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:30:15.941611 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 4 17:30:15.941665 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:30:15.942076 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 4 17:30:15.942141 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 17:30:15.942794 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 4 17:30:15.942843 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 4 17:30:15.943594 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 17:30:15.943646 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:30:15.963025 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 4 17:30:15.965215 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 4 17:30:15.965290 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:30:15.967489 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:30:15.967542 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:30:15.970840 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 4 17:30:15.970964 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 4 17:30:16.396157 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 4 17:30:16.396351 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 4 17:30:16.397214 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 4 17:30:16.400923 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 4 17:30:16.401004 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 4 17:30:16.417920 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 4 17:30:16.425133 systemd[1]: Switching root.
Sep 4 17:30:16.456111 systemd-journald[193]: Journal stopped
Sep 4 17:30:18.151421 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Sep 4 17:30:18.151501 kernel: SELinux: policy capability network_peer_controls=1
Sep 4 17:30:18.151522 kernel: SELinux: policy capability open_perms=1
Sep 4 17:30:18.151534 kernel: SELinux: policy capability extended_socket_class=1
Sep 4 17:30:18.151546 kernel: SELinux: policy capability always_check_network=0
Sep 4 17:30:18.151561 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 4 17:30:18.151572 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 4 17:30:18.151584 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 4 17:30:18.151595 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 4 17:30:18.151607 kernel: audit: type=1403 audit(1725471017.335:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 4 17:30:18.151626 systemd[1]: Successfully loaded SELinux policy in 40.387ms.
Sep 4 17:30:18.151653 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.984ms.
Sep 4 17:30:18.151666 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 4 17:30:18.151679 systemd[1]: Detected virtualization kvm.
Sep 4 17:30:18.151693 systemd[1]: Detected architecture x86-64.
Sep 4 17:30:18.151706 systemd[1]: Detected first boot.
Sep 4 17:30:18.151718 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 17:30:18.151730 zram_generator::config[1044]: No configuration found.
Sep 4 17:30:18.151743 systemd[1]: Populated /etc with preset unit settings.
Sep 4 17:30:18.151755 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 4 17:30:18.151788 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 4 17:30:18.151801 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 4 17:30:18.151817 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 4 17:30:18.151829 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 4 17:30:18.151841 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 4 17:30:18.151853 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 4 17:30:18.151865 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 4 17:30:18.151877 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 4 17:30:18.151890 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 4 17:30:18.151902 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 4 17:30:18.151914 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:30:18.151929 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:30:18.151942 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 4 17:30:18.151954 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 4 17:30:18.151967 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 4 17:30:18.151979 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 17:30:18.151991 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 4 17:30:18.152004 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:30:18.152016 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 4 17:30:18.152028 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 4 17:30:18.152043 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 4 17:30:18.152056 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 4 17:30:18.152068 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:30:18.152081 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 17:30:18.152093 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 17:30:18.152105 systemd[1]: Reached target swap.target - Swaps.
Sep 4 17:30:18.152117 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 4 17:30:18.152129 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 4 17:30:18.152144 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:30:18.152162 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:30:18.152174 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:30:18.152186 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 4 17:30:18.152198 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 4 17:30:18.152211 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 4 17:30:18.152223 systemd[1]: Mounting media.mount - External Media Directory...
Sep 4 17:30:18.152235 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:30:18.152247 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 4 17:30:18.152262 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 4 17:30:18.152274 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 4 17:30:18.152287 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 4 17:30:18.152304 systemd[1]: Reached target machines.target - Containers.
Sep 4 17:30:18.152316 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 4 17:30:18.152330 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:30:18.152342 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 17:30:18.152358 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 4 17:30:18.152373 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:30:18.152385 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 17:30:18.152397 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 17:30:18.152409 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 4 17:30:18.152422 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 17:30:18.152434 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 4 17:30:18.152446 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 4 17:30:18.152459 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 4 17:30:18.152474 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 4 17:30:18.152486 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 4 17:30:18.152498 kernel: loop: module loaded
Sep 4 17:30:18.152510 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 17:30:18.152529 kernel: fuse: init (API version 7.39)
Sep 4 17:30:18.152542 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 17:30:18.152554 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 17:30:18.152566 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 4 17:30:18.152578 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 17:30:18.152590 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 4 17:30:18.152607 systemd[1]: Stopped verity-setup.service.
Sep 4 17:30:18.152636 systemd-journald[1106]: Collecting audit messages is disabled.
Sep 4 17:30:18.152662 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:30:18.152675 systemd-journald[1106]: Journal started
Sep 4 17:30:18.152697 systemd-journald[1106]: Runtime Journal (/run/log/journal/705077f13db74bc88f7501ea593ccc39) is 6.0M, max 48.4M, 42.3M free.
Sep 4 17:30:17.903242 systemd[1]: Queued start job for default target multi-user.target.
Sep 4 17:30:17.921900 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 4 17:30:17.922344 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 4 17:30:18.155779 kernel: ACPI: bus type drm_connector registered
Sep 4 17:30:18.155805 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 17:30:18.158291 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 4 17:30:18.159467 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 4 17:30:18.160748 systemd[1]: Mounted media.mount - External Media Directory.
Sep 4 17:30:18.161961 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 4 17:30:18.163356 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 4 17:30:18.164834 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 4 17:30:18.166107 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:30:18.167662 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 4 17:30:18.167882 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 4 17:30:18.171146 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:30:18.171332 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:30:18.172784 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 17:30:18.172975 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 17:30:18.174483 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 17:30:18.174687 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 17:30:18.176262 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 4 17:30:18.176443 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 4 17:30:18.178060 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 17:30:18.178271 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 17:30:18.179720 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:30:18.181168 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 17:30:18.182721 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 4 17:30:18.198875 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 17:30:18.206898 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 4 17:30:18.211980 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 4 17:30:18.213377 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 4 17:30:18.213415 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 17:30:18.215571 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 4 17:30:18.218053 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 4 17:30:18.237888 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 4 17:30:18.239128 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:30:18.248012 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 4 17:30:18.251113 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 4 17:30:18.252428 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 17:30:18.255937 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 4 17:30:18.257289 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 17:30:18.261887 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 17:30:18.266914 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 4 17:30:18.270746 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 4 17:30:18.272151 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 4 17:30:18.276005 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:30:18.276954 systemd-journald[1106]: Time spent on flushing to /var/log/journal/705077f13db74bc88f7501ea593ccc39 is 15.287ms for 950 entries.
Sep 4 17:30:18.276954 systemd-journald[1106]: System Journal (/var/log/journal/705077f13db74bc88f7501ea593ccc39) is 8.0M, max 195.6M, 187.6M free.
Sep 4 17:30:18.622937 systemd-journald[1106]: Received client request to flush runtime journal.
Sep 4 17:30:18.623105 kernel: loop0: detected capacity change from 0 to 211296
Sep 4 17:30:18.623142 kernel: block loop0: the capability attribute has been deprecated.
Sep 4 17:30:18.623264 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 4 17:30:18.623285 kernel: loop1: detected capacity change from 0 to 80568
Sep 4 17:30:18.623302 kernel: loop2: detected capacity change from 0 to 139904
Sep 4 17:30:18.623327 kernel: loop3: detected capacity change from 0 to 211296
Sep 4 17:30:18.623347 kernel: loop4: detected capacity change from 0 to 80568
Sep 4 17:30:18.623363 kernel: loop5: detected capacity change from 0 to 139904
Sep 4 17:30:18.278441 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 4 17:30:18.285141 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 4 17:30:18.304292 udevadm[1162]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 4 17:30:18.317992 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:30:18.394133 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 4 17:30:18.396331 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 4 17:30:18.405984 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 4 17:30:18.412277 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 4 17:30:18.415869 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 4 17:30:18.454127 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 4 17:30:18.463846 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 17:30:18.558540 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Sep 4 17:30:18.558554 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Sep 4 17:30:18.569676 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:30:18.606511 (sd-merge)[1179]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 4 17:30:18.607089 (sd-merge)[1179]: Merged extensions into '/usr'.
Sep 4 17:30:18.612778 systemd[1]: Reloading requested from client PID 1153 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 4 17:30:18.612789 systemd[1]: Reloading...
Sep 4 17:30:18.683814 zram_generator::config[1202]: No configuration found.
Sep 4 17:30:18.803280 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:30:18.808395 ldconfig[1144]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 4 17:30:18.853531 systemd[1]: Reloading finished in 240 ms.
Sep 4 17:30:18.887004 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 4 17:30:18.888611 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 4 17:30:18.890107 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 4 17:30:18.891742 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 4 17:30:18.893308 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 4 17:30:18.934133 systemd[1]: Starting ensure-sysext.service...
Sep 4 17:30:18.936484 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Sep 4 17:30:18.947853 systemd[1]: Reloading requested from client PID 1243 ('systemctl') (unit ensure-sysext.service)...
Sep 4 17:30:18.947874 systemd[1]: Reloading...
Sep 4 17:30:18.970484 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 4 17:30:18.970910 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 4 17:30:18.971957 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 4 17:30:18.972280 systemd-tmpfiles[1245]: ACLs are not supported, ignoring.
Sep 4 17:30:18.972363 systemd-tmpfiles[1245]: ACLs are not supported, ignoring.
Sep 4 17:30:18.975800 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 17:30:18.975812 systemd-tmpfiles[1245]: Skipping /boot
Sep 4 17:30:18.993465 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 17:30:18.993485 systemd-tmpfiles[1245]: Skipping /boot
Sep 4 17:30:19.016928 zram_generator::config[1269]: No configuration found.
Sep 4 17:30:19.127667 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:30:19.176544 systemd[1]: Reloading finished in 228 ms.
Sep 4 17:30:19.200426 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep 4 17:30:19.218991 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 4 17:30:19.229566 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 4 17:30:19.232063 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 4 17:30:19.236081 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 17:30:19.239776 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 4 17:30:19.244133 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:30:19.244306 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:30:19.245430 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:30:19.249113 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 17:30:19.253710 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 17:30:19.254889 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:30:19.255040 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:30:19.256184 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:30:19.256418 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:30:19.266005 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 17:30:19.266235 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 17:30:19.268096 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 17:30:19.268289 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 17:30:19.278265 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:30:19.278474 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:30:19.289448 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:30:19.294781 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 17:30:19.304312 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 17:30:19.305444 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:30:19.308573 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 4 17:30:19.309725 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:30:19.312125 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 4 17:30:19.314309 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 4 17:30:19.317256 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:30:19.317544 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:30:19.326171 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 17:30:19.326441 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 17:30:19.341226 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 17:30:19.341441 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 17:30:19.341575 augenrules[1337]: No rules
Sep 4 17:30:19.343934 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 4 17:30:19.355799 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:30:19.356080 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:30:19.361976 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:30:19.367982 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 17:30:19.371461 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 17:30:19.374543 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 17:30:19.375997 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:30:19.376120 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:30:19.377234 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 4 17:30:19.379039 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:30:19.379220 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:30:19.381357 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 17:30:19.381974 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 17:30:19.384555 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 17:30:19.384942 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 17:30:19.392571 systemd[1]: Finished ensure-sysext.service.
Sep 4 17:30:19.397588 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 4 17:30:19.399524 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 17:30:19.399740 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 17:30:19.402691 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 17:30:19.402824 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 17:30:19.410020 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 4 17:30:19.411200 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 4 17:30:19.423247 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 4 17:30:19.437031 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:30:19.440550 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 4 17:30:19.451219 systemd-resolved[1314]: Positive Trust Anchors:
Sep 4 17:30:19.451242 systemd-resolved[1314]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 17:30:19.451276 systemd-resolved[1314]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Sep 4 17:30:19.454867 systemd-resolved[1314]: Defaulting to hostname 'linux'.
Sep 4 17:30:19.456776 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 17:30:19.462368 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:30:19.467968 systemd-udevd[1365]: Using default interface naming scheme 'v255'.
Sep 4 17:30:19.468907 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 4 17:30:19.470376 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 4 17:30:19.471964 systemd[1]: Reached target time-set.target - System Time Set.
Sep 4 17:30:19.485661 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:30:19.496173 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 17:30:19.537791 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1379)
Sep 4 17:30:19.542757 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 4 17:30:19.556114 systemd-networkd[1374]: lo: Link UP
Sep 4 17:30:19.556129 systemd-networkd[1374]: lo: Gained carrier
Sep 4 17:30:19.557958 systemd-networkd[1374]: Enumeration completed
Sep 4 17:30:19.558046 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 17:30:19.559338 systemd[1]: Reached target network.target - Network.
Sep 4 17:30:19.561511 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:30:19.561520 systemd-networkd[1374]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 17:30:19.562687 systemd-networkd[1374]: eth0: Link UP
Sep 4 17:30:19.562692 systemd-networkd[1374]: eth0: Gained carrier
Sep 4 17:30:19.562705 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:30:19.598786 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1380)
Sep 4 17:30:19.625224 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:30:19.627251 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 4 17:30:19.629834 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 4 17:30:19.633789 kernel: ACPI: button: Power Button [PWRF]
Sep 4 17:30:19.637830 systemd-networkd[1374]: eth0: DHCPv4 address 10.0.0.161/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 4 17:30:19.641375 systemd-timesyncd[1363]: Network configuration changed, trying to establish connection.
Sep 4 17:30:19.642743 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 4 17:30:20.782058 systemd-timesyncd[1363]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 4 17:30:20.782102 systemd-timesyncd[1363]: Initial clock synchronization to Wed 2024-09-04 17:30:20.781945 UTC.
Sep 4 17:30:20.782150 systemd-resolved[1314]: Clock change detected. Flushing caches.
Sep 4 17:30:20.791289 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 4 17:30:20.816818 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Sep 4 17:30:20.818041 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:30:20.819680 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 4 17:30:20.822783 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Sep 4 17:30:20.835786 kernel: mousedev: PS/2 mouse device common for all mice
Sep 4 17:30:20.956833 kernel: kvm_amd: TSC scaling supported
Sep 4 17:30:20.957034 kernel: kvm_amd: Nested Virtualization enabled
Sep 4 17:30:20.957055 kernel: kvm_amd: Nested Paging enabled
Sep 4 17:30:20.957076 kernel: kvm_amd: LBR virtualization supported
Sep 4 17:30:20.957095 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 4 17:30:20.957135 kernel: kvm_amd: Virtual GIF supported
Sep 4 17:30:20.971684 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:30:20.982985 kernel: EDAC MC: Ver: 3.0.0
Sep 4 17:30:21.017419 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 4 17:30:21.029982 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 4 17:30:21.038678 lvm[1412]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 17:30:21.074274 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 4 17:30:21.075861 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:30:21.077008 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 17:30:21.078199 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 4 17:30:21.079522 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 4 17:30:21.081015 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 4 17:30:21.082239 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 4 17:30:21.083547 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 4 17:30:21.084834 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 4 17:30:21.084866 systemd[1]: Reached target paths.target - Path Units.
Sep 4 17:30:21.085812 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 17:30:21.087499 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 4 17:30:21.090440 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 4 17:30:21.102528 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 4 17:30:21.104924 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 4 17:30:21.106522 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 4 17:30:21.107691 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 17:30:21.108665 systemd[1]: Reached target basic.target - Basic System.
Sep 4 17:30:21.109652 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 4 17:30:21.109682 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 4 17:30:21.110687 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 4 17:30:21.112757 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 4 17:30:21.117075 lvm[1416]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 17:30:21.117889 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 4 17:30:21.121855 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 4 17:30:21.122956 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 4 17:30:21.125794 jq[1419]: false
Sep 4 17:30:21.126970 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 4 17:30:21.129885 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 4 17:30:21.133240 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 4 17:30:21.136947 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 4 17:30:21.143936 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 4 17:30:21.145457 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 4 17:30:21.145923 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 4 17:30:21.148969 systemd[1]: Starting update-engine.service - Update Engine...
Sep 4 17:30:21.151897 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 4 17:30:21.153751 dbus-daemon[1418]: [system] SELinux support is enabled
Sep 4 17:30:21.154066 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 4 17:30:21.155849 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 4 17:30:21.156305 extend-filesystems[1420]: Found loop3
Sep 4 17:30:21.158256 extend-filesystems[1420]: Found loop4
Sep 4 17:30:21.158256 extend-filesystems[1420]: Found loop5
Sep 4 17:30:21.158256 extend-filesystems[1420]: Found sr0
Sep 4 17:30:21.158256 extend-filesystems[1420]: Found vda
Sep 4 17:30:21.158256 extend-filesystems[1420]: Found vda1
Sep 4 17:30:21.158256 extend-filesystems[1420]: Found vda2
Sep 4 17:30:21.158256 extend-filesystems[1420]: Found vda3
Sep 4 17:30:21.158256 extend-filesystems[1420]: Found usr
Sep 4 17:30:21.158256 extend-filesystems[1420]: Found vda4
Sep 4 17:30:21.158256 extend-filesystems[1420]: Found vda6
Sep 4 17:30:21.158256 extend-filesystems[1420]: Found vda7
Sep 4 17:30:21.158256 extend-filesystems[1420]: Found vda9
Sep 4 17:30:21.158256 extend-filesystems[1420]: Checking size of /dev/vda9
Sep 4 17:30:21.165059 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 4 17:30:21.165277 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 4 17:30:21.169121 jq[1433]: true
Sep 4 17:30:21.165623 systemd[1]: motdgen.service: Deactivated successfully.
Sep 4 17:30:21.165848 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 4 17:30:21.168317 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 4 17:30:21.169930 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 4 17:30:21.179321 (ntainerd)[1440]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 4 17:30:21.183646 jq[1438]: true
Sep 4 17:30:21.192129 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 4 17:30:21.195430 extend-filesystems[1420]: Resized partition /dev/vda9
Sep 4 17:30:21.197593 update_engine[1430]: I0904 17:30:21.193982 1430 main.cc:92] Flatcar Update Engine starting
Sep 4 17:30:21.192170 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 4 17:30:21.193540 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 4 17:30:21.193562 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 4 17:30:21.200129 tar[1437]: linux-amd64/helm
Sep 4 17:30:21.204913 extend-filesystems[1456]: resize2fs 1.47.0 (5-Feb-2023)
Sep 4 17:30:21.207979 update_engine[1430]: I0904 17:30:21.206063 1430 update_check_scheduler.cc:74] Next update check in 8m10s
Sep 4 17:30:21.204931 systemd[1]: Started update-engine.service - Update Engine.
Sep 4 17:30:21.215959 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 4 17:30:21.261571 systemd-logind[1428]: Watching system buttons on /dev/input/event1 (Power Button)
Sep 4 17:30:21.261598 systemd-logind[1428]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 4 17:30:21.262429 systemd-logind[1428]: New seat seat0.
Sep 4 17:30:21.263886 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 4 17:30:21.272440 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 4 17:30:21.286758 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1373)
Sep 4 17:30:21.495126 sshd_keygen[1455]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 4 17:30:21.507813 locksmithd[1457]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 4 17:30:21.529172 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 4 17:30:21.543003 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 4 17:30:21.554284 systemd[1]: issuegen.service: Deactivated successfully.
Sep 4 17:30:21.554572 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 4 17:30:21.557451 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 4 17:30:21.621801 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 4 17:30:21.674023 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 4 17:30:21.685065 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 4 17:30:21.687272 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 4 17:30:22.211357 containerd[1440]: time="2024-09-04T17:30:22.210738122Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17
Sep 4 17:30:21.688532 systemd[1]: Reached target getty.target - Login Prompts.
Sep 4 17:30:21.885974 systemd-networkd[1374]: eth0: Gained IPv6LL
Sep 4 17:30:21.890094 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 4 17:30:21.892034 systemd[1]: Reached target network-online.target - Network is Online.
Sep 4 17:30:21.910006 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 4 17:30:21.960937 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:30:21.963349 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 4 17:30:21.987333 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 4 17:30:21.987582 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 4 17:30:21.989077 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 4 17:30:22.227352 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 4 17:30:22.247151 containerd[1440]: time="2024-09-04T17:30:22.247114212Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 4 17:30:22.247231 containerd[1440]: time="2024-09-04T17:30:22.247157593Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 4 17:30:22.249046 containerd[1440]: time="2024-09-04T17:30:22.248894703Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.48-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:30:22.249046 containerd[1440]: time="2024-09-04T17:30:22.248943034Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 4 17:30:22.249229 containerd[1440]: time="2024-09-04T17:30:22.249202281Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:30:22.249229 containerd[1440]: time="2024-09-04T17:30:22.249223360Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 4 17:30:22.249346 containerd[1440]: time="2024-09-04T17:30:22.249329960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 4 17:30:22.249416 containerd[1440]: time="2024-09-04T17:30:22.249400743Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:30:22.249436 containerd[1440]: time="2024-09-04T17:30:22.249415831Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 4 17:30:22.249542 containerd[1440]: time="2024-09-04T17:30:22.249527331Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 4 17:30:22.249856 containerd[1440]: time="2024-09-04T17:30:22.249838194Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 4 17:30:22.249881 containerd[1440]: time="2024-09-04T17:30:22.249859504Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Sep 4 17:30:22.249881 containerd[1440]: time="2024-09-04T17:30:22.249869353Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 4 17:30:22.250015 containerd[1440]: time="2024-09-04T17:30:22.249996161Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:30:22.250015 containerd[1440]: time="2024-09-04T17:30:22.250013413Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 4 17:30:22.250093 containerd[1440]: time="2024-09-04T17:30:22.250078004Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Sep 4 17:30:22.250115 containerd[1440]: time="2024-09-04T17:30:22.250091900Z" level=info msg="metadata content store policy set" policy=shared
Sep 4 17:30:22.260244 extend-filesystems[1456]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 4 17:30:22.260244 extend-filesystems[1456]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 4 17:30:22.260244 extend-filesystems[1456]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 4 17:30:22.264037 extend-filesystems[1420]: Resized filesystem in /dev/vda9
Sep 4 17:30:22.266350 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 4 17:30:22.266607 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 4 17:30:22.350249 tar[1437]: linux-amd64/LICENSE
Sep 4 17:30:22.350709 tar[1437]: linux-amd64/README.md
Sep 4 17:30:22.366022 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 4 17:30:22.507925 bash[1472]: Updated "/home/core/.ssh/authorized_keys"
Sep 4 17:30:22.509929 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 4 17:30:22.511944 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 4 17:30:22.541133 containerd[1440]: time="2024-09-04T17:30:22.541082477Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 4 17:30:22.541180 containerd[1440]: time="2024-09-04T17:30:22.541135376Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 4 17:30:22.541180 containerd[1440]: time="2024-09-04T17:30:22.541150044Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 4 17:30:22.541216 containerd[1440]: time="2024-09-04T17:30:22.541187624Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 4 17:30:22.541216 containerd[1440]: time="2024-09-04T17:30:22.541203294Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 4 17:30:22.541286 containerd[1440]: time="2024-09-04T17:30:22.541216198Z" level=info msg="NRI interface is disabled by configuration."
Sep 4 17:30:22.541286 containerd[1440]: time="2024-09-04T17:30:22.541247817Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 4 17:30:22.541444 containerd[1440]: time="2024-09-04T17:30:22.541407828Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 4 17:30:22.541466 containerd[1440]: time="2024-09-04T17:30:22.541435660Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 4 17:30:22.541466 containerd[1440]: time="2024-09-04T17:30:22.541455808Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 4 17:30:22.541507 containerd[1440]: time="2024-09-04T17:30:22.541469393Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 4 17:30:22.541507 containerd[1440]: time="2024-09-04T17:30:22.541483901Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 4 17:30:22.541548 containerd[1440]: time="2024-09-04T17:30:22.541506763Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 4 17:30:22.541548 containerd[1440]: time="2024-09-04T17:30:22.541524006Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 4 17:30:22.541548 containerd[1440]: time="2024-09-04T17:30:22.541540256Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 4 17:30:22.541597 containerd[1440]: time="2024-09-04T17:30:22.541554042Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 4 17:30:22.541597 containerd[1440]: time="2024-09-04T17:30:22.541566666Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 4 17:30:22.541597 containerd[1440]: time="2024-09-04T17:30:22.541589018Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 4 17:30:22.541656 containerd[1440]: time="2024-09-04T17:30:22.541606240Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 4 17:30:22.541817 containerd[1440]: time="2024-09-04T17:30:22.541794213Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 4 17:30:22.542111 containerd[1440]: time="2024-09-04T17:30:22.542074319Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 4 17:30:22.542111 containerd[1440]: time="2024-09-04T17:30:22.542115586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 4 17:30:22.542287 containerd[1440]: time="2024-09-04T17:30:22.542135874Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 4 17:30:22.542287 containerd[1440]: time="2024-09-04T17:30:22.542192170Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 4 17:30:22.542287 containerd[1440]: time="2024-09-04T17:30:22.542263534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 4 17:30:22.542287 containerd[1440]: time="2024-09-04T17:30:22.542283040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 4 17:30:22.542391 containerd[1440]: time="2024-09-04T17:30:22.542298279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 4 17:30:22.542391 containerd[1440]: time="2024-09-04T17:30:22.542325370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 4 17:30:22.542391 containerd[1440]: time="2024-09-04T17:30:22.542342432Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 4 17:30:22.542391 containerd[1440]: time="2024-09-04T17:30:22.542357741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 4 17:30:22.542391 containerd[1440]: time="2024-09-04T17:30:22.542378099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 4 17:30:22.542498 containerd[1440]: time="2024-09-04T17:30:22.542395031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 4 17:30:22.542498 containerd[1440]: time="2024-09-04T17:30:22.542412052Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 4 17:30:22.542673 containerd[1440]: time="2024-09-04T17:30:22.542638297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 4 17:30:22.542673 containerd[1440]: time="2024-09-04T17:30:22.542666600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 4 17:30:22.542716 containerd[1440]: time="2024-09-04T17:30:22.542680616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 4 17:30:22.542716 containerd[1440]: time="2024-09-04T17:30:22.542695945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 4 17:30:22.542716 containerd[1440]: time="2024-09-04T17:30:22.542711915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 4 17:30:22.542787 containerd[1440]: time="2024-09-04T17:30:22.542730250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 4 17:30:22.542787 containerd[1440]: time="2024-09-04T17:30:22.542745548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 4 17:30:22.542787 containerd[1440]: time="2024-09-04T17:30:22.542759374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 4 17:30:22.543124 containerd[1440]: time="2024-09-04T17:30:22.543064056Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 4 17:30:22.543124 containerd[1440]: time="2024-09-04T17:30:22.543120101Z" level=info msg="Connect containerd service"
Sep 4 17:30:22.543124 containerd[1440]: time="2024-09-04T17:30:22.543143625Z" level=info msg="using legacy CRI server"
Sep 4 17:30:22.543124 containerd[1440]: time="2024-09-04T17:30:22.543150188Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 4 17:30:22.543412 containerd[1440]: time="2024-09-04T17:30:22.543225469Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 4 17:30:22.543954 containerd[1440]: time="2024-09-04T17:30:22.543923048Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 4 17:30:22.544003 containerd[1440]: time="2024-09-04T17:30:22.543982009Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 4 17:30:22.544025 containerd[1440]: time="2024-09-04T17:30:22.544007767Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 4 17:30:22.544025 containerd[1440]: time="2024-09-04T17:30:22.544022375Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 4 17:30:22.544082 containerd[1440]: time="2024-09-04T17:30:22.544025491Z" level=info msg="Start subscribing containerd event"
Sep 4 17:30:22.544289 containerd[1440]: time="2024-09-04T17:30:22.544104308Z" level=info msg="Start recovering state"
Sep 4 17:30:22.544289 containerd[1440]: time="2024-09-04T17:30:22.544167968Z" level=info msg="Start event monitor"
Sep 4 17:30:22.544289 containerd[1440]: time="2024-09-04T17:30:22.544184409Z" level=info msg="Start snapshots syncer"
Sep 4 17:30:22.544289 containerd[1440]: time="2024-09-04T17:30:22.544192774Z" level=info msg="Start cni network conf syncer for default"
Sep 4 17:30:22.544289 containerd[1440]: time="2024-09-04T17:30:22.544199888Z" level=info msg="Start streaming server"
Sep 4 17:30:22.544377 containerd[1440]: time="2024-09-04T17:30:22.544034878Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 4 17:30:22.544580 containerd[1440]: time="2024-09-04T17:30:22.544552960Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 4 17:30:22.544630 containerd[1440]: time="2024-09-04T17:30:22.544614225Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 4 17:30:22.545002 containerd[1440]: time="2024-09-04T17:30:22.544672024Z" level=info msg="containerd successfully booted in 0.517052s"
Sep 4 17:30:22.544734 systemd[1]: Started containerd.service - containerd container runtime.
Sep 4 17:30:23.517205 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:30:23.518991 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 4 17:30:23.520334 systemd[1]: Startup finished in 944ms (kernel) + 6.631s (initrd) + 5.084s (userspace) = 12.660s. Sep 4 17:30:23.543231 (kubelet)[1531]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:30:24.319257 kubelet[1531]: E0904 17:30:24.319139 1531 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:30:24.324059 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:30:24.324281 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:30:24.324684 systemd[1]: kubelet.service: Consumed 1.820s CPU time. Sep 4 17:30:30.880507 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 4 17:30:30.881889 systemd[1]: Started sshd@0-10.0.0.161:22-10.0.0.1:51948.service - OpenSSH per-connection server daemon (10.0.0.1:51948). Sep 4 17:30:30.931552 sshd[1546]: Accepted publickey for core from 10.0.0.1 port 51948 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:30:30.933378 sshd[1546]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:30:30.943694 systemd-logind[1428]: New session 1 of user core. Sep 4 17:30:30.945630 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 4 17:30:30.959065 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 4 17:30:30.973202 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 4 17:30:30.984015 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Sep 4 17:30:30.987465 (systemd)[1550]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:30:31.113349 systemd[1550]: Queued start job for default target default.target. Sep 4 17:30:31.123167 systemd[1550]: Created slice app.slice - User Application Slice. Sep 4 17:30:31.123196 systemd[1550]: Reached target paths.target - Paths. Sep 4 17:30:31.123211 systemd[1550]: Reached target timers.target - Timers. Sep 4 17:30:31.124889 systemd[1550]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 4 17:30:31.137023 systemd[1550]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 4 17:30:31.137212 systemd[1550]: Reached target sockets.target - Sockets. Sep 4 17:30:31.137246 systemd[1550]: Reached target basic.target - Basic System. Sep 4 17:30:31.137303 systemd[1550]: Reached target default.target - Main User Target. Sep 4 17:30:31.137350 systemd[1550]: Startup finished in 142ms. Sep 4 17:30:31.137622 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 4 17:30:31.139415 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 4 17:30:31.203649 systemd[1]: Started sshd@1-10.0.0.161:22-10.0.0.1:51954.service - OpenSSH per-connection server daemon (10.0.0.1:51954). Sep 4 17:30:31.246943 sshd[1561]: Accepted publickey for core from 10.0.0.1 port 51954 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:30:31.248544 sshd[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:30:31.252546 systemd-logind[1428]: New session 2 of user core. Sep 4 17:30:31.260898 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 4 17:30:31.315188 sshd[1561]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:31.323424 systemd[1]: sshd@1-10.0.0.161:22-10.0.0.1:51954.service: Deactivated successfully. Sep 4 17:30:31.325099 systemd[1]: session-2.scope: Deactivated successfully. 
Sep 4 17:30:31.326832 systemd-logind[1428]: Session 2 logged out. Waiting for processes to exit. Sep 4 17:30:31.342147 systemd[1]: Started sshd@2-10.0.0.161:22-10.0.0.1:51964.service - OpenSSH per-connection server daemon (10.0.0.1:51964). Sep 4 17:30:31.343058 systemd-logind[1428]: Removed session 2. Sep 4 17:30:31.374453 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 51964 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:30:31.375879 sshd[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:30:31.379521 systemd-logind[1428]: New session 3 of user core. Sep 4 17:30:31.388886 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 4 17:30:31.438483 sshd[1568]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:31.454226 systemd[1]: sshd@2-10.0.0.161:22-10.0.0.1:51964.service: Deactivated successfully. Sep 4 17:30:31.456061 systemd[1]: session-3.scope: Deactivated successfully. Sep 4 17:30:31.457958 systemd-logind[1428]: Session 3 logged out. Waiting for processes to exit. Sep 4 17:30:31.459140 systemd[1]: Started sshd@3-10.0.0.161:22-10.0.0.1:51968.service - OpenSSH per-connection server daemon (10.0.0.1:51968). Sep 4 17:30:31.459828 systemd-logind[1428]: Removed session 3. Sep 4 17:30:31.497333 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 51968 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:30:31.498812 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:30:31.502422 systemd-logind[1428]: New session 4 of user core. Sep 4 17:30:31.511888 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 4 17:30:31.567457 sshd[1576]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:31.579557 systemd[1]: sshd@3-10.0.0.161:22-10.0.0.1:51968.service: Deactivated successfully. Sep 4 17:30:31.581494 systemd[1]: session-4.scope: Deactivated successfully. 
Sep 4 17:30:31.583289 systemd-logind[1428]: Session 4 logged out. Waiting for processes to exit. Sep 4 17:30:31.584588 systemd[1]: Started sshd@4-10.0.0.161:22-10.0.0.1:51974.service - OpenSSH per-connection server daemon (10.0.0.1:51974). Sep 4 17:30:31.585461 systemd-logind[1428]: Removed session 4. Sep 4 17:30:31.623684 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 51974 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:30:31.625246 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:30:31.629250 systemd-logind[1428]: New session 5 of user core. Sep 4 17:30:31.644899 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 4 17:30:31.704444 sudo[1586]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 4 17:30:31.704762 sudo[1586]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 17:30:31.720222 sudo[1586]: pam_unix(sudo:session): session closed for user root Sep 4 17:30:31.722086 sshd[1583]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:31.737689 systemd[1]: sshd@4-10.0.0.161:22-10.0.0.1:51974.service: Deactivated successfully. Sep 4 17:30:31.739606 systemd[1]: session-5.scope: Deactivated successfully. Sep 4 17:30:31.741175 systemd-logind[1428]: Session 5 logged out. Waiting for processes to exit. Sep 4 17:30:31.752008 systemd[1]: Started sshd@5-10.0.0.161:22-10.0.0.1:51978.service - OpenSSH per-connection server daemon (10.0.0.1:51978). Sep 4 17:30:31.752839 systemd-logind[1428]: Removed session 5. Sep 4 17:30:31.786296 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 51978 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:30:31.787788 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:30:31.791676 systemd-logind[1428]: New session 6 of user core. Sep 4 17:30:31.801890 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 4 17:30:31.856945 sudo[1596]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 4 17:30:31.857348 sudo[1596]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 17:30:31.861069 sudo[1596]: pam_unix(sudo:session): session closed for user root Sep 4 17:30:31.867320 sudo[1595]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 4 17:30:31.867623 sudo[1595]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 17:30:31.887998 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 4 17:30:31.889701 auditctl[1599]: No rules Sep 4 17:30:31.891052 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 17:30:31.891361 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 4 17:30:31.893187 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 4 17:30:31.925180 augenrules[1617]: No rules Sep 4 17:30:31.927272 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 4 17:30:31.928710 sudo[1595]: pam_unix(sudo:session): session closed for user root Sep 4 17:30:31.930916 sshd[1591]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:31.943872 systemd[1]: sshd@5-10.0.0.161:22-10.0.0.1:51978.service: Deactivated successfully. Sep 4 17:30:31.945754 systemd[1]: session-6.scope: Deactivated successfully. Sep 4 17:30:31.947579 systemd-logind[1428]: Session 6 logged out. Waiting for processes to exit. Sep 4 17:30:31.958031 systemd[1]: Started sshd@6-10.0.0.161:22-10.0.0.1:51994.service - OpenSSH per-connection server daemon (10.0.0.1:51994). Sep 4 17:30:31.958872 systemd-logind[1428]: Removed session 6. 
Sep 4 17:30:31.991929 sshd[1625]: Accepted publickey for core from 10.0.0.1 port 51994 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:30:31.993565 sshd[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:30:31.997942 systemd-logind[1428]: New session 7 of user core. Sep 4 17:30:32.012886 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 4 17:30:32.066452 sudo[1628]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 4 17:30:32.066764 sudo[1628]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 17:30:32.180013 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 4 17:30:32.180185 (dockerd)[1638]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 4 17:30:32.717549 dockerd[1638]: time="2024-09-04T17:30:32.717467399Z" level=info msg="Starting up" Sep 4 17:30:32.755992 systemd[1]: var-lib-docker-metacopy\x2dcheck3596682473-merged.mount: Deactivated successfully. Sep 4 17:30:32.779011 dockerd[1638]: time="2024-09-04T17:30:32.778941422Z" level=info msg="Loading containers: start." Sep 4 17:30:32.899793 kernel: Initializing XFRM netlink socket Sep 4 17:30:32.985294 systemd-networkd[1374]: docker0: Link UP Sep 4 17:30:33.009503 dockerd[1638]: time="2024-09-04T17:30:33.009448354Z" level=info msg="Loading containers: done." 
Sep 4 17:30:33.078930 dockerd[1638]: time="2024-09-04T17:30:33.078879932Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 4 17:30:33.079135 dockerd[1638]: time="2024-09-04T17:30:33.079085608Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Sep 4 17:30:33.079234 dockerd[1638]: time="2024-09-04T17:30:33.079210743Z" level=info msg="Daemon has completed initialization" Sep 4 17:30:33.112736 dockerd[1638]: time="2024-09-04T17:30:33.112665581Z" level=info msg="API listen on /run/docker.sock" Sep 4 17:30:33.112888 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 4 17:30:33.744794 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3786687032-merged.mount: Deactivated successfully. Sep 4 17:30:34.035750 containerd[1440]: time="2024-09-04T17:30:34.035706130Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.8\"" Sep 4 17:30:34.518587 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 4 17:30:34.528037 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:30:34.719192 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 17:30:34.725565 (kubelet)[1789]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:30:34.875147 kubelet[1789]: E0904 17:30:34.874938 1789 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:30:34.883214 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:30:34.883447 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:30:34.990056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1184048814.mount: Deactivated successfully. Sep 4 17:30:36.365355 containerd[1440]: time="2024-09-04T17:30:36.365300208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:36.366205 containerd[1440]: time="2024-09-04T17:30:36.366172836Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.8: active requests=0, bytes read=35232949" Sep 4 17:30:36.367516 containerd[1440]: time="2024-09-04T17:30:36.367468899Z" level=info msg="ImageCreate event name:\"sha256:ea7e9c4af6a6f4f2fc0b86f81d102bf60167b3cbd4ce7d1545833b0283ab80b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:36.370148 containerd[1440]: time="2024-09-04T17:30:36.370119683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6f72fa926c9b05e10629fe1a092fd28dcd65b4fdfd0cc7bd55f85a57a6ba1fa5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:36.371318 containerd[1440]: time="2024-09-04T17:30:36.371277176Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.8\" with image id 
\"sha256:ea7e9c4af6a6f4f2fc0b86f81d102bf60167b3cbd4ce7d1545833b0283ab80b7\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6f72fa926c9b05e10629fe1a092fd28dcd65b4fdfd0cc7bd55f85a57a6ba1fa5\", size \"35229749\" in 2.335527755s" Sep 4 17:30:36.371367 containerd[1440]: time="2024-09-04T17:30:36.371320547Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.8\" returns image reference \"sha256:ea7e9c4af6a6f4f2fc0b86f81d102bf60167b3cbd4ce7d1545833b0283ab80b7\"" Sep 4 17:30:36.396076 containerd[1440]: time="2024-09-04T17:30:36.396024922Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.8\"" Sep 4 17:30:39.628311 containerd[1440]: time="2024-09-04T17:30:39.628224207Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:39.629163 containerd[1440]: time="2024-09-04T17:30:39.629083600Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.8: active requests=0, bytes read=32206206" Sep 4 17:30:39.630493 containerd[1440]: time="2024-09-04T17:30:39.630440556Z" level=info msg="ImageCreate event name:\"sha256:b469e8ed7312f97f28340218ee5884606f9998ad73d3692a6078a2692253589a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:39.633523 containerd[1440]: time="2024-09-04T17:30:39.633490440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6f27d63ded20614c68554b477cd7a78eda78a498a92bfe8935cf964ca5b74d0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:39.634738 containerd[1440]: time="2024-09-04T17:30:39.634702285Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.8\" with image id \"sha256:b469e8ed7312f97f28340218ee5884606f9998ad73d3692a6078a2692253589a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.8\", repo digest 
\"registry.k8s.io/kube-controller-manager@sha256:6f27d63ded20614c68554b477cd7a78eda78a498a92bfe8935cf964ca5b74d0b\", size \"33756152\" in 3.238632719s" Sep 4 17:30:39.634793 containerd[1440]: time="2024-09-04T17:30:39.634739384Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.8\" returns image reference \"sha256:b469e8ed7312f97f28340218ee5884606f9998ad73d3692a6078a2692253589a\"" Sep 4 17:30:39.662794 containerd[1440]: time="2024-09-04T17:30:39.662724146Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.8\"" Sep 4 17:30:41.166746 containerd[1440]: time="2024-09-04T17:30:41.166655522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:41.167782 containerd[1440]: time="2024-09-04T17:30:41.167697758Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.8: active requests=0, bytes read=17321507" Sep 4 17:30:41.169236 containerd[1440]: time="2024-09-04T17:30:41.169206510Z" level=info msg="ImageCreate event name:\"sha256:e932331104a0d08ad33e8c298f0c2a9a23378869c8fc0915df299b611c196f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:41.172013 containerd[1440]: time="2024-09-04T17:30:41.171975146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:da74a66675d95e39ec25da5e70729da746d0fa0b15ee0da872ac980519bc28bd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:41.172931 containerd[1440]: time="2024-09-04T17:30:41.172876177Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.8\" with image id \"sha256:e932331104a0d08ad33e8c298f0c2a9a23378869c8fc0915df299b611c196f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:da74a66675d95e39ec25da5e70729da746d0fa0b15ee0da872ac980519bc28bd\", size \"18871471\" in 1.510111596s" Sep 4 17:30:41.172931 containerd[1440]: 
time="2024-09-04T17:30:41.172909410Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.8\" returns image reference \"sha256:e932331104a0d08ad33e8c298f0c2a9a23378869c8fc0915df299b611c196f21\"" Sep 4 17:30:41.204728 containerd[1440]: time="2024-09-04T17:30:41.204665509Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.8\"" Sep 4 17:30:42.412202 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1143756770.mount: Deactivated successfully. Sep 4 17:30:44.162662 containerd[1440]: time="2024-09-04T17:30:44.162552799Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:44.218803 containerd[1440]: time="2024-09-04T17:30:44.218714061Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.8: active requests=0, bytes read=28600380" Sep 4 17:30:44.291036 containerd[1440]: time="2024-09-04T17:30:44.290944923Z" level=info msg="ImageCreate event name:\"sha256:b6e10835ec72a48862d901a23b7c4c924300c3f6cfe89cd6031533b67e1f4e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:44.363150 containerd[1440]: time="2024-09-04T17:30:44.363057433Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:559a093080f70ca863922f5e4bb90d6926d52653a91edb5b72c685ebb65f1858\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:44.363677 containerd[1440]: time="2024-09-04T17:30:44.363620209Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.8\" with image id \"sha256:b6e10835ec72a48862d901a23b7c4c924300c3f6cfe89cd6031533b67e1f4e54\", repo tag \"registry.k8s.io/kube-proxy:v1.29.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:559a093080f70ca863922f5e4bb90d6926d52653a91edb5b72c685ebb65f1858\", size \"28599399\" in 3.158910717s" Sep 4 17:30:44.363677 containerd[1440]: time="2024-09-04T17:30:44.363653672Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.8\" 
returns image reference \"sha256:b6e10835ec72a48862d901a23b7c4c924300c3f6cfe89cd6031533b67e1f4e54\"" Sep 4 17:30:44.389640 containerd[1440]: time="2024-09-04T17:30:44.389601961Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Sep 4 17:30:45.018694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 4 17:30:45.027962 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:30:45.177168 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:30:45.182383 (kubelet)[1898]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:30:45.240336 kubelet[1898]: E0904 17:30:45.240184 1898 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:30:45.245230 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:30:45.245460 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:30:46.325611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2569350641.mount: Deactivated successfully. 
Sep 4 17:30:47.603108 containerd[1440]: time="2024-09-04T17:30:47.603026821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:47.603844 containerd[1440]: time="2024-09-04T17:30:47.603762281Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Sep 4 17:30:47.605234 containerd[1440]: time="2024-09-04T17:30:47.605203866Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:47.608289 containerd[1440]: time="2024-09-04T17:30:47.608218083Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:47.609476 containerd[1440]: time="2024-09-04T17:30:47.609443083Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 3.219805073s" Sep 4 17:30:47.609534 containerd[1440]: time="2024-09-04T17:30:47.609477187Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Sep 4 17:30:47.633210 containerd[1440]: time="2024-09-04T17:30:47.633176024Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Sep 4 17:30:48.149065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount451975031.mount: Deactivated successfully. 
Sep 4 17:30:48.154225 containerd[1440]: time="2024-09-04T17:30:48.154177903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:48.154886 containerd[1440]: time="2024-09-04T17:30:48.154825699Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Sep 4 17:30:48.155964 containerd[1440]: time="2024-09-04T17:30:48.155928498Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:48.158052 containerd[1440]: time="2024-09-04T17:30:48.158017228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:48.158750 containerd[1440]: time="2024-09-04T17:30:48.158714517Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 525.506062ms" Sep 4 17:30:48.158803 containerd[1440]: time="2024-09-04T17:30:48.158748351Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Sep 4 17:30:48.184508 containerd[1440]: time="2024-09-04T17:30:48.184471528Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Sep 4 17:30:48.747170 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3660668564.mount: Deactivated successfully. 
Sep 4 17:30:50.769570 containerd[1440]: time="2024-09-04T17:30:50.769463824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:50.784408 containerd[1440]: time="2024-09-04T17:30:50.784335535Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Sep 4 17:30:50.800177 containerd[1440]: time="2024-09-04T17:30:50.800111623Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:50.815020 containerd[1440]: time="2024-09-04T17:30:50.814977673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:30:50.816115 containerd[1440]: time="2024-09-04T17:30:50.816044055Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.631532021s" Sep 4 17:30:50.816115 containerd[1440]: time="2024-09-04T17:30:50.816098647Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Sep 4 17:30:53.316085 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:30:53.332975 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:30:53.349912 systemd[1]: Reloading requested from client PID 2094 ('systemctl') (unit session-7.scope)... Sep 4 17:30:53.349929 systemd[1]: Reloading... 
Sep 4 17:30:53.432800 zram_generator::config[2131]: No configuration found. Sep 4 17:30:53.656021 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:30:53.731945 systemd[1]: Reloading finished in 381 ms. Sep 4 17:30:53.788919 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 4 17:30:53.789018 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 4 17:30:53.789280 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:30:53.792002 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:30:53.939650 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:30:53.945583 (kubelet)[2180]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:30:53.992764 kubelet[2180]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:30:53.993355 kubelet[2180]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:30:53.993355 kubelet[2180]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 4 17:30:53.994258 kubelet[2180]: I0904 17:30:53.994193 2180 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 4 17:30:54.282866 kubelet[2180]: I0904 17:30:54.282828 2180 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Sep 4 17:30:54.282866 kubelet[2180]: I0904 17:30:54.282859 2180 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 4 17:30:54.283114 kubelet[2180]: I0904 17:30:54.283093 2180 server.go:919] "Client rotation is on, will bootstrap in background"
Sep 4 17:30:54.299235 kubelet[2180]: E0904 17:30:54.299206 2180 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.161:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.161:6443: connect: connection refused
Sep 4 17:30:54.299838 kubelet[2180]: I0904 17:30:54.299815 2180 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 4 17:30:54.310378 kubelet[2180]: I0904 17:30:54.310350 2180 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 4 17:30:54.311418 kubelet[2180]: I0904 17:30:54.311389 2180 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 4 17:30:54.311988 kubelet[2180]: I0904 17:30:54.311856 2180 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Sep 4 17:30:54.312352 kubelet[2180]: I0904 17:30:54.312335 2180 topology_manager.go:138] "Creating topology manager with none policy"
Sep 4 17:30:54.312422 kubelet[2180]: I0904 17:30:54.312411 2180 container_manager_linux.go:301] "Creating device plugin manager"
Sep 4 17:30:54.312593 kubelet[2180]: I0904 17:30:54.312581 2180 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 17:30:54.313040 kubelet[2180]: I0904 17:30:54.312760 2180 kubelet.go:396] "Attempting to sync node with API server"
Sep 4 17:30:54.313040 kubelet[2180]: I0904 17:30:54.312799 2180 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 4 17:30:54.313040 kubelet[2180]: I0904 17:30:54.312840 2180 kubelet.go:312] "Adding apiserver pod source"
Sep 4 17:30:54.313040 kubelet[2180]: I0904 17:30:54.312860 2180 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 4 17:30:54.314748 kubelet[2180]: W0904 17:30:54.314686 2180 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.161:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.161:6443: connect: connection refused
Sep 4 17:30:54.314824 kubelet[2180]: E0904 17:30:54.314763 2180 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.161:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.161:6443: connect: connection refused
Sep 4 17:30:54.314945 kubelet[2180]: I0904 17:30:54.314926 2180 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Sep 4 17:30:54.317216 kubelet[2180]: W0904 17:30:54.317176 2180 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.161:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.161:6443: connect: connection refused
Sep 4 17:30:54.317303 kubelet[2180]: E0904 17:30:54.317232 2180 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.161:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.161:6443: connect: connection refused
Sep 4 17:30:54.319179 kubelet[2180]: I0904 17:30:54.319152 2180 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 4 17:30:54.320303 kubelet[2180]: W0904 17:30:54.320277 2180 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 4 17:30:54.321312 kubelet[2180]: I0904 17:30:54.320963 2180 server.go:1256] "Started kubelet"
Sep 4 17:30:54.321312 kubelet[2180]: I0904 17:30:54.321070 2180 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Sep 4 17:30:54.321312 kubelet[2180]: I0904 17:30:54.321252 2180 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 4 17:30:54.321977 kubelet[2180]: I0904 17:30:54.321643 2180 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 4 17:30:54.322151 kubelet[2180]: I0904 17:30:54.322130 2180 server.go:461] "Adding debug handlers to kubelet server"
Sep 4 17:30:54.322559 kubelet[2180]: I0904 17:30:54.322532 2180 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 4 17:30:54.324290 kubelet[2180]: E0904 17:30:54.324249 2180 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 17:30:54.324290 kubelet[2180]: I0904 17:30:54.324287 2180 volume_manager.go:291] "Starting Kubelet Volume Manager"
Sep 4 17:30:54.324434 kubelet[2180]: I0904 17:30:54.324366 2180 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Sep 4 17:30:54.324434 kubelet[2180]: I0904 17:30:54.324415 2180 reconciler_new.go:29] "Reconciler: start to sync state"
Sep 4 17:30:54.325914 kubelet[2180]: W0904 17:30:54.324678 2180 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.161:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.161:6443: connect: connection refused
Sep 4 17:30:54.325914 kubelet[2180]: E0904 17:30:54.325211 2180 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.161:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.161:6443: connect: connection refused
Sep 4 17:30:54.326036 kubelet[2180]: E0904 17:30:54.325949 2180 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.161:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.161:6443: connect: connection refused" interval="200ms"
Sep 4 17:30:54.326496 kubelet[2180]: I0904 17:30:54.326469 2180 factory.go:221] Registration of the systemd container factory successfully
Sep 4 17:30:54.326552 kubelet[2180]: I0904 17:30:54.326544 2180 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 4 17:30:54.327551 kubelet[2180]: E0904 17:30:54.327529 2180 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 4 17:30:54.327704 kubelet[2180]: I0904 17:30:54.327689 2180 factory.go:221] Registration of the containerd container factory successfully
Sep 4 17:30:54.368414 kubelet[2180]: E0904 17:30:54.368346 2180 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.161:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.161:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17f21ac82f049df9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-09-04 17:30:54.320934393 +0000 UTC m=+0.370130147,LastTimestamp:2024-09-04 17:30:54.320934393 +0000 UTC m=+0.370130147,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 4 17:30:54.376671 kubelet[2180]: I0904 17:30:54.376643 2180 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 4 17:30:54.378189 kubelet[2180]: I0904 17:30:54.378171 2180 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 4 17:30:54.378247 kubelet[2180]: I0904 17:30:54.378210 2180 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 4 17:30:54.378247 kubelet[2180]: I0904 17:30:54.378229 2180 kubelet.go:2329] "Starting kubelet main sync loop"
Sep 4 17:30:54.378294 kubelet[2180]: E0904 17:30:54.378285 2180 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 4 17:30:54.379916 kubelet[2180]: W0904 17:30:54.379881 2180 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.161:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.161:6443: connect: connection refused
Sep 4 17:30:54.380090 kubelet[2180]: E0904 17:30:54.379989 2180 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.161:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.161:6443: connect: connection refused
Sep 4 17:30:54.383740 kubelet[2180]: I0904 17:30:54.383709 2180 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 4 17:30:54.383740 kubelet[2180]: I0904 17:30:54.383734 2180 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 4 17:30:54.383836 kubelet[2180]: I0904 17:30:54.383753 2180 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 17:30:54.425804 kubelet[2180]: I0904 17:30:54.425785 2180 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Sep 4 17:30:54.426207 kubelet[2180]: E0904 17:30:54.426183 2180 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.161:6443/api/v1/nodes\": dial tcp 10.0.0.161:6443: connect: connection refused" node="localhost"
Sep 4 17:30:54.479353 kubelet[2180]: E0904 17:30:54.479304 2180 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 4 17:30:54.527030 kubelet[2180]: E0904 17:30:54.526997 2180 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.161:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.161:6443: connect: connection refused" interval="400ms"
Sep 4 17:30:54.627473 kubelet[2180]: I0904 17:30:54.627352 2180 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Sep 4 17:30:54.627743 kubelet[2180]: E0904 17:30:54.627710 2180 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.161:6443/api/v1/nodes\": dial tcp 10.0.0.161:6443: connect: connection refused" node="localhost"
Sep 4 17:30:54.647352 kubelet[2180]: I0904 17:30:54.647323 2180 policy_none.go:49] "None policy: Start"
Sep 4 17:30:54.647958 kubelet[2180]: I0904 17:30:54.647938 2180 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 4 17:30:54.648016 kubelet[2180]: I0904 17:30:54.647960 2180 state_mem.go:35] "Initializing new in-memory state store"
Sep 4 17:30:54.653621 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 4 17:30:54.667638 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 4 17:30:54.670584 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 4 17:30:54.679419 kubelet[2180]: E0904 17:30:54.679393 2180 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 4 17:30:54.679644 kubelet[2180]: I0904 17:30:54.679602 2180 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 4 17:30:54.679990 kubelet[2180]: I0904 17:30:54.679926 2180 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 4 17:30:54.681366 kubelet[2180]: E0904 17:30:54.681347 2180 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep 4 17:30:54.927783 kubelet[2180]: E0904 17:30:54.927637 2180 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.161:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.161:6443: connect: connection refused" interval="800ms"
Sep 4 17:30:55.029097 kubelet[2180]: I0904 17:30:55.029071 2180 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Sep 4 17:30:55.029484 kubelet[2180]: E0904 17:30:55.029387 2180 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.161:6443/api/v1/nodes\": dial tcp 10.0.0.161:6443: connect: connection refused" node="localhost"
Sep 4 17:30:55.080518 kubelet[2180]: I0904 17:30:55.080482 2180 topology_manager.go:215] "Topology Admit Handler" podUID="deb7a428322c1e6f70b45ddf456f7fe0" podNamespace="kube-system" podName="kube-apiserver-localhost"
Sep 4 17:30:55.082928 kubelet[2180]: I0904 17:30:55.082903 2180 topology_manager.go:215] "Topology Admit Handler" podUID="7fa6213ac08f24a6b78f4cd3838d26c9" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Sep 4 17:30:55.083829 kubelet[2180]: I0904 17:30:55.083811 2180 topology_manager.go:215] "Topology Admit Handler" podUID="d9ddd765c3b0fcde29edfee4da9578f6" podNamespace="kube-system" podName="kube-scheduler-localhost"
Sep 4 17:30:55.092551 systemd[1]: Created slice kubepods-burstable-poddeb7a428322c1e6f70b45ddf456f7fe0.slice - libcontainer container kubepods-burstable-poddeb7a428322c1e6f70b45ddf456f7fe0.slice.
Sep 4 17:30:55.107279 systemd[1]: Created slice kubepods-burstable-pod7fa6213ac08f24a6b78f4cd3838d26c9.slice - libcontainer container kubepods-burstable-pod7fa6213ac08f24a6b78f4cd3838d26c9.slice.
Sep 4 17:30:55.110978 systemd[1]: Created slice kubepods-burstable-podd9ddd765c3b0fcde29edfee4da9578f6.slice - libcontainer container kubepods-burstable-podd9ddd765c3b0fcde29edfee4da9578f6.slice.
Sep 4 17:30:55.128817 kubelet[2180]: I0904 17:30:55.128792 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 17:30:55.128915 kubelet[2180]: I0904 17:30:55.128830 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 17:30:55.128915 kubelet[2180]: I0904 17:30:55.128852 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 17:30:55.128915 kubelet[2180]: I0904 17:30:55.128872 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/deb7a428322c1e6f70b45ddf456f7fe0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"deb7a428322c1e6f70b45ddf456f7fe0\") " pod="kube-system/kube-apiserver-localhost"
Sep 4 17:30:55.128915 kubelet[2180]: I0904 17:30:55.128892 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 17:30:55.129064 kubelet[2180]: I0904 17:30:55.128931 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 17:30:55.129064 kubelet[2180]: I0904 17:30:55.128950 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d9ddd765c3b0fcde29edfee4da9578f6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d9ddd765c3b0fcde29edfee4da9578f6\") " pod="kube-system/kube-scheduler-localhost"
Sep 4 17:30:55.129064 kubelet[2180]: I0904 17:30:55.128983 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/deb7a428322c1e6f70b45ddf456f7fe0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"deb7a428322c1e6f70b45ddf456f7fe0\") " pod="kube-system/kube-apiserver-localhost"
Sep 4 17:30:55.129064 kubelet[2180]: I0904 17:30:55.129017 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/deb7a428322c1e6f70b45ddf456f7fe0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"deb7a428322c1e6f70b45ddf456f7fe0\") " pod="kube-system/kube-apiserver-localhost"
Sep 4 17:30:55.407897 kubelet[2180]: E0904 17:30:55.407846 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:30:55.408400 containerd[1440]: time="2024-09-04T17:30:55.408361511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:deb7a428322c1e6f70b45ddf456f7fe0,Namespace:kube-system,Attempt:0,}"
Sep 4 17:30:55.409544 kubelet[2180]: E0904 17:30:55.409509 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:30:55.409986 containerd[1440]: time="2024-09-04T17:30:55.409953636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7fa6213ac08f24a6b78f4cd3838d26c9,Namespace:kube-system,Attempt:0,}"
Sep 4 17:30:55.415186 kubelet[2180]: E0904 17:30:55.415160 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:30:55.415419 containerd[1440]: time="2024-09-04T17:30:55.415392049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d9ddd765c3b0fcde29edfee4da9578f6,Namespace:kube-system,Attempt:0,}"
Sep 4 17:30:55.416996 kubelet[2180]: W0904 17:30:55.416912 2180 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.161:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.161:6443: connect: connection refused
Sep 4 17:30:55.417033 kubelet[2180]: E0904 17:30:55.416999 2180 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.161:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.161:6443: connect: connection refused
Sep 4 17:30:55.556117 kubelet[2180]: W0904 17:30:55.556043 2180 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.161:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.161:6443: connect: connection refused
Sep 4 17:30:55.556117 kubelet[2180]: E0904 17:30:55.556107 2180 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.161:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.161:6443: connect: connection refused
Sep 4 17:30:55.728651 kubelet[2180]: E0904 17:30:55.728519 2180 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.161:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.161:6443: connect: connection refused" interval="1.6s"
Sep 4 17:30:55.833163 kubelet[2180]: I0904 17:30:55.833122 2180 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Sep 4 17:30:55.833563 kubelet[2180]: E0904 17:30:55.833532 2180 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.161:6443/api/v1/nodes\": dial tcp 10.0.0.161:6443: connect: connection refused" node="localhost"
Sep 4 17:30:55.847009 kubelet[2180]: W0904 17:30:55.846956 2180 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.161:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.161:6443: connect: connection refused
Sep 4 17:30:55.847043 kubelet[2180]: E0904 17:30:55.847012 2180 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.161:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.161:6443: connect: connection refused
Sep 4 17:30:55.874620 kubelet[2180]: W0904 17:30:55.874545 2180 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.161:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.161:6443: connect: connection refused
Sep 4 17:30:55.874620 kubelet[2180]: E0904 17:30:55.874618 2180 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.161:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.161:6443: connect: connection refused
Sep 4 17:30:55.948819 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2429070928.mount: Deactivated successfully.
Sep 4 17:30:56.354708 kubelet[2180]: E0904 17:30:56.354642 2180 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.161:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.161:6443: connect: connection refused
Sep 4 17:30:56.445788 containerd[1440]: time="2024-09-04T17:30:56.445721285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 17:30:56.446820 containerd[1440]: time="2024-09-04T17:30:56.446785975Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 17:30:56.447521 containerd[1440]: time="2024-09-04T17:30:56.447475464Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Sep 4 17:30:56.448474 containerd[1440]: time="2024-09-04T17:30:56.448437944Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 17:30:56.449443 containerd[1440]: time="2024-09-04T17:30:56.449413056Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 4 17:30:56.450282 containerd[1440]: time="2024-09-04T17:30:56.450250522Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 17:30:56.451063 containerd[1440]: time="2024-09-04T17:30:56.450983762Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 4 17:30:56.454053 containerd[1440]: time="2024-09-04T17:30:56.454019987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 17:30:56.456240 containerd[1440]: time="2024-09-04T17:30:56.456203849Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.047742361s"
Sep 4 17:30:56.457120 containerd[1440]: time="2024-09-04T17:30:56.457089305Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.041640199s"
Sep 4 17:30:56.457845 containerd[1440]: time="2024-09-04T17:30:56.457796646Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.047745368s"
Sep 4 17:30:56.769526 containerd[1440]: time="2024-09-04T17:30:56.769313580Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:30:56.769526 containerd[1440]: time="2024-09-04T17:30:56.769377159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:30:56.770390 containerd[1440]: time="2024-09-04T17:30:56.770344497Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:30:56.770390 containerd[1440]: time="2024-09-04T17:30:56.770367179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:30:56.770994 containerd[1440]: time="2024-09-04T17:30:56.770919692Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:30:56.771062 containerd[1440]: time="2024-09-04T17:30:56.770986057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:30:56.771062 containerd[1440]: time="2024-09-04T17:30:56.771010582Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:30:56.771062 containerd[1440]: time="2024-09-04T17:30:56.771028295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:30:56.780724 containerd[1440]: time="2024-09-04T17:30:56.780401295Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:30:56.781041 containerd[1440]: time="2024-09-04T17:30:56.780988573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:30:56.781134 containerd[1440]: time="2024-09-04T17:30:56.781101605Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:30:56.781252 containerd[1440]: time="2024-09-04T17:30:56.781204858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:30:56.807923 systemd[1]: Started cri-containerd-78b2d237b2385a91bf12e8276f7ad4986f309a969a1f3eaf622d64b34291e6b5.scope - libcontainer container 78b2d237b2385a91bf12e8276f7ad4986f309a969a1f3eaf622d64b34291e6b5.
Sep 4 17:30:56.812608 systemd[1]: Started cri-containerd-afd150897d49c7d6f97083d884f4f5b2f1760742b5ddbcb00bf548910a44a400.scope - libcontainer container afd150897d49c7d6f97083d884f4f5b2f1760742b5ddbcb00bf548910a44a400.
Sep 4 17:30:56.817440 systemd[1]: Started cri-containerd-fc9ed76626084731493afc887dc2de43b710d010bce2433a39e94b4db1a5c232.scope - libcontainer container fc9ed76626084731493afc887dc2de43b710d010bce2433a39e94b4db1a5c232.
Sep 4 17:30:56.869604 containerd[1440]: time="2024-09-04T17:30:56.869559168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d9ddd765c3b0fcde29edfee4da9578f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"afd150897d49c7d6f97083d884f4f5b2f1760742b5ddbcb00bf548910a44a400\""
Sep 4 17:30:56.870662 kubelet[2180]: E0904 17:30:56.870640 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:30:56.873598 containerd[1440]: time="2024-09-04T17:30:56.873533938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:deb7a428322c1e6f70b45ddf456f7fe0,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc9ed76626084731493afc887dc2de43b710d010bce2433a39e94b4db1a5c232\""
Sep 4 17:30:56.874379 kubelet[2180]: E0904 17:30:56.874312 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:30:56.874564 containerd[1440]: time="2024-09-04T17:30:56.874361014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7fa6213ac08f24a6b78f4cd3838d26c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"78b2d237b2385a91bf12e8276f7ad4986f309a969a1f3eaf622d64b34291e6b5\""
Sep 4 17:30:56.875015 kubelet[2180]: E0904 17:30:56.874990 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:30:56.876834 containerd[1440]: time="2024-09-04T17:30:56.876728479Z" level=info msg="CreateContainer within sandbox \"afd150897d49c7d6f97083d884f4f5b2f1760742b5ddbcb00bf548910a44a400\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 4 17:30:56.877402 containerd[1440]: time="2024-09-04T17:30:56.877197976Z" level=info msg="CreateContainer within sandbox \"78b2d237b2385a91bf12e8276f7ad4986f309a969a1f3eaf622d64b34291e6b5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 4 17:30:56.877402 containerd[1440]: time="2024-09-04T17:30:56.877281673Z" level=info msg="CreateContainer within sandbox \"fc9ed76626084731493afc887dc2de43b710d010bce2433a39e94b4db1a5c232\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 4 17:30:56.902441 containerd[1440]: time="2024-09-04T17:30:56.902373765Z" level=info msg="CreateContainer within sandbox \"afd150897d49c7d6f97083d884f4f5b2f1760742b5ddbcb00bf548910a44a400\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"049c175c42e60b33ee5e8d8508c80e96bbf6e66732b64ea238ef8bf6060f3547\""
Sep 4 17:30:56.903235 containerd[1440]: time="2024-09-04T17:30:56.903195050Z" level=info msg="StartContainer for \"049c175c42e60b33ee5e8d8508c80e96bbf6e66732b64ea238ef8bf6060f3547\""
Sep 4 17:30:56.910395 containerd[1440]: time="2024-09-04T17:30:56.910337380Z" level=info msg="CreateContainer within sandbox \"fc9ed76626084731493afc887dc2de43b710d010bce2433a39e94b4db1a5c232\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4547f23a97de5997d64af815bdc0395ab7b46b2683156025c8dee5cf8d0e0077\""
Sep 4 17:30:56.910902 containerd[1440]: time="2024-09-04T17:30:56.910860659Z" level=info msg="StartContainer for \"4547f23a97de5997d64af815bdc0395ab7b46b2683156025c8dee5cf8d0e0077\""
Sep 4 17:30:56.911265 containerd[1440]: time="2024-09-04T17:30:56.911225610Z" level=info msg="CreateContainer within sandbox \"78b2d237b2385a91bf12e8276f7ad4986f309a969a1f3eaf622d64b34291e6b5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2d0cb3f2e17ed0aaca8348c8a345a724ce6328bfa43d73b47f02f41568fe210b\""
Sep 4 17:30:56.911974 containerd[1440]: time="2024-09-04T17:30:56.911869594Z" level=info msg="StartContainer for \"2d0cb3f2e17ed0aaca8348c8a345a724ce6328bfa43d73b47f02f41568fe210b\""
Sep 4 17:30:56.934983 systemd[1]: Started cri-containerd-049c175c42e60b33ee5e8d8508c80e96bbf6e66732b64ea238ef8bf6060f3547.scope - libcontainer container 049c175c42e60b33ee5e8d8508c80e96bbf6e66732b64ea238ef8bf6060f3547.
Sep 4 17:30:56.964940 systemd[1]: Started cri-containerd-2d0cb3f2e17ed0aaca8348c8a345a724ce6328bfa43d73b47f02f41568fe210b.scope - libcontainer container 2d0cb3f2e17ed0aaca8348c8a345a724ce6328bfa43d73b47f02f41568fe210b.
Sep 4 17:30:56.966593 systemd[1]: Started cri-containerd-4547f23a97de5997d64af815bdc0395ab7b46b2683156025c8dee5cf8d0e0077.scope - libcontainer container 4547f23a97de5997d64af815bdc0395ab7b46b2683156025c8dee5cf8d0e0077.
Sep 4 17:30:56.987273 containerd[1440]: time="2024-09-04T17:30:56.986687123Z" level=info msg="StartContainer for \"049c175c42e60b33ee5e8d8508c80e96bbf6e66732b64ea238ef8bf6060f3547\" returns successfully"
Sep 4 17:30:57.042614 containerd[1440]: time="2024-09-04T17:30:57.042474399Z" level=info msg="StartContainer for \"2d0cb3f2e17ed0aaca8348c8a345a724ce6328bfa43d73b47f02f41568fe210b\" returns successfully"
Sep 4 17:30:57.042614 containerd[1440]: time="2024-09-04T17:30:57.042562774Z" level=info msg="StartContainer for \"4547f23a97de5997d64af815bdc0395ab7b46b2683156025c8dee5cf8d0e0077\" returns successfully"
Sep 4 17:30:57.083938 kubelet[2180]: W0904 17:30:57.083870 2180 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.161:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.161:6443: connect: connection refused
Sep 4 17:30:57.083938 kubelet[2180]: E0904 17:30:57.083928 2180 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.161:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.161:6443: connect: connection refused
Sep 4 17:30:57.391690 kubelet[2180]: E0904 17:30:57.391454 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:30:57.393288 kubelet[2180]: E0904 17:30:57.392552 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:30:57.394187 kubelet[2180]: E0904 17:30:57.394128 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:30:57.435686 kubelet[2180]: I0904 17:30:57.435280 2180 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Sep 4 17:30:58.367785 kubelet[2180]: E0904 17:30:58.367713 2180 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Sep 4 17:30:58.396011 kubelet[2180]: E0904 17:30:58.395988 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:30:58.455481 kubelet[2180]: I0904 17:30:58.455336 2180 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Sep 4 17:30:58.470089 kubelet[2180]: E0904 17:30:58.470049 2180 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 17:30:58.570899 kubelet[2180]: E0904 17:30:58.570830 2180 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 17:30:58.671529 kubelet[2180]: E0904 17:30:58.671350 2180 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 17:30:58.771679 kubelet[2180]: E0904 17:30:58.771624 2180 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 17:30:58.872502 kubelet[2180]: E0904 17:30:58.872434 2180 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 17:30:58.973039 kubelet[2180]: E0904 17:30:58.972930 2180 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 17:30:59.073665 kubelet[2180]: E0904 17:30:59.073611 2180 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 17:30:59.174172 kubelet[2180]: E0904 17:30:59.174134 2180 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 17:30:59.274659 kubelet[2180]: E0904 17:30:59.274625 2180 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 17:30:59.375658 kubelet[2180]: E0904 17:30:59.375611 2180 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 17:30:59.475943 kubelet[2180]: E0904 17:30:59.475829 2180 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 17:30:59.576517 kubelet[2180]: E0904 17:30:59.576371 2180 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 17:30:59.676907 kubelet[2180]: E0904 17:30:59.676860 2180 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 17:30:59.777633 kubelet[2180]: E0904 17:30:59.777592 2180 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 17:30:59.878455 kubelet[2180]: E0904 17:30:59.878306 2180 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 17:30:59.978912 kubelet[2180]: E0904 17:30:59.978857 2180 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 17:31:00.079471 kubelet[2180]: E0904 17:31:00.079400 2180 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 17:31:00.180087 kubelet[2180]: E0904 17:31:00.179987 2180 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 17:31:00.280637 kubelet[2180]: E0904 17:31:00.280594 2180 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 17:31:00.381523 kubelet[2180]: E0904 17:31:00.381485 2180 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 17:31:00.482215 kubelet[2180]: E0904 17:31:00.482078 2180 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 17:31:00.582980 kubelet[2180]: E0904 17:31:00.582922 2180 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 17:31:00.683515 kubelet[2180]: E0904 17:31:00.683465 2180 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 17:31:00.784028 kubelet[2180]: E0904 17:31:00.783997 2180 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 17:31:00.884671 kubelet[2180]: E0904 17:31:00.884632 2180 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 17:31:00.935855 kubelet[2180]: E0904 17:31:00.935832 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:31:00.985721 kubelet[2180]: E0904 17:31:00.985676 2180 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 17:31:01.026884 systemd[1]: Reloading requested from client PID 2461 ('systemctl') (unit session-7.scope)...
Sep 4 17:31:01.026899 systemd[1]: Reloading...
Sep 4 17:31:01.087974 kubelet[2180]: E0904 17:31:01.087839 2180 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 17:31:01.105807 zram_generator::config[2498]: No configuration found.
Sep 4 17:31:01.188385 kubelet[2180]: E0904 17:31:01.188342 2180 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 17:31:01.210097 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:31:01.288865 kubelet[2180]: E0904 17:31:01.288830 2180 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 17:31:01.299411 systemd[1]: Reloading finished in 272 ms.
Sep 4 17:31:01.342744 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:31:01.366188 systemd[1]: kubelet.service: Deactivated successfully.
Sep 4 17:31:01.366512 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:31:01.373975 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:31:01.529138 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:31:01.539074 (kubelet)[2543]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 4 17:31:01.586493 kubelet[2543]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 17:31:01.586493 kubelet[2543]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 4 17:31:01.586493 kubelet[2543]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 17:31:01.586844 kubelet[2543]: I0904 17:31:01.586541 2543 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 4 17:31:01.591566 kubelet[2543]: I0904 17:31:01.591540 2543 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Sep 4 17:31:01.591566 kubelet[2543]: I0904 17:31:01.591562 2543 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 4 17:31:01.591790 kubelet[2543]: I0904 17:31:01.591762 2543 server.go:919] "Client rotation is on, will bootstrap in background"
Sep 4 17:31:01.593231 kubelet[2543]: I0904 17:31:01.593166 2543 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 4 17:31:01.593425 sudo[2556]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Sep 4 17:31:01.593935 sudo[2556]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Sep 4 17:31:01.595364 kubelet[2543]: I0904 17:31:01.595321 2543 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 4 17:31:01.606253 kubelet[2543]: I0904 17:31:01.606193 2543 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 4 17:31:01.606970 kubelet[2543]: I0904 17:31:01.606602 2543 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 4 17:31:01.606970 kubelet[2543]: I0904 17:31:01.606810 2543 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Sep 4 17:31:01.606970 kubelet[2543]: I0904 17:31:01.606838 2543 topology_manager.go:138] "Creating topology manager with none policy"
Sep 4 17:31:01.606970 kubelet[2543]: I0904 17:31:01.606848 2543 container_manager_linux.go:301] "Creating device plugin manager"
Sep 4 17:31:01.606970 kubelet[2543]: I0904 17:31:01.606879 2543 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 17:31:01.607192 kubelet[2543]: I0904 17:31:01.607179 2543 kubelet.go:396] "Attempting to sync node with API server"
Sep 4 17:31:01.607829 kubelet[2543]: I0904 17:31:01.607816 2543 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 4 17:31:01.607940 kubelet[2543]: I0904 17:31:01.607929 2543 kubelet.go:312] "Adding apiserver pod source"
Sep 4 17:31:01.610816 kubelet[2543]: I0904 17:31:01.610801 2543 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 4 17:31:01.614559 kubelet[2543]: I0904 17:31:01.611657 2543 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Sep 4 17:31:01.614559 kubelet[2543]: I0904 17:31:01.613061 2543 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 4 17:31:01.614559 kubelet[2543]: I0904 17:31:01.613832 2543 server.go:1256] "Started kubelet"
Sep 4 17:31:01.614559 kubelet[2543]: I0904 17:31:01.614038 2543 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 4 17:31:01.614559 kubelet[2543]: I0904 17:31:01.614212 2543 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 4 17:31:01.616067 kubelet[2543]: I0904 17:31:01.616018 2543 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Sep 4 17:31:01.617698 kubelet[2543]: I0904 17:31:01.617658 2543 server.go:461] "Adding debug handlers to kubelet server"
Sep 4 17:31:01.619800 kubelet[2543]: E0904 17:31:01.618821 2543 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 4 17:31:01.619800 kubelet[2543]: I0904 17:31:01.619627 2543 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 4 17:31:01.619868 kubelet[2543]: I0904 17:31:01.619858 2543 volume_manager.go:291] "Starting Kubelet Volume Manager"
Sep 4 17:31:01.620161 kubelet[2543]: I0904 17:31:01.620138 2543 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Sep 4 17:31:01.621859 kubelet[2543]: I0904 17:31:01.621840 2543 reconciler_new.go:29] "Reconciler: start to sync state"
Sep 4 17:31:01.624624 kubelet[2543]: I0904 17:31:01.624352 2543 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 4 17:31:01.628082 kubelet[2543]: I0904 17:31:01.627103 2543 factory.go:221] Registration of the containerd container factory successfully
Sep 4 17:31:01.628082 kubelet[2543]: I0904 17:31:01.627992 2543 factory.go:221] Registration of the systemd container factory successfully
Sep 4 17:31:01.633708 kubelet[2543]: I0904 17:31:01.633670 2543 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 4 17:31:01.637530 kubelet[2543]: I0904 17:31:01.637502 2543 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 4 17:31:01.637573 kubelet[2543]: I0904 17:31:01.637542 2543 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 4 17:31:01.637573 kubelet[2543]: I0904 17:31:01.637561 2543 kubelet.go:2329] "Starting kubelet main sync loop"
Sep 4 17:31:01.637628 kubelet[2543]: E0904 17:31:01.637613 2543 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 4 17:31:01.665424 kubelet[2543]: I0904 17:31:01.665397 2543 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 4 17:31:01.665424 kubelet[2543]: I0904 17:31:01.665415 2543 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 4 17:31:01.665513 kubelet[2543]: I0904 17:31:01.665432 2543 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 17:31:01.665579 kubelet[2543]: I0904 17:31:01.665562 2543 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 4 17:31:01.665603 kubelet[2543]: I0904 17:31:01.665595 2543 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 4 17:31:01.665603 kubelet[2543]: I0904 17:31:01.665603 2543 policy_none.go:49] "None policy: Start"
Sep 4 17:31:01.666329 kubelet[2543]: I0904 17:31:01.666123 2543 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 4 17:31:01.666329 kubelet[2543]: I0904 17:31:01.666155 2543 state_mem.go:35] "Initializing new in-memory state store"
Sep 4 17:31:01.666329 kubelet[2543]: I0904 17:31:01.666314 2543 state_mem.go:75] "Updated machine memory state"
Sep 4 17:31:01.670324 kubelet[2543]: I0904 17:31:01.670301 2543 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 4 17:31:01.670850 kubelet[2543]: I0904 17:31:01.670637 2543 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 4 17:31:01.738032 kubelet[2543]: I0904 17:31:01.737983 2543 topology_manager.go:215] "Topology Admit Handler" podUID="d9ddd765c3b0fcde29edfee4da9578f6" podNamespace="kube-system" podName="kube-scheduler-localhost"
Sep 4 17:31:01.738133 kubelet[2543]: I0904 17:31:01.738084 2543 topology_manager.go:215] "Topology Admit Handler" podUID="deb7a428322c1e6f70b45ddf456f7fe0" podNamespace="kube-system" podName="kube-apiserver-localhost"
Sep 4 17:31:01.738172 kubelet[2543]: I0904 17:31:01.738148 2543 topology_manager.go:215] "Topology Admit Handler" podUID="7fa6213ac08f24a6b78f4cd3838d26c9" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Sep 4 17:31:01.775911 kubelet[2543]: I0904 17:31:01.775871 2543 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Sep 4 17:31:01.781191 kubelet[2543]: I0904 17:31:01.781169 2543 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Sep 4 17:31:01.781249 kubelet[2543]: I0904 17:31:01.781239 2543 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Sep 4 17:31:01.823253 kubelet[2543]: I0904 17:31:01.823223 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 17:31:01.823320 kubelet[2543]: I0904 17:31:01.823291 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 17:31:01.823320 kubelet[2543]: I0904 17:31:01.823317 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 17:31:01.823363 kubelet[2543]: I0904 17:31:01.823341 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/deb7a428322c1e6f70b45ddf456f7fe0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"deb7a428322c1e6f70b45ddf456f7fe0\") " pod="kube-system/kube-apiserver-localhost"
Sep 4 17:31:01.823406 kubelet[2543]: I0904 17:31:01.823364 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 17:31:01.823406 kubelet[2543]: I0904 17:31:01.823395 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 17:31:01.823462 kubelet[2543]: I0904 17:31:01.823418 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d9ddd765c3b0fcde29edfee4da9578f6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d9ddd765c3b0fcde29edfee4da9578f6\") " pod="kube-system/kube-scheduler-localhost"
Sep 4 17:31:01.823518 kubelet[2543]: I0904 17:31:01.823480 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/deb7a428322c1e6f70b45ddf456f7fe0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"deb7a428322c1e6f70b45ddf456f7fe0\") " pod="kube-system/kube-apiserver-localhost"
Sep 4 17:31:01.823518 kubelet[2543]: I0904 17:31:01.823520 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/deb7a428322c1e6f70b45ddf456f7fe0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"deb7a428322c1e6f70b45ddf456f7fe0\") " pod="kube-system/kube-apiserver-localhost"
Sep 4 17:31:02.046347 kubelet[2543]: E0904 17:31:02.046316 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:31:02.046626 kubelet[2543]: E0904 17:31:02.046543 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:31:02.047708 kubelet[2543]: E0904 17:31:02.047590 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:31:02.063909 sudo[2556]: pam_unix(sudo:session): session closed for user root
Sep 4 17:31:02.612092 kubelet[2543]: I0904 17:31:02.612056 2543 apiserver.go:52] "Watching apiserver"
Sep 4 17:31:02.621374 kubelet[2543]: I0904 17:31:02.621315 2543 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Sep 4 17:31:02.650166 kubelet[2543]: E0904 17:31:02.650092 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:31:02.655159 kubelet[2543]: E0904 17:31:02.655130 2543 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 4 17:31:02.655237 kubelet[2543]: E0904 17:31:02.655169 2543 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Sep 4 17:31:02.655426 kubelet[2543]: E0904 17:31:02.655399 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:31:02.655599 kubelet[2543]: E0904 17:31:02.655569 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:31:02.678056 kubelet[2543]: I0904 17:31:02.677841 2543 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.677749516 podStartE2EDuration="1.677749516s" podCreationTimestamp="2024-09-04 17:31:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:31:02.671189919 +0000 UTC m=+1.127962892" watchObservedRunningTime="2024-09-04 17:31:02.677749516 +0000 UTC m=+1.134522489"
Sep 4 17:31:02.687396 kubelet[2543]: I0904 17:31:02.686083 2543 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.6860406129999999 podStartE2EDuration="1.686040613s" podCreationTimestamp="2024-09-04 17:31:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:31:02.677968666 +0000 UTC m=+1.134741639" watchObservedRunningTime="2024-09-04 17:31:02.686040613 +0000 UTC m=+1.142813587"
Sep 4 17:31:03.199033 sudo[1628]: pam_unix(sudo:session): session closed for user root
Sep 4 17:31:03.200896 sshd[1625]: pam_unix(sshd:session): session closed for user core
Sep 4 17:31:03.205629 systemd[1]: sshd@6-10.0.0.161:22-10.0.0.1:51994.service: Deactivated successfully.
Sep 4 17:31:03.207892 systemd[1]: session-7.scope: Deactivated successfully.
Sep 4 17:31:03.208094 systemd[1]: session-7.scope: Consumed 4.742s CPU time, 138.1M memory peak, 0B memory swap peak.
Sep 4 17:31:03.208564 systemd-logind[1428]: Session 7 logged out. Waiting for processes to exit.
Sep 4 17:31:03.209695 systemd-logind[1428]: Removed session 7.
Sep 4 17:31:03.651233 kubelet[2543]: E0904 17:31:03.651197 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:31:03.651713 kubelet[2543]: E0904 17:31:03.651535 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:31:05.946181 kubelet[2543]: E0904 17:31:05.946138 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:31:06.863787 update_engine[1430]: I0904 17:31:06.863704 1430 update_attempter.cc:509] Updating boot flags...
Sep 4 17:31:07.044900 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2630)
Sep 4 17:31:07.074024 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2632)
Sep 4 17:31:07.111311 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2632)
Sep 4 17:31:08.354842 kubelet[2543]: E0904 17:31:08.354807 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:31:08.422598 kubelet[2543]: I0904 17:31:08.422547 2543 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=7.422483003 podStartE2EDuration="7.422483003s" podCreationTimestamp="2024-09-04 17:31:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:31:02.686197026 +0000 UTC m=+1.142969999" watchObservedRunningTime="2024-09-04 17:31:08.422483003 +0000 UTC m=+6.879255976"
Sep 4 17:31:08.659021 kubelet[2543]: E0904 17:31:08.658918 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:31:08.724125 kubelet[2543]: E0904 17:31:08.724045 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:31:09.659708 kubelet[2543]: E0904 17:31:09.659672 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:31:14.344667 kubelet[2543]: I0904 17:31:14.344636 2543 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 4 17:31:14.345127 containerd[1440]: time="2024-09-04T17:31:14.345042769Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 4 17:31:14.345377 kubelet[2543]: I0904 17:31:14.345215 2543 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 4 17:31:15.057630 kubelet[2543]: I0904 17:31:15.057570 2543 topology_manager.go:215] "Topology Admit Handler" podUID="299b76b0-7b31-4864-aa28-a32b26b791e4" podNamespace="kube-system" podName="kube-proxy-wj7fq"
Sep 4 17:31:15.060830 kubelet[2543]: I0904 17:31:15.059439 2543 topology_manager.go:215] "Topology Admit Handler" podUID="cd1d19fc-16e8-4cb6-86b8-8997986e1264" podNamespace="kube-system" podName="cilium-hwxf8"
Sep 4 17:31:15.071811 systemd[1]: Created slice kubepods-besteffort-pod299b76b0_7b31_4864_aa28_a32b26b791e4.slice - libcontainer container kubepods-besteffort-pod299b76b0_7b31_4864_aa28_a32b26b791e4.slice.
Sep 4 17:31:15.081393 systemd[1]: Created slice kubepods-burstable-podcd1d19fc_16e8_4cb6_86b8_8997986e1264.slice - libcontainer container kubepods-burstable-podcd1d19fc_16e8_4cb6_86b8_8997986e1264.slice.
Sep 4 17:31:15.107029 kubelet[2543]: I0904 17:31:15.106973 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/299b76b0-7b31-4864-aa28-a32b26b791e4-kube-proxy\") pod \"kube-proxy-wj7fq\" (UID: \"299b76b0-7b31-4864-aa28-a32b26b791e4\") " pod="kube-system/kube-proxy-wj7fq"
Sep 4 17:31:15.107029 kubelet[2543]: I0904 17:31:15.107026 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cd1d19fc-16e8-4cb6-86b8-8997986e1264-cilium-run\") pod \"cilium-hwxf8\" (UID: \"cd1d19fc-16e8-4cb6-86b8-8997986e1264\") " pod="kube-system/cilium-hwxf8"
Sep 4 17:31:15.107029 kubelet[2543]: I0904 17:31:15.107046 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cd1d19fc-16e8-4cb6-86b8-8997986e1264-hubble-tls\") pod \"cilium-hwxf8\" (UID: \"cd1d19fc-16e8-4cb6-86b8-8997986e1264\") " pod="kube-system/cilium-hwxf8"
Sep 4 17:31:15.107234 kubelet[2543]: I0904 17:31:15.107064 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cd1d19fc-16e8-4cb6-86b8-8997986e1264-hostproc\") pod \"cilium-hwxf8\" (UID: \"cd1d19fc-16e8-4cb6-86b8-8997986e1264\") " pod="kube-system/cilium-hwxf8"
Sep 4 17:31:15.107234 kubelet[2543]: I0904 17:31:15.107087 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cd1d19fc-16e8-4cb6-86b8-8997986e1264-host-proc-sys-kernel\") pod \"cilium-hwxf8\" (UID: \"cd1d19fc-16e8-4cb6-86b8-8997986e1264\") " pod="kube-system/cilium-hwxf8"
Sep 4 17:31:15.107234 kubelet[2543]: I0904 17:31:15.107109 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cd1d19fc-16e8-4cb6-86b8-8997986e1264-cni-path\") pod \"cilium-hwxf8\" (UID: \"cd1d19fc-16e8-4cb6-86b8-8997986e1264\") " pod="kube-system/cilium-hwxf8"
Sep 4 17:31:15.107234 kubelet[2543]: I0904 17:31:15.107168 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cd1d19fc-16e8-4cb6-86b8-8997986e1264-xtables-lock\") pod \"cilium-hwxf8\" (UID: \"cd1d19fc-16e8-4cb6-86b8-8997986e1264\") " pod="kube-system/cilium-hwxf8"
Sep 4 17:31:15.107234 kubelet[2543]: I0904 17:31:15.107211 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cd1d19fc-16e8-4cb6-86b8-8997986e1264-host-proc-sys-net\") pod \"cilium-hwxf8\" (UID: \"cd1d19fc-16e8-4cb6-86b8-8997986e1264\") " pod="kube-system/cilium-hwxf8"
Sep 4 17:31:15.107234 kubelet[2543]: I0904 17:31:15.107230 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cd1d19fc-16e8-4cb6-86b8-8997986e1264-cilium-config-path\") pod \"cilium-hwxf8\" (UID: \"cd1d19fc-16e8-4cb6-86b8-8997986e1264\") " pod="kube-system/cilium-hwxf8"
Sep 4 17:31:15.108647 kubelet[2543]: I0904 17:31:15.108621 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cd1d19fc-16e8-4cb6-86b8-8997986e1264-lib-modules\") pod \"cilium-hwxf8\" (UID: \"cd1d19fc-16e8-4cb6-86b8-8997986e1264\") " pod="kube-system/cilium-hwxf8"
Sep 4 17:31:15.108704 kubelet[2543]: I0904 17:31:15.108667 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxcmx\" (UniqueName: \"kubernetes.io/projected/cd1d19fc-16e8-4cb6-86b8-8997986e1264-kube-api-access-qxcmx\") pod \"cilium-hwxf8\" (UID: \"cd1d19fc-16e8-4cb6-86b8-8997986e1264\") " pod="kube-system/cilium-hwxf8"
Sep 4 17:31:15.108704 kubelet[2543]: I0904 17:31:15.108695 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/299b76b0-7b31-4864-aa28-a32b26b791e4-lib-modules\") pod \"kube-proxy-wj7fq\" (UID: \"299b76b0-7b31-4864-aa28-a32b26b791e4\") " pod="kube-system/kube-proxy-wj7fq"
Sep 4 17:31:15.108751 kubelet[2543]: I0904 17:31:15.108719 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cd1d19fc-16e8-4cb6-86b8-8997986e1264-cilium-cgroup\") pod \"cilium-hwxf8\" (UID: \"cd1d19fc-16e8-4cb6-86b8-8997986e1264\") " pod="kube-system/cilium-hwxf8"
Sep 4 17:31:15.108751 kubelet[2543]: I0904 17:31:15.108744 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/299b76b0-7b31-4864-aa28-a32b26b791e4-xtables-lock\") pod \"kube-proxy-wj7fq\" (UID: \"299b76b0-7b31-4864-aa28-a32b26b791e4\") " pod="kube-system/kube-proxy-wj7fq"
Sep 4 17:31:15.108825 kubelet[2543]: I0904 17:31:15.108797 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zwb5\" (UniqueName: \"kubernetes.io/projected/299b76b0-7b31-4864-aa28-a32b26b791e4-kube-api-access-2zwb5\") pod \"kube-proxy-wj7fq\" (UID: \"299b76b0-7b31-4864-aa28-a32b26b791e4\") " pod="kube-system/kube-proxy-wj7fq"
Sep 4 17:31:15.108848 kubelet[2543]: I0904 17:31:15.108826 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cd1d19fc-16e8-4cb6-86b8-8997986e1264-bpf-maps\") pod \"cilium-hwxf8\" (UID: \"cd1d19fc-16e8-4cb6-86b8-8997986e1264\") " pod="kube-system/cilium-hwxf8"
Sep 4 17:31:15.108874 kubelet[2543]: I0904 17:31:15.108862 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd1d19fc-16e8-4cb6-86b8-8997986e1264-etc-cni-netd\") pod \"cilium-hwxf8\" (UID: \"cd1d19fc-16e8-4cb6-86b8-8997986e1264\") " pod="kube-system/cilium-hwxf8"
Sep 4 17:31:15.108920 kubelet[2543]: I0904 17:31:15.108901 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cd1d19fc-16e8-4cb6-86b8-8997986e1264-clustermesh-secrets\") pod \"cilium-hwxf8\" (UID: \"cd1d19fc-16e8-4cb6-86b8-8997986e1264\") " pod="kube-system/cilium-hwxf8"
Sep 4 17:31:15.380424 kubelet[2543]: E0904 17:31:15.380299 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:31:15.381095 containerd[1440]: time="2024-09-04T17:31:15.381049125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wj7fq,Uid:299b76b0-7b31-4864-aa28-a32b26b791e4,Namespace:kube-system,Attempt:0,}"
Sep 4 17:31:15.384696 kubelet[2543]: E0904 17:31:15.384676 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:31:15.385098 containerd[1440]: time="2024-09-04T17:31:15.385067207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hwxf8,Uid:cd1d19fc-16e8-4cb6-86b8-8997986e1264,Namespace:kube-system,Attempt:0,}"
Sep 4 17:31:15.570294 kubelet[2543]: I0904 17:31:15.570242 2543 topology_manager.go:215] "Topology Admit Handler" podUID="020c5a8a-fe81-4412-b8af-e84894a8c192" podNamespace="kube-system" podName="cilium-operator-5cc964979-v7mb5"
Sep 4 17:31:15.577810 systemd[1]: Created slice kubepods-besteffort-pod020c5a8a_fe81_4412_b8af_e84894a8c192.slice - libcontainer container kubepods-besteffort-pod020c5a8a_fe81_4412_b8af_e84894a8c192.slice.
Sep 4 17:31:15.613888 kubelet[2543]: I0904 17:31:15.613853 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/020c5a8a-fe81-4412-b8af-e84894a8c192-cilium-config-path\") pod \"cilium-operator-5cc964979-v7mb5\" (UID: \"020c5a8a-fe81-4412-b8af-e84894a8c192\") " pod="kube-system/cilium-operator-5cc964979-v7mb5"
Sep 4 17:31:15.614014 kubelet[2543]: I0904 17:31:15.613905 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rwxg\" (UniqueName: \"kubernetes.io/projected/020c5a8a-fe81-4412-b8af-e84894a8c192-kube-api-access-7rwxg\") pod \"cilium-operator-5cc964979-v7mb5\" (UID: \"020c5a8a-fe81-4412-b8af-e84894a8c192\") " pod="kube-system/cilium-operator-5cc964979-v7mb5"
Sep 4 17:31:15.880923 kubelet[2543]: E0904 17:31:15.880868 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:31:15.881709 containerd[1440]: time="2024-09-04T17:31:15.881190969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-v7mb5,Uid:020c5a8a-fe81-4412-b8af-e84894a8c192,Namespace:kube-system,Attempt:0,}"
Sep 4 17:31:15.947530 kubelet[2543]: E0904 17:31:15.947493 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:31:16.094784 containerd[1440]: time="2024-09-04T17:31:16.094674430Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:31:16.094784 containerd[1440]: time="2024-09-04T17:31:16.094737547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:31:16.094784 containerd[1440]: time="2024-09-04T17:31:16.094752946Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:31:16.094976 containerd[1440]: time="2024-09-04T17:31:16.094761983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:31:16.115981 systemd[1]: Started cri-containerd-5e9b394dbeeb5c929cd6d93503f4f9dc3edc7a3903ad7891934bb52e4cc4d833.scope - libcontainer container 5e9b394dbeeb5c929cd6d93503f4f9dc3edc7a3903ad7891934bb52e4cc4d833.
Sep 4 17:31:16.140674 containerd[1440]: time="2024-09-04T17:31:16.140317116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wj7fq,Uid:299b76b0-7b31-4864-aa28-a32b26b791e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e9b394dbeeb5c929cd6d93503f4f9dc3edc7a3903ad7891934bb52e4cc4d833\""
Sep 4 17:31:16.140901 kubelet[2543]: E0904 17:31:16.140878 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:31:16.143199 containerd[1440]: time="2024-09-04T17:31:16.143168842Z" level=info msg="CreateContainer within sandbox \"5e9b394dbeeb5c929cd6d93503f4f9dc3edc7a3903ad7891934bb52e4cc4d833\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 4 17:31:16.182574 containerd[1440]: time="2024-09-04T17:31:16.182457196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:31:16.182574 containerd[1440]: time="2024-09-04T17:31:16.182536555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:31:16.182574 containerd[1440]: time="2024-09-04T17:31:16.182553847Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:31:16.182574 containerd[1440]: time="2024-09-04T17:31:16.182567483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:31:16.202902 systemd[1]: Started cri-containerd-e326966df8180487f4ef3ebe8fcfa42b804295c95d3380fee46babbc100d83be.scope - libcontainer container e326966df8180487f4ef3ebe8fcfa42b804295c95d3380fee46babbc100d83be.
Sep 4 17:31:16.227990 containerd[1440]: time="2024-09-04T17:31:16.227938449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hwxf8,Uid:cd1d19fc-16e8-4cb6-86b8-8997986e1264,Namespace:kube-system,Attempt:0,} returns sandbox id \"e326966df8180487f4ef3ebe8fcfa42b804295c95d3380fee46babbc100d83be\""
Sep 4 17:31:16.228597 kubelet[2543]: E0904 17:31:16.228573 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:31:16.229575 containerd[1440]: time="2024-09-04T17:31:16.229516948Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 4 17:31:16.561624 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3831914710.mount: Deactivated successfully.
Sep 4 17:31:16.617987 containerd[1440]: time="2024-09-04T17:31:16.615110308Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:31:16.617987 containerd[1440]: time="2024-09-04T17:31:16.615821021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:31:16.617987 containerd[1440]: time="2024-09-04T17:31:16.615844615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:31:16.617987 containerd[1440]: time="2024-09-04T17:31:16.615855255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:31:16.633898 systemd[1]: Started cri-containerd-f29d9b928f2cb10881a66ea952fc35d8c62030b3df910d37a0a9285d2c8dd47c.scope - libcontainer container f29d9b928f2cb10881a66ea952fc35d8c62030b3df910d37a0a9285d2c8dd47c.
Sep 4 17:31:16.670663 containerd[1440]: time="2024-09-04T17:31:16.670610915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-v7mb5,Uid:020c5a8a-fe81-4412-b8af-e84894a8c192,Namespace:kube-system,Attempt:0,} returns sandbox id \"f29d9b928f2cb10881a66ea952fc35d8c62030b3df910d37a0a9285d2c8dd47c\""
Sep 4 17:31:16.671305 kubelet[2543]: E0904 17:31:16.671262 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:31:16.751927 containerd[1440]: time="2024-09-04T17:31:16.751876083Z" level=info msg="CreateContainer within sandbox \"5e9b394dbeeb5c929cd6d93503f4f9dc3edc7a3903ad7891934bb52e4cc4d833\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c9e47a183564cda2151eba67c9c4d91342a97a889a343852dd529bc3f2771857\""
Sep 4 17:31:16.752616 containerd[1440]: time="2024-09-04T17:31:16.752340584Z" level=info msg="StartContainer for \"c9e47a183564cda2151eba67c9c4d91342a97a889a343852dd529bc3f2771857\""
Sep 4 17:31:16.783902 systemd[1]: Started cri-containerd-c9e47a183564cda2151eba67c9c4d91342a97a889a343852dd529bc3f2771857.scope - libcontainer container c9e47a183564cda2151eba67c9c4d91342a97a889a343852dd529bc3f2771857.
Sep 4 17:31:16.881397 containerd[1440]: time="2024-09-04T17:31:16.881262784Z" level=info msg="StartContainer for \"c9e47a183564cda2151eba67c9c4d91342a97a889a343852dd529bc3f2771857\" returns successfully"
Sep 4 17:31:17.674456 kubelet[2543]: E0904 17:31:17.674371 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:31:17.691858 kubelet[2543]: I0904 17:31:17.691820 2543 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-wj7fq" podStartSLOduration=2.691759068 podStartE2EDuration="2.691759068s" podCreationTimestamp="2024-09-04 17:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:31:17.691665773 +0000 UTC m=+16.148438746" watchObservedRunningTime="2024-09-04 17:31:17.691759068 +0000 UTC m=+16.148532041"
Sep 4 17:31:18.677746 kubelet[2543]: E0904 17:31:18.677708 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:31:24.771957 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3986524440.mount: Deactivated successfully.
Sep 4 17:31:27.164939 containerd[1440]: time="2024-09-04T17:31:27.164866581Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:31:27.165718 containerd[1440]: time="2024-09-04T17:31:27.165640954Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735299"
Sep 4 17:31:27.166810 containerd[1440]: time="2024-09-04T17:31:27.166754602Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:31:27.168328 containerd[1440]: time="2024-09-04T17:31:27.168294621Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.938727099s"
Sep 4 17:31:27.168380 containerd[1440]: time="2024-09-04T17:31:27.168326882Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Sep 4 17:31:27.169121 containerd[1440]: time="2024-09-04T17:31:27.169088039Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 4 17:31:27.171494 containerd[1440]: time="2024-09-04T17:31:27.171445211Z" level=info msg="CreateContainer within sandbox \"e326966df8180487f4ef3ebe8fcfa42b804295c95d3380fee46babbc100d83be\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 4 17:31:27.187490 containerd[1440]: time="2024-09-04T17:31:27.187451104Z" level=info msg="CreateContainer within sandbox \"e326966df8180487f4ef3ebe8fcfa42b804295c95d3380fee46babbc100d83be\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"000cc5fecb8d3711c109384a910d297420b314c83412e76235eb097f97e4ee6e\""
Sep 4 17:31:27.188013 containerd[1440]: time="2024-09-04T17:31:27.187983724Z" level=info msg="StartContainer for \"000cc5fecb8d3711c109384a910d297420b314c83412e76235eb097f97e4ee6e\""
Sep 4 17:31:27.224912 systemd[1]: Started cri-containerd-000cc5fecb8d3711c109384a910d297420b314c83412e76235eb097f97e4ee6e.scope - libcontainer container 000cc5fecb8d3711c109384a910d297420b314c83412e76235eb097f97e4ee6e.
Sep 4 17:31:27.253831 containerd[1440]: time="2024-09-04T17:31:27.253654516Z" level=info msg="StartContainer for \"000cc5fecb8d3711c109384a910d297420b314c83412e76235eb097f97e4ee6e\" returns successfully"
Sep 4 17:31:27.273034 systemd[1]: Started sshd@7-10.0.0.161:22-10.0.0.1:40532.service - OpenSSH per-connection server daemon (10.0.0.1:40532).
Sep 4 17:31:27.273390 systemd[1]: cri-containerd-000cc5fecb8d3711c109384a910d297420b314c83412e76235eb097f97e4ee6e.scope: Deactivated successfully.
Sep 4 17:31:27.322186 sshd[2989]: Accepted publickey for core from 10.0.0.1 port 40532 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw
Sep 4 17:31:27.323811 sshd[2989]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:31:27.328010 systemd-logind[1428]: New session 8 of user core.
Sep 4 17:31:27.338887 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 4 17:31:27.674645 sshd[2989]: pam_unix(sshd:session): session closed for user core
Sep 4 17:31:27.679237 systemd[1]: sshd@7-10.0.0.161:22-10.0.0.1:40532.service: Deactivated successfully.
Sep 4 17:31:27.681437 systemd[1]: session-8.scope: Deactivated successfully.
Sep 4 17:31:27.682244 systemd-logind[1428]: Session 8 logged out. Waiting for processes to exit.
Sep 4 17:31:27.683191 systemd-logind[1428]: Removed session 8.
Sep 4 17:31:27.693134 kubelet[2543]: E0904 17:31:27.693102 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:31:27.706571 containerd[1440]: time="2024-09-04T17:31:27.706504974Z" level=info msg="shim disconnected" id=000cc5fecb8d3711c109384a910d297420b314c83412e76235eb097f97e4ee6e namespace=k8s.io
Sep 4 17:31:27.706571 containerd[1440]: time="2024-09-04T17:31:27.706566870Z" level=warning msg="cleaning up after shim disconnected" id=000cc5fecb8d3711c109384a910d297420b314c83412e76235eb097f97e4ee6e namespace=k8s.io
Sep 4 17:31:27.706719 containerd[1440]: time="2024-09-04T17:31:27.706577690Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:31:28.182357 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-000cc5fecb8d3711c109384a910d297420b314c83412e76235eb097f97e4ee6e-rootfs.mount: Deactivated successfully.
Sep 4 17:31:28.695550 kubelet[2543]: E0904 17:31:28.695391 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:31:28.697488 containerd[1440]: time="2024-09-04T17:31:28.697426469Z" level=info msg="CreateContainer within sandbox \"e326966df8180487f4ef3ebe8fcfa42b804295c95d3380fee46babbc100d83be\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 4 17:31:28.791253 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount726043058.mount: Deactivated successfully.
Sep 4 17:31:28.793428 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount785924766.mount: Deactivated successfully.
Sep 4 17:31:28.793536 containerd[1440]: time="2024-09-04T17:31:28.793489466Z" level=info msg="CreateContainer within sandbox \"e326966df8180487f4ef3ebe8fcfa42b804295c95d3380fee46babbc100d83be\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6df38c288ec7fb6cd72e0ff49d8598de10454fee0c9f139644655f7a92d71665\""
Sep 4 17:31:28.794580 containerd[1440]: time="2024-09-04T17:31:28.794212562Z" level=info msg="StartContainer for \"6df38c288ec7fb6cd72e0ff49d8598de10454fee0c9f139644655f7a92d71665\""
Sep 4 17:31:28.821954 systemd[1]: Started cri-containerd-6df38c288ec7fb6cd72e0ff49d8598de10454fee0c9f139644655f7a92d71665.scope - libcontainer container 6df38c288ec7fb6cd72e0ff49d8598de10454fee0c9f139644655f7a92d71665.
Sep 4 17:31:28.853495 containerd[1440]: time="2024-09-04T17:31:28.853453377Z" level=info msg="StartContainer for \"6df38c288ec7fb6cd72e0ff49d8598de10454fee0c9f139644655f7a92d71665\" returns successfully"
Sep 4 17:31:28.867873 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 17:31:28.869856 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:31:28.869950 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Sep 4 17:31:28.878239 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 17:31:28.878509 systemd[1]: cri-containerd-6df38c288ec7fb6cd72e0ff49d8598de10454fee0c9f139644655f7a92d71665.scope: Deactivated successfully.
Sep 4 17:31:28.900069 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:31:28.936378 containerd[1440]: time="2024-09-04T17:31:28.936235532Z" level=info msg="shim disconnected" id=6df38c288ec7fb6cd72e0ff49d8598de10454fee0c9f139644655f7a92d71665 namespace=k8s.io
Sep 4 17:31:28.936378 containerd[1440]: time="2024-09-04T17:31:28.936289813Z" level=warning msg="cleaning up after shim disconnected" id=6df38c288ec7fb6cd72e0ff49d8598de10454fee0c9f139644655f7a92d71665 namespace=k8s.io
Sep 4 17:31:28.936378 containerd[1440]: time="2024-09-04T17:31:28.936300664Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:31:29.104559 containerd[1440]: time="2024-09-04T17:31:29.104493470Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:31:29.105140 containerd[1440]: time="2024-09-04T17:31:29.105101551Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907225"
Sep 4 17:31:29.106232 containerd[1440]: time="2024-09-04T17:31:29.106189823Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:31:29.107533 containerd[1440]: time="2024-09-04T17:31:29.107487135Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.938370943s"
Sep 4 17:31:29.107533 containerd[1440]: time="2024-09-04T17:31:29.107524185Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Sep 4 17:31:29.109215 containerd[1440]: time="2024-09-04T17:31:29.109186794Z" level=info msg="CreateContainer within sandbox \"f29d9b928f2cb10881a66ea952fc35d8c62030b3df910d37a0a9285d2c8dd47c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 4 17:31:29.121695 containerd[1440]: time="2024-09-04T17:31:29.121654052Z" level=info msg="CreateContainer within sandbox \"f29d9b928f2cb10881a66ea952fc35d8c62030b3df910d37a0a9285d2c8dd47c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"5a01a74f4019d821c81b4e0fb9870bbb2f6e4d7cbb12cade3d5456f4eb6f4862\""
Sep 4 17:31:29.122151 containerd[1440]: time="2024-09-04T17:31:29.122113363Z" level=info msg="StartContainer for \"5a01a74f4019d821c81b4e0fb9870bbb2f6e4d7cbb12cade3d5456f4eb6f4862\""
Sep 4 17:31:29.151930 systemd[1]: Started cri-containerd-5a01a74f4019d821c81b4e0fb9870bbb2f6e4d7cbb12cade3d5456f4eb6f4862.scope - libcontainer container 5a01a74f4019d821c81b4e0fb9870bbb2f6e4d7cbb12cade3d5456f4eb6f4862.
Sep 4 17:31:29.178275 containerd[1440]: time="2024-09-04T17:31:29.178238216Z" level=info msg="StartContainer for \"5a01a74f4019d821c81b4e0fb9870bbb2f6e4d7cbb12cade3d5456f4eb6f4862\" returns successfully"
Sep 4 17:31:29.183899 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6df38c288ec7fb6cd72e0ff49d8598de10454fee0c9f139644655f7a92d71665-rootfs.mount: Deactivated successfully.
Sep 4 17:31:29.708491 kubelet[2543]: E0904 17:31:29.708461 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:31:29.711026 kubelet[2543]: E0904 17:31:29.710625 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:31:29.712496 containerd[1440]: time="2024-09-04T17:31:29.712366618Z" level=info msg="CreateContainer within sandbox \"e326966df8180487f4ef3ebe8fcfa42b804295c95d3380fee46babbc100d83be\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 4 17:31:29.720216 kubelet[2543]: I0904 17:31:29.719963 2543 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-v7mb5" podStartSLOduration=2.283833638 podStartE2EDuration="14.719928293s" podCreationTimestamp="2024-09-04 17:31:15 +0000 UTC" firstStartedPulling="2024-09-04 17:31:16.671686412 +0000 UTC m=+15.128459385" lastFinishedPulling="2024-09-04 17:31:29.107781067 +0000 UTC m=+27.564554040" observedRunningTime="2024-09-04 17:31:29.719452842 +0000 UTC m=+28.176225815" watchObservedRunningTime="2024-09-04 17:31:29.719928293 +0000 UTC m=+28.176701256"
Sep 4 17:31:29.745198 containerd[1440]: time="2024-09-04T17:31:29.745081337Z" level=info msg="CreateContainer within sandbox \"e326966df8180487f4ef3ebe8fcfa42b804295c95d3380fee46babbc100d83be\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"50ff903987e9f19ad63fe6e9e97accd1af9ad49fc5b22906715d6d7b831e61da\""
Sep 4 17:31:29.745756 containerd[1440]: time="2024-09-04T17:31:29.745699878Z" level=info msg="StartContainer for \"50ff903987e9f19ad63fe6e9e97accd1af9ad49fc5b22906715d6d7b831e61da\""
Sep 4 17:31:29.791196 systemd[1]: Started cri-containerd-50ff903987e9f19ad63fe6e9e97accd1af9ad49fc5b22906715d6d7b831e61da.scope - libcontainer container 50ff903987e9f19ad63fe6e9e97accd1af9ad49fc5b22906715d6d7b831e61da.
Sep 4 17:31:29.840875 systemd[1]: cri-containerd-50ff903987e9f19ad63fe6e9e97accd1af9ad49fc5b22906715d6d7b831e61da.scope: Deactivated successfully.
Sep 4 17:31:29.915514 containerd[1440]: time="2024-09-04T17:31:29.915435161Z" level=info msg="StartContainer for \"50ff903987e9f19ad63fe6e9e97accd1af9ad49fc5b22906715d6d7b831e61da\" returns successfully"
Sep 4 17:31:29.950430 containerd[1440]: time="2024-09-04T17:31:29.950360928Z" level=info msg="shim disconnected" id=50ff903987e9f19ad63fe6e9e97accd1af9ad49fc5b22906715d6d7b831e61da namespace=k8s.io
Sep 4 17:31:29.950430 containerd[1440]: time="2024-09-04T17:31:29.950422644Z" level=warning msg="cleaning up after shim disconnected" id=50ff903987e9f19ad63fe6e9e97accd1af9ad49fc5b22906715d6d7b831e61da namespace=k8s.io
Sep 4 17:31:29.950430 containerd[1440]: time="2024-09-04T17:31:29.950432362Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:31:29.964424 containerd[1440]: time="2024-09-04T17:31:29.964318991Z" level=warning msg="cleanup warnings time=\"2024-09-04T17:31:29Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 4 17:31:30.182554 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50ff903987e9f19ad63fe6e9e97accd1af9ad49fc5b22906715d6d7b831e61da-rootfs.mount: Deactivated successfully.
Sep 4 17:31:30.714466 kubelet[2543]: E0904 17:31:30.714427 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:31:30.715354 kubelet[2543]: E0904 17:31:30.714518 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:31:30.716759 containerd[1440]: time="2024-09-04T17:31:30.716715489Z" level=info msg="CreateContainer within sandbox \"e326966df8180487f4ef3ebe8fcfa42b804295c95d3380fee46babbc100d83be\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 4 17:31:30.734162 containerd[1440]: time="2024-09-04T17:31:30.734096495Z" level=info msg="CreateContainer within sandbox \"e326966df8180487f4ef3ebe8fcfa42b804295c95d3380fee46babbc100d83be\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5bd365d183c4d1a84dc7c5e819b4c81a3454227ea477198eadab88238e655485\""
Sep 4 17:31:30.734672 containerd[1440]: time="2024-09-04T17:31:30.734639774Z" level=info msg="StartContainer for \"5bd365d183c4d1a84dc7c5e819b4c81a3454227ea477198eadab88238e655485\""
Sep 4 17:31:30.766914 systemd[1]: Started cri-containerd-5bd365d183c4d1a84dc7c5e819b4c81a3454227ea477198eadab88238e655485.scope - libcontainer container 5bd365d183c4d1a84dc7c5e819b4c81a3454227ea477198eadab88238e655485.
Sep 4 17:31:30.792917 systemd[1]: cri-containerd-5bd365d183c4d1a84dc7c5e819b4c81a3454227ea477198eadab88238e655485.scope: Deactivated successfully.
Sep 4 17:31:30.794878 containerd[1440]: time="2024-09-04T17:31:30.794833895Z" level=info msg="StartContainer for \"5bd365d183c4d1a84dc7c5e819b4c81a3454227ea477198eadab88238e655485\" returns successfully" Sep 4 17:31:30.819614 containerd[1440]: time="2024-09-04T17:31:30.819540072Z" level=info msg="shim disconnected" id=5bd365d183c4d1a84dc7c5e819b4c81a3454227ea477198eadab88238e655485 namespace=k8s.io Sep 4 17:31:30.819614 containerd[1440]: time="2024-09-04T17:31:30.819606477Z" level=warning msg="cleaning up after shim disconnected" id=5bd365d183c4d1a84dc7c5e819b4c81a3454227ea477198eadab88238e655485 namespace=k8s.io Sep 4 17:31:30.819614 containerd[1440]: time="2024-09-04T17:31:30.819616075Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:31:31.183045 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5bd365d183c4d1a84dc7c5e819b4c81a3454227ea477198eadab88238e655485-rootfs.mount: Deactivated successfully. Sep 4 17:31:31.718094 kubelet[2543]: E0904 17:31:31.718049 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:31:31.722201 containerd[1440]: time="2024-09-04T17:31:31.722146866Z" level=info msg="CreateContainer within sandbox \"e326966df8180487f4ef3ebe8fcfa42b804295c95d3380fee46babbc100d83be\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 4 17:31:31.932099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount297786475.mount: Deactivated successfully. 
Sep 4 17:31:32.082345 containerd[1440]: time="2024-09-04T17:31:32.082276860Z" level=info msg="CreateContainer within sandbox \"e326966df8180487f4ef3ebe8fcfa42b804295c95d3380fee46babbc100d83be\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"dce660f267d4d3cd7080ef23a1a91c0a81dd2e602eeedc320b1d34c5c8e1d080\"" Sep 4 17:31:32.082915 containerd[1440]: time="2024-09-04T17:31:32.082872077Z" level=info msg="StartContainer for \"dce660f267d4d3cd7080ef23a1a91c0a81dd2e602eeedc320b1d34c5c8e1d080\"" Sep 4 17:31:32.115923 systemd[1]: Started cri-containerd-dce660f267d4d3cd7080ef23a1a91c0a81dd2e602eeedc320b1d34c5c8e1d080.scope - libcontainer container dce660f267d4d3cd7080ef23a1a91c0a81dd2e602eeedc320b1d34c5c8e1d080. Sep 4 17:31:32.147347 containerd[1440]: time="2024-09-04T17:31:32.147260553Z" level=info msg="StartContainer for \"dce660f267d4d3cd7080ef23a1a91c0a81dd2e602eeedc320b1d34c5c8e1d080\" returns successfully" Sep 4 17:31:32.221893 kubelet[2543]: I0904 17:31:32.221850 2543 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Sep 4 17:31:32.239163 kubelet[2543]: I0904 17:31:32.238142 2543 topology_manager.go:215] "Topology Admit Handler" podUID="ca1db657-e98c-4009-95c1-a4be477a7dc2" podNamespace="kube-system" podName="coredns-76f75df574-4c8zw" Sep 4 17:31:32.240703 kubelet[2543]: I0904 17:31:32.240396 2543 topology_manager.go:215] "Topology Admit Handler" podUID="00a261b0-9166-4a0c-81dd-a0d39d4f200f" podNamespace="kube-system" podName="coredns-76f75df574-w6lrs" Sep 4 17:31:32.250056 systemd[1]: Created slice kubepods-burstable-podca1db657_e98c_4009_95c1_a4be477a7dc2.slice - libcontainer container kubepods-burstable-podca1db657_e98c_4009_95c1_a4be477a7dc2.slice. Sep 4 17:31:32.258127 systemd[1]: Created slice kubepods-burstable-pod00a261b0_9166_4a0c_81dd_a0d39d4f200f.slice - libcontainer container kubepods-burstable-pod00a261b0_9166_4a0c_81dd_a0d39d4f200f.slice. 
Sep 4 17:31:32.408646 kubelet[2543]: I0904 17:31:32.408415 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca1db657-e98c-4009-95c1-a4be477a7dc2-config-volume\") pod \"coredns-76f75df574-4c8zw\" (UID: \"ca1db657-e98c-4009-95c1-a4be477a7dc2\") " pod="kube-system/coredns-76f75df574-4c8zw" Sep 4 17:31:32.408646 kubelet[2543]: I0904 17:31:32.408469 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/00a261b0-9166-4a0c-81dd-a0d39d4f200f-config-volume\") pod \"coredns-76f75df574-w6lrs\" (UID: \"00a261b0-9166-4a0c-81dd-a0d39d4f200f\") " pod="kube-system/coredns-76f75df574-w6lrs" Sep 4 17:31:32.408646 kubelet[2543]: I0904 17:31:32.408491 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-898cx\" (UniqueName: \"kubernetes.io/projected/00a261b0-9166-4a0c-81dd-a0d39d4f200f-kube-api-access-898cx\") pod \"coredns-76f75df574-w6lrs\" (UID: \"00a261b0-9166-4a0c-81dd-a0d39d4f200f\") " pod="kube-system/coredns-76f75df574-w6lrs" Sep 4 17:31:32.408646 kubelet[2543]: I0904 17:31:32.408512 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrl5j\" (UniqueName: \"kubernetes.io/projected/ca1db657-e98c-4009-95c1-a4be477a7dc2-kube-api-access-rrl5j\") pod \"coredns-76f75df574-4c8zw\" (UID: \"ca1db657-e98c-4009-95c1-a4be477a7dc2\") " pod="kube-system/coredns-76f75df574-4c8zw" Sep 4 17:31:32.555171 kubelet[2543]: E0904 17:31:32.555140 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:31:32.555704 containerd[1440]: time="2024-09-04T17:31:32.555662051Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-4c8zw,Uid:ca1db657-e98c-4009-95c1-a4be477a7dc2,Namespace:kube-system,Attempt:0,}" Sep 4 17:31:32.564257 kubelet[2543]: E0904 17:31:32.564203 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:31:32.565047 containerd[1440]: time="2024-09-04T17:31:32.564788453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-w6lrs,Uid:00a261b0-9166-4a0c-81dd-a0d39d4f200f,Namespace:kube-system,Attempt:0,}" Sep 4 17:31:32.684241 systemd[1]: Started sshd@8-10.0.0.161:22-10.0.0.1:40542.service - OpenSSH per-connection server daemon (10.0.0.1:40542). Sep 4 17:31:32.722729 kubelet[2543]: E0904 17:31:32.722668 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:31:32.731064 sshd[3409]: Accepted publickey for core from 10.0.0.1 port 40542 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:31:32.733084 sshd[3409]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:31:32.739696 systemd-logind[1428]: New session 9 of user core. Sep 4 17:31:32.741530 kubelet[2543]: I0904 17:31:32.740192 2543 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-hwxf8" podStartSLOduration=6.800461507 podStartE2EDuration="17.740152533s" podCreationTimestamp="2024-09-04 17:31:15 +0000 UTC" firstStartedPulling="2024-09-04 17:31:16.229088144 +0000 UTC m=+14.685861117" lastFinishedPulling="2024-09-04 17:31:27.16877917 +0000 UTC m=+25.625552143" observedRunningTime="2024-09-04 17:31:32.739111269 +0000 UTC m=+31.195884253" watchObservedRunningTime="2024-09-04 17:31:32.740152533 +0000 UTC m=+31.196925506" Sep 4 17:31:32.746946 systemd[1]: Started session-9.scope - Session 9 of User core. 
Sep 4 17:31:32.890390 sshd[3409]: pam_unix(sshd:session): session closed for user core Sep 4 17:31:32.894805 systemd[1]: sshd@8-10.0.0.161:22-10.0.0.1:40542.service: Deactivated successfully. Sep 4 17:31:32.896928 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 17:31:32.897635 systemd-logind[1428]: Session 9 logged out. Waiting for processes to exit. Sep 4 17:31:32.898607 systemd-logind[1428]: Removed session 9. Sep 4 17:31:33.724366 kubelet[2543]: E0904 17:31:33.724327 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:31:34.408499 systemd-networkd[1374]: cilium_host: Link UP Sep 4 17:31:34.408764 systemd-networkd[1374]: cilium_net: Link UP Sep 4 17:31:34.409067 systemd-networkd[1374]: cilium_net: Gained carrier Sep 4 17:31:34.409332 systemd-networkd[1374]: cilium_host: Gained carrier Sep 4 17:31:34.532280 systemd-networkd[1374]: cilium_vxlan: Link UP Sep 4 17:31:34.532293 systemd-networkd[1374]: cilium_vxlan: Gained carrier Sep 4 17:31:34.629917 systemd-networkd[1374]: cilium_net: Gained IPv6LL Sep 4 17:31:34.725741 kubelet[2543]: E0904 17:31:34.725600 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:31:34.740815 kernel: NET: Registered PF_ALG protocol family Sep 4 17:31:35.166937 systemd-networkd[1374]: cilium_host: Gained IPv6LL Sep 4 17:31:35.418629 systemd-networkd[1374]: lxc_health: Link UP Sep 4 17:31:35.431204 systemd-networkd[1374]: lxc_health: Gained carrier Sep 4 17:31:35.637682 systemd-networkd[1374]: lxc73c7b31723a8: Link UP Sep 4 17:31:35.644807 kernel: eth0: renamed from tmpa494c Sep 4 17:31:35.650281 systemd-networkd[1374]: lxc8dee294e1c8b: Link UP Sep 4 17:31:35.657470 systemd-networkd[1374]: lxc73c7b31723a8: Gained carrier Sep 4 17:31:35.658725 kernel: eth0: renamed 
from tmp31464 Sep 4 17:31:35.668402 systemd-networkd[1374]: lxc8dee294e1c8b: Gained carrier Sep 4 17:31:36.381967 systemd-networkd[1374]: cilium_vxlan: Gained IPv6LL Sep 4 17:31:36.701930 systemd-networkd[1374]: lxc8dee294e1c8b: Gained IPv6LL Sep 4 17:31:36.765883 systemd-networkd[1374]: lxc73c7b31723a8: Gained IPv6LL Sep 4 17:31:36.978073 kubelet[2543]: E0904 17:31:36.977944 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:31:37.277893 systemd-networkd[1374]: lxc_health: Gained IPv6LL Sep 4 17:31:37.732356 kubelet[2543]: E0904 17:31:37.731453 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:31:37.904090 systemd[1]: Started sshd@9-10.0.0.161:22-10.0.0.1:47080.service - OpenSSH per-connection server daemon (10.0.0.1:47080). Sep 4 17:31:37.958665 sshd[3805]: Accepted publickey for core from 10.0.0.1 port 47080 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:31:37.960295 sshd[3805]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:31:37.964279 systemd-logind[1428]: New session 10 of user core. Sep 4 17:31:37.970912 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 4 17:31:38.098304 sshd[3805]: pam_unix(sshd:session): session closed for user core Sep 4 17:31:38.102837 systemd[1]: sshd@9-10.0.0.161:22-10.0.0.1:47080.service: Deactivated successfully. Sep 4 17:31:38.105283 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 17:31:38.106104 systemd-logind[1428]: Session 10 logged out. Waiting for processes to exit. Sep 4 17:31:38.107054 systemd-logind[1428]: Removed session 10. 
Sep 4 17:31:38.733374 kubelet[2543]: E0904 17:31:38.733321 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:31:38.948672 containerd[1440]: time="2024-09-04T17:31:38.948444173Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:31:38.948672 containerd[1440]: time="2024-09-04T17:31:38.948501731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:31:38.948672 containerd[1440]: time="2024-09-04T17:31:38.948520787Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:31:38.948672 containerd[1440]: time="2024-09-04T17:31:38.948534753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:31:38.950136 containerd[1440]: time="2024-09-04T17:31:38.949920383Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:31:38.950237 containerd[1440]: time="2024-09-04T17:31:38.950193315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:31:38.950352 containerd[1440]: time="2024-09-04T17:31:38.950228130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:31:38.950352 containerd[1440]: time="2024-09-04T17:31:38.950327697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:31:38.980903 systemd[1]: Started cri-containerd-31464eea19fdcd50cbc91f304ae62b2b424e5b97411a11fadc134567ac05cab7.scope - libcontainer container 31464eea19fdcd50cbc91f304ae62b2b424e5b97411a11fadc134567ac05cab7. Sep 4 17:31:38.982534 systemd[1]: Started cri-containerd-a494c69a2e8ef1bf8a74b337e05208acdc8fa3f35538191ad2905a9f60b283e7.scope - libcontainer container a494c69a2e8ef1bf8a74b337e05208acdc8fa3f35538191ad2905a9f60b283e7. Sep 4 17:31:38.994603 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:31:38.996384 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:31:39.024341 containerd[1440]: time="2024-09-04T17:31:39.024203369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-w6lrs,Uid:00a261b0-9166-4a0c-81dd-a0d39d4f200f,Namespace:kube-system,Attempt:0,} returns sandbox id \"31464eea19fdcd50cbc91f304ae62b2b424e5b97411a11fadc134567ac05cab7\"" Sep 4 17:31:39.024994 containerd[1440]: time="2024-09-04T17:31:39.024419244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4c8zw,Uid:ca1db657-e98c-4009-95c1-a4be477a7dc2,Namespace:kube-system,Attempt:0,} returns sandbox id \"a494c69a2e8ef1bf8a74b337e05208acdc8fa3f35538191ad2905a9f60b283e7\"" Sep 4 17:31:39.025411 kubelet[2543]: E0904 17:31:39.025379 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:31:39.025632 kubelet[2543]: E0904 17:31:39.025611 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:31:39.028662 containerd[1440]: time="2024-09-04T17:31:39.028548853Z" level=info 
msg="CreateContainer within sandbox \"a494c69a2e8ef1bf8a74b337e05208acdc8fa3f35538191ad2905a9f60b283e7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:31:39.028794 containerd[1440]: time="2024-09-04T17:31:39.028555325Z" level=info msg="CreateContainer within sandbox \"31464eea19fdcd50cbc91f304ae62b2b424e5b97411a11fadc134567ac05cab7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:31:39.047382 containerd[1440]: time="2024-09-04T17:31:39.047317328Z" level=info msg="CreateContainer within sandbox \"31464eea19fdcd50cbc91f304ae62b2b424e5b97411a11fadc134567ac05cab7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c450cfaaecb445aab20c1a31be1f501849c378044aad623f2107e2a140fa091f\"" Sep 4 17:31:39.047860 containerd[1440]: time="2024-09-04T17:31:39.047759718Z" level=info msg="StartContainer for \"c450cfaaecb445aab20c1a31be1f501849c378044aad623f2107e2a140fa091f\"" Sep 4 17:31:39.054255 containerd[1440]: time="2024-09-04T17:31:39.054216193Z" level=info msg="CreateContainer within sandbox \"a494c69a2e8ef1bf8a74b337e05208acdc8fa3f35538191ad2905a9f60b283e7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"088add9cb5da85962fe5cffeb5de308354e778118ad21655848840a4c54ceab0\"" Sep 4 17:31:39.054626 containerd[1440]: time="2024-09-04T17:31:39.054591267Z" level=info msg="StartContainer for \"088add9cb5da85962fe5cffeb5de308354e778118ad21655848840a4c54ceab0\"" Sep 4 17:31:39.080906 systemd[1]: Started cri-containerd-c450cfaaecb445aab20c1a31be1f501849c378044aad623f2107e2a140fa091f.scope - libcontainer container c450cfaaecb445aab20c1a31be1f501849c378044aad623f2107e2a140fa091f. Sep 4 17:31:39.083881 systemd[1]: Started cri-containerd-088add9cb5da85962fe5cffeb5de308354e778118ad21655848840a4c54ceab0.scope - libcontainer container 088add9cb5da85962fe5cffeb5de308354e778118ad21655848840a4c54ceab0. 
Sep 4 17:31:39.113041 containerd[1440]: time="2024-09-04T17:31:39.112990071Z" level=info msg="StartContainer for \"c450cfaaecb445aab20c1a31be1f501849c378044aad623f2107e2a140fa091f\" returns successfully" Sep 4 17:31:39.113305 containerd[1440]: time="2024-09-04T17:31:39.113122069Z" level=info msg="StartContainer for \"088add9cb5da85962fe5cffeb5de308354e778118ad21655848840a4c54ceab0\" returns successfully" Sep 4 17:31:39.740801 kubelet[2543]: E0904 17:31:39.740739 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:31:39.743263 kubelet[2543]: E0904 17:31:39.743198 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:31:39.748101 kubelet[2543]: I0904 17:31:39.748069 2543 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-w6lrs" podStartSLOduration=24.748030014 podStartE2EDuration="24.748030014s" podCreationTimestamp="2024-09-04 17:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:31:39.74731398 +0000 UTC m=+38.204086953" watchObservedRunningTime="2024-09-04 17:31:39.748030014 +0000 UTC m=+38.204802987" Sep 4 17:31:39.756710 kubelet[2543]: I0904 17:31:39.756660 2543 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-4c8zw" podStartSLOduration=24.756614602 podStartE2EDuration="24.756614602s" podCreationTimestamp="2024-09-04 17:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:31:39.755581694 +0000 UTC m=+38.212354687" watchObservedRunningTime="2024-09-04 17:31:39.756614602 +0000 UTC 
m=+38.213387575" Sep 4 17:31:40.745466 kubelet[2543]: E0904 17:31:40.745423 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:31:40.745947 kubelet[2543]: E0904 17:31:40.745738 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:31:41.747571 kubelet[2543]: E0904 17:31:41.747522 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:31:41.748133 kubelet[2543]: E0904 17:31:41.747538 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:31:43.111410 systemd[1]: Started sshd@10-10.0.0.161:22-10.0.0.1:47096.service - OpenSSH per-connection server daemon (10.0.0.1:47096). Sep 4 17:31:43.153463 sshd[3994]: Accepted publickey for core from 10.0.0.1 port 47096 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:31:43.155587 sshd[3994]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:31:43.160162 systemd-logind[1428]: New session 11 of user core. Sep 4 17:31:43.167951 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 17:31:43.396614 sshd[3994]: pam_unix(sshd:session): session closed for user core Sep 4 17:31:43.407102 systemd[1]: sshd@10-10.0.0.161:22-10.0.0.1:47096.service: Deactivated successfully. Sep 4 17:31:43.409481 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 17:31:43.411357 systemd-logind[1428]: Session 11 logged out. Waiting for processes to exit. 
Sep 4 17:31:43.421048 systemd[1]: Started sshd@11-10.0.0.161:22-10.0.0.1:47104.service - OpenSSH per-connection server daemon (10.0.0.1:47104). Sep 4 17:31:43.422014 systemd-logind[1428]: Removed session 11. Sep 4 17:31:43.455426 sshd[4009]: Accepted publickey for core from 10.0.0.1 port 47104 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:31:43.457254 sshd[4009]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:31:43.461566 systemd-logind[1428]: New session 12 of user core. Sep 4 17:31:43.472930 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 17:31:43.628778 sshd[4009]: pam_unix(sshd:session): session closed for user core Sep 4 17:31:43.640603 systemd[1]: sshd@11-10.0.0.161:22-10.0.0.1:47104.service: Deactivated successfully. Sep 4 17:31:43.642430 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 17:31:43.644243 systemd-logind[1428]: Session 12 logged out. Waiting for processes to exit. Sep 4 17:31:43.654026 systemd[1]: Started sshd@12-10.0.0.161:22-10.0.0.1:47120.service - OpenSSH per-connection server daemon (10.0.0.1:47120). Sep 4 17:31:43.655041 systemd-logind[1428]: Removed session 12. Sep 4 17:31:43.691785 sshd[4023]: Accepted publickey for core from 10.0.0.1 port 47120 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:31:43.693221 sshd[4023]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:31:43.697505 systemd-logind[1428]: New session 13 of user core. Sep 4 17:31:43.704892 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 17:31:43.823023 sshd[4023]: pam_unix(sshd:session): session closed for user core Sep 4 17:31:43.827584 systemd[1]: sshd@12-10.0.0.161:22-10.0.0.1:47120.service: Deactivated successfully. Sep 4 17:31:43.830127 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 17:31:43.830796 systemd-logind[1428]: Session 13 logged out. Waiting for processes to exit. 
Sep 4 17:31:43.831861 systemd-logind[1428]: Removed session 13. Sep 4 17:31:48.835033 systemd[1]: Started sshd@13-10.0.0.161:22-10.0.0.1:50572.service - OpenSSH per-connection server daemon (10.0.0.1:50572). Sep 4 17:31:48.874496 sshd[4039]: Accepted publickey for core from 10.0.0.1 port 50572 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:31:48.876206 sshd[4039]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:31:48.879931 systemd-logind[1428]: New session 14 of user core. Sep 4 17:31:48.890894 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 17:31:49.004627 sshd[4039]: pam_unix(sshd:session): session closed for user core Sep 4 17:31:49.009586 systemd[1]: sshd@13-10.0.0.161:22-10.0.0.1:50572.service: Deactivated successfully. Sep 4 17:31:49.012147 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 17:31:49.012865 systemd-logind[1428]: Session 14 logged out. Waiting for processes to exit. Sep 4 17:31:49.013803 systemd-logind[1428]: Removed session 14. Sep 4 17:31:54.016079 systemd[1]: Started sshd@14-10.0.0.161:22-10.0.0.1:50576.service - OpenSSH per-connection server daemon (10.0.0.1:50576). Sep 4 17:31:54.055463 sshd[4053]: Accepted publickey for core from 10.0.0.1 port 50576 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:31:54.057206 sshd[4053]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:31:54.061278 systemd-logind[1428]: New session 15 of user core. Sep 4 17:31:54.068039 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 17:31:54.172797 sshd[4053]: pam_unix(sshd:session): session closed for user core Sep 4 17:31:54.183728 systemd[1]: sshd@14-10.0.0.161:22-10.0.0.1:50576.service: Deactivated successfully. Sep 4 17:31:54.185687 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 17:31:54.187436 systemd-logind[1428]: Session 15 logged out. Waiting for processes to exit. 
Sep 4 17:31:54.197077 systemd[1]: Started sshd@15-10.0.0.161:22-10.0.0.1:50592.service - OpenSSH per-connection server daemon (10.0.0.1:50592). Sep 4 17:31:54.197923 systemd-logind[1428]: Removed session 15. Sep 4 17:31:54.231791 sshd[4068]: Accepted publickey for core from 10.0.0.1 port 50592 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:31:54.233274 sshd[4068]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:31:54.237077 systemd-logind[1428]: New session 16 of user core. Sep 4 17:31:54.246907 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 4 17:31:54.525356 sshd[4068]: pam_unix(sshd:session): session closed for user core Sep 4 17:31:54.537101 systemd[1]: sshd@15-10.0.0.161:22-10.0.0.1:50592.service: Deactivated successfully. Sep 4 17:31:54.539164 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 17:31:54.541069 systemd-logind[1428]: Session 16 logged out. Waiting for processes to exit. Sep 4 17:31:54.550098 systemd[1]: Started sshd@16-10.0.0.161:22-10.0.0.1:50598.service - OpenSSH per-connection server daemon (10.0.0.1:50598). Sep 4 17:31:54.551248 systemd-logind[1428]: Removed session 16. Sep 4 17:31:54.588832 sshd[4080]: Accepted publickey for core from 10.0.0.1 port 50598 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:31:54.590340 sshd[4080]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:31:54.594400 systemd-logind[1428]: New session 17 of user core. Sep 4 17:31:54.610892 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 17:31:55.846257 sshd[4080]: pam_unix(sshd:session): session closed for user core Sep 4 17:31:55.855324 systemd[1]: sshd@16-10.0.0.161:22-10.0.0.1:50598.service: Deactivated successfully. Sep 4 17:31:55.859634 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 17:31:55.861994 systemd-logind[1428]: Session 17 logged out. Waiting for processes to exit. 
Sep 4 17:31:55.870314 systemd[1]: Started sshd@17-10.0.0.161:22-10.0.0.1:50614.service - OpenSSH per-connection server daemon (10.0.0.1:50614). Sep 4 17:31:55.871542 systemd-logind[1428]: Removed session 17. Sep 4 17:31:55.905219 sshd[4101]: Accepted publickey for core from 10.0.0.1 port 50614 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:31:55.906883 sshd[4101]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:31:55.911276 systemd-logind[1428]: New session 18 of user core. Sep 4 17:31:55.920897 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 4 17:31:56.170276 sshd[4101]: pam_unix(sshd:session): session closed for user core Sep 4 17:31:56.178919 systemd[1]: sshd@17-10.0.0.161:22-10.0.0.1:50614.service: Deactivated successfully. Sep 4 17:31:56.180993 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 17:31:56.182954 systemd-logind[1428]: Session 18 logged out. Waiting for processes to exit. Sep 4 17:31:56.195185 systemd[1]: Started sshd@18-10.0.0.161:22-10.0.0.1:50626.service - OpenSSH per-connection server daemon (10.0.0.1:50626). Sep 4 17:31:56.196232 systemd-logind[1428]: Removed session 18. Sep 4 17:31:56.229086 sshd[4113]: Accepted publickey for core from 10.0.0.1 port 50626 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:31:56.230801 sshd[4113]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:31:56.234952 systemd-logind[1428]: New session 19 of user core. Sep 4 17:31:56.242923 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 4 17:31:56.344762 sshd[4113]: pam_unix(sshd:session): session closed for user core Sep 4 17:31:56.349799 systemd[1]: sshd@18-10.0.0.161:22-10.0.0.1:50626.service: Deactivated successfully. Sep 4 17:31:56.352050 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 17:31:56.352762 systemd-logind[1428]: Session 19 logged out. Waiting for processes to exit. 
Sep 4 17:31:56.353688 systemd-logind[1428]: Removed session 19. Sep 4 17:32:01.355557 systemd[1]: Started sshd@19-10.0.0.161:22-10.0.0.1:57832.service - OpenSSH per-connection server daemon (10.0.0.1:57832). Sep 4 17:32:01.392944 sshd[4127]: Accepted publickey for core from 10.0.0.1 port 57832 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:32:01.394514 sshd[4127]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:32:01.398416 systemd-logind[1428]: New session 20 of user core. Sep 4 17:32:01.408945 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 4 17:32:01.515651 sshd[4127]: pam_unix(sshd:session): session closed for user core Sep 4 17:32:01.520614 systemd[1]: sshd@19-10.0.0.161:22-10.0.0.1:57832.service: Deactivated successfully. Sep 4 17:32:01.522805 systemd[1]: session-20.scope: Deactivated successfully. Sep 4 17:32:01.523533 systemd-logind[1428]: Session 20 logged out. Waiting for processes to exit. Sep 4 17:32:01.524498 systemd-logind[1428]: Removed session 20. Sep 4 17:32:06.527664 systemd[1]: Started sshd@20-10.0.0.161:22-10.0.0.1:40254.service - OpenSSH per-connection server daemon (10.0.0.1:40254). Sep 4 17:32:06.565716 sshd[4146]: Accepted publickey for core from 10.0.0.1 port 40254 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:32:06.567193 sshd[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:32:06.571174 systemd-logind[1428]: New session 21 of user core. Sep 4 17:32:06.586895 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 4 17:32:06.695339 sshd[4146]: pam_unix(sshd:session): session closed for user core Sep 4 17:32:06.699364 systemd[1]: sshd@20-10.0.0.161:22-10.0.0.1:40254.service: Deactivated successfully. Sep 4 17:32:06.701374 systemd[1]: session-21.scope: Deactivated successfully. Sep 4 17:32:06.702053 systemd-logind[1428]: Session 21 logged out. Waiting for processes to exit. 
Sep 4 17:32:06.702938 systemd-logind[1428]: Removed session 21. Sep 4 17:32:11.710894 systemd[1]: Started sshd@21-10.0.0.161:22-10.0.0.1:40260.service - OpenSSH per-connection server daemon (10.0.0.1:40260). Sep 4 17:32:11.748755 sshd[4160]: Accepted publickey for core from 10.0.0.1 port 40260 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:32:11.750122 sshd[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:32:11.753894 systemd-logind[1428]: New session 22 of user core. Sep 4 17:32:11.763899 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 4 17:32:11.867858 sshd[4160]: pam_unix(sshd:session): session closed for user core Sep 4 17:32:11.871978 systemd[1]: sshd@21-10.0.0.161:22-10.0.0.1:40260.service: Deactivated successfully. Sep 4 17:32:11.873938 systemd[1]: session-22.scope: Deactivated successfully. Sep 4 17:32:11.874487 systemd-logind[1428]: Session 22 logged out. Waiting for processes to exit. Sep 4 17:32:11.875269 systemd-logind[1428]: Removed session 22. Sep 4 17:32:13.638505 kubelet[2543]: E0904 17:32:13.638472 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:32:16.878902 systemd[1]: Started sshd@22-10.0.0.161:22-10.0.0.1:51754.service - OpenSSH per-connection server daemon (10.0.0.1:51754). Sep 4 17:32:16.916996 sshd[4174]: Accepted publickey for core from 10.0.0.1 port 51754 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:32:16.918563 sshd[4174]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:32:16.922738 systemd-logind[1428]: New session 23 of user core. Sep 4 17:32:16.933898 systemd[1]: Started session-23.scope - Session 23 of User core. 
Sep 4 17:32:17.038309 sshd[4174]: pam_unix(sshd:session): session closed for user core
Sep 4 17:32:17.050101 systemd[1]: sshd@22-10.0.0.161:22-10.0.0.1:51754.service: Deactivated successfully.
Sep 4 17:32:17.052358 systemd[1]: session-23.scope: Deactivated successfully.
Sep 4 17:32:17.054042 systemd-logind[1428]: Session 23 logged out. Waiting for processes to exit.
Sep 4 17:32:17.059079 systemd[1]: Started sshd@23-10.0.0.161:22-10.0.0.1:51760.service - OpenSSH per-connection server daemon (10.0.0.1:51760).
Sep 4 17:32:17.060063 systemd-logind[1428]: Removed session 23.
Sep 4 17:32:17.094346 sshd[4190]: Accepted publickey for core from 10.0.0.1 port 51760 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw
Sep 4 17:32:17.095760 sshd[4190]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:32:17.099739 systemd-logind[1428]: New session 24 of user core.
Sep 4 17:32:17.111975 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 4 17:32:18.424179 containerd[1440]: time="2024-09-04T17:32:18.424130572Z" level=info msg="StopContainer for \"5a01a74f4019d821c81b4e0fb9870bbb2f6e4d7cbb12cade3d5456f4eb6f4862\" with timeout 30 (s)"
Sep 4 17:32:18.431785 containerd[1440]: time="2024-09-04T17:32:18.431720497Z" level=info msg="Stop container \"5a01a74f4019d821c81b4e0fb9870bbb2f6e4d7cbb12cade3d5456f4eb6f4862\" with signal terminated"
Sep 4 17:32:18.446810 systemd[1]: cri-containerd-5a01a74f4019d821c81b4e0fb9870bbb2f6e4d7cbb12cade3d5456f4eb6f4862.scope: Deactivated successfully.
Sep 4 17:32:18.467216 containerd[1440]: time="2024-09-04T17:32:18.467168595Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 4 17:32:18.469620 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a01a74f4019d821c81b4e0fb9870bbb2f6e4d7cbb12cade3d5456f4eb6f4862-rootfs.mount: Deactivated successfully.
Sep 4 17:32:18.479996 containerd[1440]: time="2024-09-04T17:32:18.479930045Z" level=info msg="shim disconnected" id=5a01a74f4019d821c81b4e0fb9870bbb2f6e4d7cbb12cade3d5456f4eb6f4862 namespace=k8s.io
Sep 4 17:32:18.479996 containerd[1440]: time="2024-09-04T17:32:18.479990441Z" level=warning msg="cleaning up after shim disconnected" id=5a01a74f4019d821c81b4e0fb9870bbb2f6e4d7cbb12cade3d5456f4eb6f4862 namespace=k8s.io
Sep 4 17:32:18.480119 containerd[1440]: time="2024-09-04T17:32:18.479999759Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:32:18.500079 containerd[1440]: time="2024-09-04T17:32:18.500042125Z" level=info msg="StopContainer for \"dce660f267d4d3cd7080ef23a1a91c0a81dd2e602eeedc320b1d34c5c8e1d080\" with timeout 2 (s)"
Sep 4 17:32:18.500525 containerd[1440]: time="2024-09-04T17:32:18.500480632Z" level=info msg="Stop container \"dce660f267d4d3cd7080ef23a1a91c0a81dd2e602eeedc320b1d34c5c8e1d080\" with signal terminated"
Sep 4 17:32:18.508816 systemd-networkd[1374]: lxc_health: Link DOWN
Sep 4 17:32:18.508825 systemd-networkd[1374]: lxc_health: Lost carrier
Sep 4 17:32:18.520916 containerd[1440]: time="2024-09-04T17:32:18.520865363Z" level=info msg="StopContainer for \"5a01a74f4019d821c81b4e0fb9870bbb2f6e4d7cbb12cade3d5456f4eb6f4862\" returns successfully"
Sep 4 17:32:18.525329 containerd[1440]: time="2024-09-04T17:32:18.525296585Z" level=info msg="StopPodSandbox for \"f29d9b928f2cb10881a66ea952fc35d8c62030b3df910d37a0a9285d2c8dd47c\""
Sep 4 17:32:18.525433 containerd[1440]: time="2024-09-04T17:32:18.525344567Z" level=info msg="Container to stop \"5a01a74f4019d821c81b4e0fb9870bbb2f6e4d7cbb12cade3d5456f4eb6f4862\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 17:32:18.527466 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f29d9b928f2cb10881a66ea952fc35d8c62030b3df910d37a0a9285d2c8dd47c-shm.mount: Deactivated successfully.
Sep 4 17:32:18.534726 systemd[1]: cri-containerd-f29d9b928f2cb10881a66ea952fc35d8c62030b3df910d37a0a9285d2c8dd47c.scope: Deactivated successfully.
Sep 4 17:32:18.535961 systemd[1]: cri-containerd-dce660f267d4d3cd7080ef23a1a91c0a81dd2e602eeedc320b1d34c5c8e1d080.scope: Deactivated successfully.
Sep 4 17:32:18.536214 systemd[1]: cri-containerd-dce660f267d4d3cd7080ef23a1a91c0a81dd2e602eeedc320b1d34c5c8e1d080.scope: Consumed 6.738s CPU time.
Sep 4 17:32:18.558454 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dce660f267d4d3cd7080ef23a1a91c0a81dd2e602eeedc320b1d34c5c8e1d080-rootfs.mount: Deactivated successfully.
Sep 4 17:32:18.558592 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f29d9b928f2cb10881a66ea952fc35d8c62030b3df910d37a0a9285d2c8dd47c-rootfs.mount: Deactivated successfully.
Sep 4 17:32:18.562733 containerd[1440]: time="2024-09-04T17:32:18.562627929Z" level=info msg="shim disconnected" id=f29d9b928f2cb10881a66ea952fc35d8c62030b3df910d37a0a9285d2c8dd47c namespace=k8s.io
Sep 4 17:32:18.562862 containerd[1440]: time="2024-09-04T17:32:18.562694877Z" level=warning msg="cleaning up after shim disconnected" id=f29d9b928f2cb10881a66ea952fc35d8c62030b3df910d37a0a9285d2c8dd47c namespace=k8s.io
Sep 4 17:32:18.562862 containerd[1440]: time="2024-09-04T17:32:18.562756825Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:32:18.564473 containerd[1440]: time="2024-09-04T17:32:18.562756575Z" level=info msg="shim disconnected" id=dce660f267d4d3cd7080ef23a1a91c0a81dd2e602eeedc320b1d34c5c8e1d080 namespace=k8s.io
Sep 4 17:32:18.564473 containerd[1440]: time="2024-09-04T17:32:18.564464015Z" level=warning msg="cleaning up after shim disconnected" id=dce660f267d4d3cd7080ef23a1a91c0a81dd2e602eeedc320b1d34c5c8e1d080 namespace=k8s.io
Sep 4 17:32:18.564473 containerd[1440]: time="2024-09-04T17:32:18.564475077Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:32:18.580355 containerd[1440]: time="2024-09-04T17:32:18.580312703Z" level=info msg="TearDown network for sandbox \"f29d9b928f2cb10881a66ea952fc35d8c62030b3df910d37a0a9285d2c8dd47c\" successfully"
Sep 4 17:32:18.580355 containerd[1440]: time="2024-09-04T17:32:18.580339003Z" level=info msg="StopPodSandbox for \"f29d9b928f2cb10881a66ea952fc35d8c62030b3df910d37a0a9285d2c8dd47c\" returns successfully"
Sep 4 17:32:18.583136 containerd[1440]: time="2024-09-04T17:32:18.583105045Z" level=info msg="StopContainer for \"dce660f267d4d3cd7080ef23a1a91c0a81dd2e602eeedc320b1d34c5c8e1d080\" returns successfully"
Sep 4 17:32:18.583490 containerd[1440]: time="2024-09-04T17:32:18.583465404Z" level=info msg="StopPodSandbox for \"e326966df8180487f4ef3ebe8fcfa42b804295c95d3380fee46babbc100d83be\""
Sep 4 17:32:18.583559 containerd[1440]: time="2024-09-04T17:32:18.583498226Z" level=info msg="Container to stop \"000cc5fecb8d3711c109384a910d297420b314c83412e76235eb097f97e4ee6e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 17:32:18.583559 containerd[1440]: time="2024-09-04T17:32:18.583530598Z" level=info msg="Container to stop \"5bd365d183c4d1a84dc7c5e819b4c81a3454227ea477198eadab88238e655485\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 17:32:18.583559 containerd[1440]: time="2024-09-04T17:32:18.583542481Z" level=info msg="Container to stop \"dce660f267d4d3cd7080ef23a1a91c0a81dd2e602eeedc320b1d34c5c8e1d080\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 17:32:18.583559 containerd[1440]: time="2024-09-04T17:32:18.583553141Z" level=info msg="Container to stop \"6df38c288ec7fb6cd72e0ff49d8598de10454fee0c9f139644655f7a92d71665\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 17:32:18.583671 containerd[1440]: time="2024-09-04T17:32:18.583563651Z" level=info msg="Container to stop \"50ff903987e9f19ad63fe6e9e97accd1af9ad49fc5b22906715d6d7b831e61da\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 17:32:18.591345 systemd[1]: cri-containerd-e326966df8180487f4ef3ebe8fcfa42b804295c95d3380fee46babbc100d83be.scope: Deactivated successfully.
Sep 4 17:32:18.617323 containerd[1440]: time="2024-09-04T17:32:18.617261527Z" level=info msg="shim disconnected" id=e326966df8180487f4ef3ebe8fcfa42b804295c95d3380fee46babbc100d83be namespace=k8s.io
Sep 4 17:32:18.617591 containerd[1440]: time="2024-09-04T17:32:18.617563694Z" level=warning msg="cleaning up after shim disconnected" id=e326966df8180487f4ef3ebe8fcfa42b804295c95d3380fee46babbc100d83be namespace=k8s.io
Sep 4 17:32:18.617591 containerd[1440]: time="2024-09-04T17:32:18.617579724Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:32:18.632143 containerd[1440]: time="2024-09-04T17:32:18.632100415Z" level=info msg="TearDown network for sandbox \"e326966df8180487f4ef3ebe8fcfa42b804295c95d3380fee46babbc100d83be\" successfully"
Sep 4 17:32:18.632143 containerd[1440]: time="2024-09-04T17:32:18.632134340Z" level=info msg="StopPodSandbox for \"e326966df8180487f4ef3ebe8fcfa42b804295c95d3380fee46babbc100d83be\" returns successfully"
Sep 4 17:32:18.749896 kubelet[2543]: I0904 17:32:18.749708 2543 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cd1d19fc-16e8-4cb6-86b8-8997986e1264-xtables-lock\") pod \"cd1d19fc-16e8-4cb6-86b8-8997986e1264\" (UID: \"cd1d19fc-16e8-4cb6-86b8-8997986e1264\") "
Sep 4 17:32:18.749896 kubelet[2543]: I0904 17:32:18.749761 2543 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cd1d19fc-16e8-4cb6-86b8-8997986e1264-host-proc-sys-net\") pod \"cd1d19fc-16e8-4cb6-86b8-8997986e1264\" (UID: \"cd1d19fc-16e8-4cb6-86b8-8997986e1264\") "
Sep 4 17:32:18.749896 kubelet[2543]: I0904 17:32:18.749816 2543 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cd1d19fc-16e8-4cb6-86b8-8997986e1264-hubble-tls\") pod \"cd1d19fc-16e8-4cb6-86b8-8997986e1264\" (UID: \"cd1d19fc-16e8-4cb6-86b8-8997986e1264\") "
Sep 4 17:32:18.749896 kubelet[2543]: I0904 17:32:18.749842 2543 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rwxg\" (UniqueName: \"kubernetes.io/projected/020c5a8a-fe81-4412-b8af-e84894a8c192-kube-api-access-7rwxg\") pod \"020c5a8a-fe81-4412-b8af-e84894a8c192\" (UID: \"020c5a8a-fe81-4412-b8af-e84894a8c192\") "
Sep 4 17:32:18.749896 kubelet[2543]: I0904 17:32:18.749865 2543 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cd1d19fc-16e8-4cb6-86b8-8997986e1264-host-proc-sys-kernel\") pod \"cd1d19fc-16e8-4cb6-86b8-8997986e1264\" (UID: \"cd1d19fc-16e8-4cb6-86b8-8997986e1264\") "
Sep 4 17:32:18.749896 kubelet[2543]: I0904 17:32:18.749885 2543 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cd1d19fc-16e8-4cb6-86b8-8997986e1264-lib-modules\") pod \"cd1d19fc-16e8-4cb6-86b8-8997986e1264\" (UID: \"cd1d19fc-16e8-4cb6-86b8-8997986e1264\") "
Sep 4 17:32:18.750633 kubelet[2543]: I0904 17:32:18.749912 2543 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/020c5a8a-fe81-4412-b8af-e84894a8c192-cilium-config-path\") pod \"020c5a8a-fe81-4412-b8af-e84894a8c192\" (UID: \"020c5a8a-fe81-4412-b8af-e84894a8c192\") "
Sep 4 17:32:18.750633 kubelet[2543]: I0904 17:32:18.749894 2543 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd1d19fc-16e8-4cb6-86b8-8997986e1264-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "cd1d19fc-16e8-4cb6-86b8-8997986e1264" (UID: "cd1d19fc-16e8-4cb6-86b8-8997986e1264"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 4 17:32:18.750633 kubelet[2543]: I0904 17:32:18.749960 2543 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd1d19fc-16e8-4cb6-86b8-8997986e1264-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "cd1d19fc-16e8-4cb6-86b8-8997986e1264" (UID: "cd1d19fc-16e8-4cb6-86b8-8997986e1264"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 4 17:32:18.750633 kubelet[2543]: I0904 17:32:18.749936 2543 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cd1d19fc-16e8-4cb6-86b8-8997986e1264-cilium-run\") pod \"cd1d19fc-16e8-4cb6-86b8-8997986e1264\" (UID: \"cd1d19fc-16e8-4cb6-86b8-8997986e1264\") "
Sep 4 17:32:18.750633 kubelet[2543]: I0904 17:32:18.750015 2543 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cd1d19fc-16e8-4cb6-86b8-8997986e1264-hostproc\") pod \"cd1d19fc-16e8-4cb6-86b8-8997986e1264\" (UID: \"cd1d19fc-16e8-4cb6-86b8-8997986e1264\") "
Sep 4 17:32:18.750633 kubelet[2543]: I0904 17:32:18.750038 2543 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cd1d19fc-16e8-4cb6-86b8-8997986e1264-cilium-cgroup\") pod \"cd1d19fc-16e8-4cb6-86b8-8997986e1264\" (UID: \"cd1d19fc-16e8-4cb6-86b8-8997986e1264\") "
Sep 4 17:32:18.750879 kubelet[2543]: I0904 17:32:18.750066 2543 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cd1d19fc-16e8-4cb6-86b8-8997986e1264-cilium-config-path\") pod \"cd1d19fc-16e8-4cb6-86b8-8997986e1264\" (UID: \"cd1d19fc-16e8-4cb6-86b8-8997986e1264\") "
Sep 4 17:32:18.750879 kubelet[2543]: I0904 17:32:18.750085 2543 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cd1d19fc-16e8-4cb6-86b8-8997986e1264-bpf-maps\") pod \"cd1d19fc-16e8-4cb6-86b8-8997986e1264\" (UID: \"cd1d19fc-16e8-4cb6-86b8-8997986e1264\") "
Sep 4 17:32:18.750879 kubelet[2543]: I0904 17:32:18.750102 2543 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd1d19fc-16e8-4cb6-86b8-8997986e1264-etc-cni-netd\") pod \"cd1d19fc-16e8-4cb6-86b8-8997986e1264\" (UID: \"cd1d19fc-16e8-4cb6-86b8-8997986e1264\") "
Sep 4 17:32:18.750879 kubelet[2543]: I0904 17:32:18.750121 2543 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cd1d19fc-16e8-4cb6-86b8-8997986e1264-cni-path\") pod \"cd1d19fc-16e8-4cb6-86b8-8997986e1264\" (UID: \"cd1d19fc-16e8-4cb6-86b8-8997986e1264\") "
Sep 4 17:32:18.750879 kubelet[2543]: I0904 17:32:18.750145 2543 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxcmx\" (UniqueName: \"kubernetes.io/projected/cd1d19fc-16e8-4cb6-86b8-8997986e1264-kube-api-access-qxcmx\") pod \"cd1d19fc-16e8-4cb6-86b8-8997986e1264\" (UID: \"cd1d19fc-16e8-4cb6-86b8-8997986e1264\") "
Sep 4 17:32:18.750879 kubelet[2543]: I0904 17:32:18.750165 2543 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cd1d19fc-16e8-4cb6-86b8-8997986e1264-clustermesh-secrets\") pod \"cd1d19fc-16e8-4cb6-86b8-8997986e1264\" (UID: \"cd1d19fc-16e8-4cb6-86b8-8997986e1264\") "
Sep 4 17:32:18.751175 kubelet[2543]: I0904 17:32:18.750215 2543 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cd1d19fc-16e8-4cb6-86b8-8997986e1264-xtables-lock\") on node \"localhost\" DevicePath \"\""
Sep 4 17:32:18.751175 kubelet[2543]: I0904 17:32:18.750227 2543 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cd1d19fc-16e8-4cb6-86b8-8997986e1264-cilium-run\") on node \"localhost\" DevicePath \"\""
Sep 4 17:32:18.751175 kubelet[2543]: I0904 17:32:18.750600 2543 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd1d19fc-16e8-4cb6-86b8-8997986e1264-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "cd1d19fc-16e8-4cb6-86b8-8997986e1264" (UID: "cd1d19fc-16e8-4cb6-86b8-8997986e1264"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 4 17:32:18.751175 kubelet[2543]: I0904 17:32:18.750630 2543 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd1d19fc-16e8-4cb6-86b8-8997986e1264-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "cd1d19fc-16e8-4cb6-86b8-8997986e1264" (UID: "cd1d19fc-16e8-4cb6-86b8-8997986e1264"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 4 17:32:18.751175 kubelet[2543]: I0904 17:32:18.750650 2543 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd1d19fc-16e8-4cb6-86b8-8997986e1264-hostproc" (OuterVolumeSpecName: "hostproc") pod "cd1d19fc-16e8-4cb6-86b8-8997986e1264" (UID: "cd1d19fc-16e8-4cb6-86b8-8997986e1264"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 4 17:32:18.751175 kubelet[2543]: I0904 17:32:18.750665 2543 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd1d19fc-16e8-4cb6-86b8-8997986e1264-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "cd1d19fc-16e8-4cb6-86b8-8997986e1264" (UID: "cd1d19fc-16e8-4cb6-86b8-8997986e1264"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 4 17:32:18.754174 kubelet[2543]: I0904 17:32:18.754129 2543 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd1d19fc-16e8-4cb6-86b8-8997986e1264-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cd1d19fc-16e8-4cb6-86b8-8997986e1264" (UID: "cd1d19fc-16e8-4cb6-86b8-8997986e1264"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 4 17:32:18.754261 kubelet[2543]: I0904 17:32:18.754180 2543 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd1d19fc-16e8-4cb6-86b8-8997986e1264-cni-path" (OuterVolumeSpecName: "cni-path") pod "cd1d19fc-16e8-4cb6-86b8-8997986e1264" (UID: "cd1d19fc-16e8-4cb6-86b8-8997986e1264"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 4 17:32:18.754261 kubelet[2543]: I0904 17:32:18.754198 2543 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd1d19fc-16e8-4cb6-86b8-8997986e1264-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "cd1d19fc-16e8-4cb6-86b8-8997986e1264" (UID: "cd1d19fc-16e8-4cb6-86b8-8997986e1264"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 4 17:32:18.755167 kubelet[2543]: I0904 17:32:18.754490 2543 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd1d19fc-16e8-4cb6-86b8-8997986e1264-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "cd1d19fc-16e8-4cb6-86b8-8997986e1264" (UID: "cd1d19fc-16e8-4cb6-86b8-8997986e1264"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 4 17:32:18.755167 kubelet[2543]: I0904 17:32:18.754555 2543 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd1d19fc-16e8-4cb6-86b8-8997986e1264-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "cd1d19fc-16e8-4cb6-86b8-8997986e1264" (UID: "cd1d19fc-16e8-4cb6-86b8-8997986e1264"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 4 17:32:18.755167 kubelet[2543]: I0904 17:32:18.754574 2543 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd1d19fc-16e8-4cb6-86b8-8997986e1264-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "cd1d19fc-16e8-4cb6-86b8-8997986e1264" (UID: "cd1d19fc-16e8-4cb6-86b8-8997986e1264"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 4 17:32:18.755167 kubelet[2543]: I0904 17:32:18.755123 2543 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd1d19fc-16e8-4cb6-86b8-8997986e1264-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "cd1d19fc-16e8-4cb6-86b8-8997986e1264" (UID: "cd1d19fc-16e8-4cb6-86b8-8997986e1264"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 4 17:32:18.755706 kubelet[2543]: I0904 17:32:18.755652 2543 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/020c5a8a-fe81-4412-b8af-e84894a8c192-kube-api-access-7rwxg" (OuterVolumeSpecName: "kube-api-access-7rwxg") pod "020c5a8a-fe81-4412-b8af-e84894a8c192" (UID: "020c5a8a-fe81-4412-b8af-e84894a8c192"). InnerVolumeSpecName "kube-api-access-7rwxg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 4 17:32:18.757697 kubelet[2543]: I0904 17:32:18.757661 2543 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd1d19fc-16e8-4cb6-86b8-8997986e1264-kube-api-access-qxcmx" (OuterVolumeSpecName: "kube-api-access-qxcmx") pod "cd1d19fc-16e8-4cb6-86b8-8997986e1264" (UID: "cd1d19fc-16e8-4cb6-86b8-8997986e1264"). InnerVolumeSpecName "kube-api-access-qxcmx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 4 17:32:18.758027 kubelet[2543]: I0904 17:32:18.758000 2543 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/020c5a8a-fe81-4412-b8af-e84894a8c192-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "020c5a8a-fe81-4412-b8af-e84894a8c192" (UID: "020c5a8a-fe81-4412-b8af-e84894a8c192"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 4 17:32:18.812568 kubelet[2543]: I0904 17:32:18.812532 2543 scope.go:117] "RemoveContainer" containerID="dce660f267d4d3cd7080ef23a1a91c0a81dd2e602eeedc320b1d34c5c8e1d080"
Sep 4 17:32:18.813815 containerd[1440]: time="2024-09-04T17:32:18.813749923Z" level=info msg="RemoveContainer for \"dce660f267d4d3cd7080ef23a1a91c0a81dd2e602eeedc320b1d34c5c8e1d080\""
Sep 4 17:32:18.820631 systemd[1]: Removed slice kubepods-burstable-podcd1d19fc_16e8_4cb6_86b8_8997986e1264.slice - libcontainer container kubepods-burstable-podcd1d19fc_16e8_4cb6_86b8_8997986e1264.slice.
Sep 4 17:32:18.820734 systemd[1]: kubepods-burstable-podcd1d19fc_16e8_4cb6_86b8_8997986e1264.slice: Consumed 6.845s CPU time.
Sep 4 17:32:18.821984 systemd[1]: Removed slice kubepods-besteffort-pod020c5a8a_fe81_4412_b8af_e84894a8c192.slice - libcontainer container kubepods-besteffort-pod020c5a8a_fe81_4412_b8af_e84894a8c192.slice.
Sep 4 17:32:18.825597 containerd[1440]: time="2024-09-04T17:32:18.825554427Z" level=info msg="RemoveContainer for \"dce660f267d4d3cd7080ef23a1a91c0a81dd2e602eeedc320b1d34c5c8e1d080\" returns successfully"
Sep 4 17:32:18.825867 kubelet[2543]: I0904 17:32:18.825829 2543 scope.go:117] "RemoveContainer" containerID="5bd365d183c4d1a84dc7c5e819b4c81a3454227ea477198eadab88238e655485"
Sep 4 17:32:18.826922 containerd[1440]: time="2024-09-04T17:32:18.826872333Z" level=info msg="RemoveContainer for \"5bd365d183c4d1a84dc7c5e819b4c81a3454227ea477198eadab88238e655485\""
Sep 4 17:32:18.831174 containerd[1440]: time="2024-09-04T17:32:18.830990778Z" level=info msg="RemoveContainer for \"5bd365d183c4d1a84dc7c5e819b4c81a3454227ea477198eadab88238e655485\" returns successfully"
Sep 4 17:32:18.831676 kubelet[2543]: I0904 17:32:18.831178 2543 scope.go:117] "RemoveContainer" containerID="50ff903987e9f19ad63fe6e9e97accd1af9ad49fc5b22906715d6d7b831e61da"
Sep 4 17:32:18.832617 containerd[1440]: time="2024-09-04T17:32:18.832548023Z" level=info msg="RemoveContainer for \"50ff903987e9f19ad63fe6e9e97accd1af9ad49fc5b22906715d6d7b831e61da\""
Sep 4 17:32:18.836202 containerd[1440]: time="2024-09-04T17:32:18.836173988Z" level=info msg="RemoveContainer for \"50ff903987e9f19ad63fe6e9e97accd1af9ad49fc5b22906715d6d7b831e61da\" returns successfully"
Sep 4 17:32:18.836348 kubelet[2543]: I0904 17:32:18.836331 2543 scope.go:117] "RemoveContainer" containerID="6df38c288ec7fb6cd72e0ff49d8598de10454fee0c9f139644655f7a92d71665"
Sep 4 17:32:18.837173 containerd[1440]: time="2024-09-04T17:32:18.837147206Z" level=info msg="RemoveContainer for \"6df38c288ec7fb6cd72e0ff49d8598de10454fee0c9f139644655f7a92d71665\""
Sep 4 17:32:18.842595 containerd[1440]: time="2024-09-04T17:32:18.842508335Z" level=info msg="RemoveContainer for \"6df38c288ec7fb6cd72e0ff49d8598de10454fee0c9f139644655f7a92d71665\" returns successfully"
Sep 4 17:32:18.842856 kubelet[2543]: I0904 17:32:18.842828 2543 scope.go:117] "RemoveContainer" containerID="000cc5fecb8d3711c109384a910d297420b314c83412e76235eb097f97e4ee6e"
Sep 4 17:32:18.843853 containerd[1440]: time="2024-09-04T17:32:18.843818667Z" level=info msg="RemoveContainer for \"000cc5fecb8d3711c109384a910d297420b314c83412e76235eb097f97e4ee6e\""
Sep 4 17:32:18.847104 containerd[1440]: time="2024-09-04T17:32:18.847075056Z" level=info msg="RemoveContainer for \"000cc5fecb8d3711c109384a910d297420b314c83412e76235eb097f97e4ee6e\" returns successfully"
Sep 4 17:32:18.847234 kubelet[2543]: I0904 17:32:18.847208 2543 scope.go:117] "RemoveContainer" containerID="dce660f267d4d3cd7080ef23a1a91c0a81dd2e602eeedc320b1d34c5c8e1d080"
Sep 4 17:32:18.847431 containerd[1440]: time="2024-09-04T17:32:18.847390929Z" level=error msg="ContainerStatus for \"dce660f267d4d3cd7080ef23a1a91c0a81dd2e602eeedc320b1d34c5c8e1d080\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dce660f267d4d3cd7080ef23a1a91c0a81dd2e602eeedc320b1d34c5c8e1d080\": not found"
Sep 4 17:32:18.850635 kubelet[2543]: I0904 17:32:18.850618 2543 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cd1d19fc-16e8-4cb6-86b8-8997986e1264-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Sep 4 17:32:18.850698 kubelet[2543]: I0904 17:32:18.850639 2543 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cd1d19fc-16e8-4cb6-86b8-8997986e1264-hubble-tls\") on node \"localhost\" DevicePath \"\""
Sep 4 17:32:18.850698 kubelet[2543]: I0904 17:32:18.850649 2543 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-7rwxg\" (UniqueName: \"kubernetes.io/projected/020c5a8a-fe81-4412-b8af-e84894a8c192-kube-api-access-7rwxg\") on node \"localhost\" DevicePath \"\""
Sep 4 17:32:18.850698 kubelet[2543]: I0904 17:32:18.850659 2543 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cd1d19fc-16e8-4cb6-86b8-8997986e1264-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Sep 4 17:32:18.850698 kubelet[2543]: I0904 17:32:18.850668 2543 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/020c5a8a-fe81-4412-b8af-e84894a8c192-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 4 17:32:18.850698 kubelet[2543]: I0904 17:32:18.850678 2543 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cd1d19fc-16e8-4cb6-86b8-8997986e1264-hostproc\") on node \"localhost\" DevicePath \"\""
Sep 4 17:32:18.850698 kubelet[2543]: I0904 17:32:18.850688 2543 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cd1d19fc-16e8-4cb6-86b8-8997986e1264-lib-modules\") on node \"localhost\" DevicePath \"\""
Sep 4 17:32:18.850698 kubelet[2543]: I0904 17:32:18.850697 2543 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cd1d19fc-16e8-4cb6-86b8-8997986e1264-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Sep 4 17:32:18.850867 kubelet[2543]: I0904 17:32:18.850705 2543 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cd1d19fc-16e8-4cb6-86b8-8997986e1264-bpf-maps\") on node \"localhost\" DevicePath \"\""
Sep 4 17:32:18.850867 kubelet[2543]: I0904 17:32:18.850714 2543 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd1d19fc-16e8-4cb6-86b8-8997986e1264-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Sep 4 17:32:18.850867 kubelet[2543]: I0904 17:32:18.850723 2543 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cd1d19fc-16e8-4cb6-86b8-8997986e1264-cni-path\") on node \"localhost\" DevicePath \"\""
Sep 4 17:32:18.850867 kubelet[2543]: I0904 17:32:18.850732 2543 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cd1d19fc-16e8-4cb6-86b8-8997986e1264-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 4 17:32:18.850867 kubelet[2543]: I0904 17:32:18.850741 2543 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-qxcmx\" (UniqueName: \"kubernetes.io/projected/cd1d19fc-16e8-4cb6-86b8-8997986e1264-kube-api-access-qxcmx\") on node \"localhost\" DevicePath \"\""
Sep 4 17:32:18.850867 kubelet[2543]: I0904 17:32:18.850750 2543 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cd1d19fc-16e8-4cb6-86b8-8997986e1264-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Sep 4 17:32:18.853442 kubelet[2543]: E0904 17:32:18.853421 2543 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dce660f267d4d3cd7080ef23a1a91c0a81dd2e602eeedc320b1d34c5c8e1d080\": not found" containerID="dce660f267d4d3cd7080ef23a1a91c0a81dd2e602eeedc320b1d34c5c8e1d080"
Sep 4 17:32:18.853525 kubelet[2543]: I0904 17:32:18.853511 2543 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dce660f267d4d3cd7080ef23a1a91c0a81dd2e602eeedc320b1d34c5c8e1d080"} err="failed to get container status \"dce660f267d4d3cd7080ef23a1a91c0a81dd2e602eeedc320b1d34c5c8e1d080\": rpc error: code = NotFound desc = an error occurred when try to find container \"dce660f267d4d3cd7080ef23a1a91c0a81dd2e602eeedc320b1d34c5c8e1d080\": not found"
Sep 4 17:32:18.853525 kubelet[2543]: I0904 17:32:18.853525 2543 scope.go:117] "RemoveContainer" containerID="5bd365d183c4d1a84dc7c5e819b4c81a3454227ea477198eadab88238e655485"
Sep 4 17:32:18.853742 containerd[1440]: time="2024-09-04T17:32:18.853711089Z" level=error msg="ContainerStatus for \"5bd365d183c4d1a84dc7c5e819b4c81a3454227ea477198eadab88238e655485\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5bd365d183c4d1a84dc7c5e819b4c81a3454227ea477198eadab88238e655485\": not found"
Sep 4 17:32:18.853844 kubelet[2543]: E0904 17:32:18.853824 2543 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5bd365d183c4d1a84dc7c5e819b4c81a3454227ea477198eadab88238e655485\": not found" containerID="5bd365d183c4d1a84dc7c5e819b4c81a3454227ea477198eadab88238e655485"
Sep 4 17:32:18.853879 kubelet[2543]: I0904 17:32:18.853854 2543 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5bd365d183c4d1a84dc7c5e819b4c81a3454227ea477198eadab88238e655485"} err="failed to get container status \"5bd365d183c4d1a84dc7c5e819b4c81a3454227ea477198eadab88238e655485\": rpc error: code = NotFound desc = an error occurred when try to find container \"5bd365d183c4d1a84dc7c5e819b4c81a3454227ea477198eadab88238e655485\": not found"
Sep 4 17:32:18.853879 kubelet[2543]: I0904 17:32:18.853863 2543 scope.go:117] "RemoveContainer" containerID="50ff903987e9f19ad63fe6e9e97accd1af9ad49fc5b22906715d6d7b831e61da"
Sep 4 17:32:18.854001 containerd[1440]: time="2024-09-04T17:32:18.853973040Z" level=error msg="ContainerStatus for \"50ff903987e9f19ad63fe6e9e97accd1af9ad49fc5b22906715d6d7b831e61da\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"50ff903987e9f19ad63fe6e9e97accd1af9ad49fc5b22906715d6d7b831e61da\": not found"
Sep 4 17:32:18.854084 kubelet[2543]: E0904 17:32:18.854070 2543 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"50ff903987e9f19ad63fe6e9e97accd1af9ad49fc5b22906715d6d7b831e61da\": not found" containerID="50ff903987e9f19ad63fe6e9e97accd1af9ad49fc5b22906715d6d7b831e61da"
Sep 4 17:32:18.854128 kubelet[2543]: I0904 17:32:18.854093 2543 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"50ff903987e9f19ad63fe6e9e97accd1af9ad49fc5b22906715d6d7b831e61da"} err="failed to get container status \"50ff903987e9f19ad63fe6e9e97accd1af9ad49fc5b22906715d6d7b831e61da\": rpc error: code = NotFound desc = an error occurred when try to find container \"50ff903987e9f19ad63fe6e9e97accd1af9ad49fc5b22906715d6d7b831e61da\": not found"
Sep 4 17:32:18.854128 kubelet[2543]: I0904 17:32:18.854103 2543 scope.go:117] "RemoveContainer" containerID="6df38c288ec7fb6cd72e0ff49d8598de10454fee0c9f139644655f7a92d71665"
Sep 4 17:32:18.854277 containerd[1440]: time="2024-09-04T17:32:18.854234959Z" level=error msg="ContainerStatus for \"6df38c288ec7fb6cd72e0ff49d8598de10454fee0c9f139644655f7a92d71665\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6df38c288ec7fb6cd72e0ff49d8598de10454fee0c9f139644655f7a92d71665\": not found"
Sep 4 17:32:18.854366 kubelet[2543]: E0904 17:32:18.854349 2543 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6df38c288ec7fb6cd72e0ff49d8598de10454fee0c9f139644655f7a92d71665\": not found" containerID="6df38c288ec7fb6cd72e0ff49d8598de10454fee0c9f139644655f7a92d71665"
Sep 4 17:32:18.854421 kubelet[2543]: I0904 17:32:18.854372 2543 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6df38c288ec7fb6cd72e0ff49d8598de10454fee0c9f139644655f7a92d71665"} err="failed to get container status \"6df38c288ec7fb6cd72e0ff49d8598de10454fee0c9f139644655f7a92d71665\": rpc error: code = NotFound desc = an error occurred when try to find container \"6df38c288ec7fb6cd72e0ff49d8598de10454fee0c9f139644655f7a92d71665\": not found"
Sep 4 17:32:18.854421 kubelet[2543]: I0904 17:32:18.854380 2543 scope.go:117] "RemoveContainer" containerID="000cc5fecb8d3711c109384a910d297420b314c83412e76235eb097f97e4ee6e"
Sep 4 17:32:18.854539 containerd[1440]: time="2024-09-04T17:32:18.854512309Z" level=error msg="ContainerStatus for \"000cc5fecb8d3711c109384a910d297420b314c83412e76235eb097f97e4ee6e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"000cc5fecb8d3711c109384a910d297420b314c83412e76235eb097f97e4ee6e\": not found"
Sep 4 17:32:18.854650 kubelet[2543]: E0904 17:32:18.854611 2543 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"000cc5fecb8d3711c109384a910d297420b314c83412e76235eb097f97e4ee6e\": not found" containerID="000cc5fecb8d3711c109384a910d297420b314c83412e76235eb097f97e4ee6e"
Sep 4 17:32:18.854650 kubelet[2543]: I0904 17:32:18.854635 2543 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"000cc5fecb8d3711c109384a910d297420b314c83412e76235eb097f97e4ee6e"} err="failed to get container status \"000cc5fecb8d3711c109384a910d297420b314c83412e76235eb097f97e4ee6e\": rpc error: code = NotFound desc = an error occurred when try to find container \"000cc5fecb8d3711c109384a910d297420b314c83412e76235eb097f97e4ee6e\": not found"
Sep 4 17:32:18.854650 kubelet[2543]: I0904 17:32:18.854644 2543 scope.go:117] "RemoveContainer" containerID="5a01a74f4019d821c81b4e0fb9870bbb2f6e4d7cbb12cade3d5456f4eb6f4862"
Sep 4 17:32:18.855623 containerd[1440]: time="2024-09-04T17:32:18.855593273Z" level=info msg="RemoveContainer for \"5a01a74f4019d821c81b4e0fb9870bbb2f6e4d7cbb12cade3d5456f4eb6f4862\""
Sep 4 17:32:18.858791 containerd[1440]: time="2024-09-04T17:32:18.858757417Z" level=info msg="RemoveContainer for \"5a01a74f4019d821c81b4e0fb9870bbb2f6e4d7cbb12cade3d5456f4eb6f4862\" returns successfully"
Sep 4 17:32:18.858911
kubelet[2543]: I0904 17:32:18.858882 2543 scope.go:117] "RemoveContainer" containerID="5a01a74f4019d821c81b4e0fb9870bbb2f6e4d7cbb12cade3d5456f4eb6f4862" Sep 4 17:32:18.859061 containerd[1440]: time="2024-09-04T17:32:18.859013846Z" level=error msg="ContainerStatus for \"5a01a74f4019d821c81b4e0fb9870bbb2f6e4d7cbb12cade3d5456f4eb6f4862\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5a01a74f4019d821c81b4e0fb9870bbb2f6e4d7cbb12cade3d5456f4eb6f4862\": not found" Sep 4 17:32:18.859124 kubelet[2543]: E0904 17:32:18.859108 2543 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5a01a74f4019d821c81b4e0fb9870bbb2f6e4d7cbb12cade3d5456f4eb6f4862\": not found" containerID="5a01a74f4019d821c81b4e0fb9870bbb2f6e4d7cbb12cade3d5456f4eb6f4862" Sep 4 17:32:18.859158 kubelet[2543]: I0904 17:32:18.859130 2543 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5a01a74f4019d821c81b4e0fb9870bbb2f6e4d7cbb12cade3d5456f4eb6f4862"} err="failed to get container status \"5a01a74f4019d821c81b4e0fb9870bbb2f6e4d7cbb12cade3d5456f4eb6f4862\": rpc error: code = NotFound desc = an error occurred when try to find container \"5a01a74f4019d821c81b4e0fb9870bbb2f6e4d7cbb12cade3d5456f4eb6f4862\": not found" Sep 4 17:32:19.433992 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e326966df8180487f4ef3ebe8fcfa42b804295c95d3380fee46babbc100d83be-rootfs.mount: Deactivated successfully. Sep 4 17:32:19.434120 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e326966df8180487f4ef3ebe8fcfa42b804295c95d3380fee46babbc100d83be-shm.mount: Deactivated successfully. Sep 4 17:32:19.434207 systemd[1]: var-lib-kubelet-pods-020c5a8a\x2dfe81\x2d4412\x2db8af\x2de84894a8c192-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7rwxg.mount: Deactivated successfully. 
Sep 4 17:32:19.434291 systemd[1]: var-lib-kubelet-pods-cd1d19fc\x2d16e8\x2d4cb6\x2d86b8\x2d8997986e1264-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqxcmx.mount: Deactivated successfully. Sep 4 17:32:19.434371 systemd[1]: var-lib-kubelet-pods-cd1d19fc\x2d16e8\x2d4cb6\x2d86b8\x2d8997986e1264-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 4 17:32:19.434456 systemd[1]: var-lib-kubelet-pods-cd1d19fc\x2d16e8\x2d4cb6\x2d86b8\x2d8997986e1264-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 4 17:32:19.639219 kubelet[2543]: E0904 17:32:19.639173 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:32:19.641373 kubelet[2543]: I0904 17:32:19.641345 2543 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="020c5a8a-fe81-4412-b8af-e84894a8c192" path="/var/lib/kubelet/pods/020c5a8a-fe81-4412-b8af-e84894a8c192/volumes" Sep 4 17:32:19.642243 kubelet[2543]: I0904 17:32:19.642211 2543 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="cd1d19fc-16e8-4cb6-86b8-8997986e1264" path="/var/lib/kubelet/pods/cd1d19fc-16e8-4cb6-86b8-8997986e1264/volumes" Sep 4 17:32:20.401809 sshd[4190]: pam_unix(sshd:session): session closed for user core Sep 4 17:32:20.413705 systemd[1]: sshd@23-10.0.0.161:22-10.0.0.1:51760.service: Deactivated successfully. Sep 4 17:32:20.415823 systemd[1]: session-24.scope: Deactivated successfully. Sep 4 17:32:20.417507 systemd-logind[1428]: Session 24 logged out. Waiting for processes to exit. Sep 4 17:32:20.426159 systemd[1]: Started sshd@24-10.0.0.161:22-10.0.0.1:51774.service - OpenSSH per-connection server daemon (10.0.0.1:51774). Sep 4 17:32:20.427069 systemd-logind[1428]: Removed session 24. 
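The `var-lib-kubelet-pods-…\x2d…` mount units in the entries above use systemd's unit-name escaping: `/` becomes `-`, and reserved bytes such as literal dashes and `~` become `\xNN` (`\x2d`, `\x7e`). A minimal sketch of the reverse transformation, covering only the `\xNN` escapes that appear in these entries (the helper name is ours, not a systemd API):

```python
import re

def systemd_unescape(unit: str) -> str:
    """Reverse systemd unit-name escaping: '-' -> '/', then '\\xNN' -> byte."""
    # Dashes in the escaped form are always path separators; literal dashes
    # arrive as \x2d, which contains no '-', so this replacement is safe first.
    path = unit.replace("-", "/")
    return "/" + re.sub(r"\\x([0-9a-fA-F]{2})",
                        lambda m: chr(int(m.group(1), 16)), path)

print(systemd_unescape(
    r"var-lib-kubelet-pods-cd1d19fc\x2d16e8\x2d4cb6\x2d86b8\x2d8997986e1264"
    r"-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets"))
# -> /var/lib/kubelet/pods/cd1d19fc-16e8-4cb6-86b8-8997986e1264/volumes/kubernetes.io~secret/clustermesh-secrets
```

Decoded this way, the four deactivated mounts above resolve to the projected, secret, and TLS volume directories under `/var/lib/kubelet/pods/<podUID>/volumes/`, matching the "Cleaned up orphaned pod volumes dir" paths logged shortly after.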
Sep 4 17:32:20.461967 sshd[4350]: Accepted publickey for core from 10.0.0.1 port 51774 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:32:20.463472 sshd[4350]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:32:20.467451 systemd-logind[1428]: New session 25 of user core. Sep 4 17:32:20.476904 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 4 17:32:21.341311 sshd[4350]: pam_unix(sshd:session): session closed for user core Sep 4 17:32:21.348545 systemd[1]: sshd@24-10.0.0.161:22-10.0.0.1:51774.service: Deactivated successfully. Sep 4 17:32:21.350329 systemd[1]: session-25.scope: Deactivated successfully. Sep 4 17:32:21.352031 systemd-logind[1428]: Session 25 logged out. Waiting for processes to exit. Sep 4 17:32:21.360007 systemd[1]: Started sshd@25-10.0.0.161:22-10.0.0.1:51776.service - OpenSSH per-connection server daemon (10.0.0.1:51776). Sep 4 17:32:21.361164 systemd-logind[1428]: Removed session 25. Sep 4 17:32:21.394316 sshd[4363]: Accepted publickey for core from 10.0.0.1 port 51776 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:32:21.395835 sshd[4363]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:32:21.399803 systemd-logind[1428]: New session 26 of user core. Sep 4 17:32:21.415898 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 4 17:32:21.467338 sshd[4363]: pam_unix(sshd:session): session closed for user core Sep 4 17:32:21.482797 systemd[1]: sshd@25-10.0.0.161:22-10.0.0.1:51776.service: Deactivated successfully. Sep 4 17:32:21.484986 systemd[1]: session-26.scope: Deactivated successfully. Sep 4 17:32:21.486682 systemd-logind[1428]: Session 26 logged out. Waiting for processes to exit. Sep 4 17:32:21.497029 systemd[1]: Started sshd@26-10.0.0.161:22-10.0.0.1:51782.service - OpenSSH per-connection server daemon (10.0.0.1:51782). Sep 4 17:32:21.497915 systemd-logind[1428]: Removed session 26. 
Sep 4 17:32:21.530747 sshd[4371]: Accepted publickey for core from 10.0.0.1 port 51782 ssh2: RSA SHA256:F28rWYKmlRLaaLngTatJxElJeb4TR248U8nI6dv5iIw Sep 4 17:32:21.532418 sshd[4371]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:32:21.536647 systemd-logind[1428]: New session 27 of user core. Sep 4 17:32:21.543898 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 4 17:32:21.639463 kubelet[2543]: E0904 17:32:21.638632 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:32:21.689682 kubelet[2543]: E0904 17:32:21.689641 2543 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 4 17:32:21.791454 kubelet[2543]: I0904 17:32:21.791400 2543 topology_manager.go:215] "Topology Admit Handler" podUID="0c491e7b-9094-4557-849d-ad75251fe790" podNamespace="kube-system" podName="cilium-tl6b9" Sep 4 17:32:21.791602 kubelet[2543]: E0904 17:32:21.791485 2543 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cd1d19fc-16e8-4cb6-86b8-8997986e1264" containerName="cilium-agent" Sep 4 17:32:21.791602 kubelet[2543]: E0904 17:32:21.791496 2543 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cd1d19fc-16e8-4cb6-86b8-8997986e1264" containerName="mount-cgroup" Sep 4 17:32:21.791602 kubelet[2543]: E0904 17:32:21.791503 2543 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="020c5a8a-fe81-4412-b8af-e84894a8c192" containerName="cilium-operator" Sep 4 17:32:21.791602 kubelet[2543]: E0904 17:32:21.791511 2543 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cd1d19fc-16e8-4cb6-86b8-8997986e1264" containerName="apply-sysctl-overwrites" Sep 4 17:32:21.791602 kubelet[2543]: E0904 17:32:21.791518 2543 cpu_manager.go:395] 
"RemoveStaleState: removing container" podUID="cd1d19fc-16e8-4cb6-86b8-8997986e1264" containerName="mount-bpf-fs" Sep 4 17:32:21.791602 kubelet[2543]: E0904 17:32:21.791525 2543 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cd1d19fc-16e8-4cb6-86b8-8997986e1264" containerName="clean-cilium-state" Sep 4 17:32:21.791602 kubelet[2543]: I0904 17:32:21.791546 2543 memory_manager.go:354] "RemoveStaleState removing state" podUID="020c5a8a-fe81-4412-b8af-e84894a8c192" containerName="cilium-operator" Sep 4 17:32:21.791602 kubelet[2543]: I0904 17:32:21.791553 2543 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd1d19fc-16e8-4cb6-86b8-8997986e1264" containerName="cilium-agent" Sep 4 17:32:21.801869 systemd[1]: Created slice kubepods-burstable-pod0c491e7b_9094_4557_849d_ad75251fe790.slice - libcontainer container kubepods-burstable-pod0c491e7b_9094_4557_849d_ad75251fe790.slice. Sep 4 17:32:21.971945 kubelet[2543]: I0904 17:32:21.971802 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0c491e7b-9094-4557-849d-ad75251fe790-cilium-cgroup\") pod \"cilium-tl6b9\" (UID: \"0c491e7b-9094-4557-849d-ad75251fe790\") " pod="kube-system/cilium-tl6b9" Sep 4 17:32:21.971945 kubelet[2543]: I0904 17:32:21.971847 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtdvf\" (UniqueName: \"kubernetes.io/projected/0c491e7b-9094-4557-849d-ad75251fe790-kube-api-access-qtdvf\") pod \"cilium-tl6b9\" (UID: \"0c491e7b-9094-4557-849d-ad75251fe790\") " pod="kube-system/cilium-tl6b9" Sep 4 17:32:21.971945 kubelet[2543]: I0904 17:32:21.971870 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0c491e7b-9094-4557-849d-ad75251fe790-cilium-run\") pod \"cilium-tl6b9\" (UID: 
\"0c491e7b-9094-4557-849d-ad75251fe790\") " pod="kube-system/cilium-tl6b9" Sep 4 17:32:21.972120 kubelet[2543]: I0904 17:32:21.971954 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0c491e7b-9094-4557-849d-ad75251fe790-bpf-maps\") pod \"cilium-tl6b9\" (UID: \"0c491e7b-9094-4557-849d-ad75251fe790\") " pod="kube-system/cilium-tl6b9" Sep 4 17:32:21.972120 kubelet[2543]: I0904 17:32:21.972035 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0c491e7b-9094-4557-849d-ad75251fe790-cni-path\") pod \"cilium-tl6b9\" (UID: \"0c491e7b-9094-4557-849d-ad75251fe790\") " pod="kube-system/cilium-tl6b9" Sep 4 17:32:21.972120 kubelet[2543]: I0904 17:32:21.972076 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0c491e7b-9094-4557-849d-ad75251fe790-hostproc\") pod \"cilium-tl6b9\" (UID: \"0c491e7b-9094-4557-849d-ad75251fe790\") " pod="kube-system/cilium-tl6b9" Sep 4 17:32:21.972120 kubelet[2543]: I0904 17:32:21.972102 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0c491e7b-9094-4557-849d-ad75251fe790-etc-cni-netd\") pod \"cilium-tl6b9\" (UID: \"0c491e7b-9094-4557-849d-ad75251fe790\") " pod="kube-system/cilium-tl6b9" Sep 4 17:32:21.972120 kubelet[2543]: I0904 17:32:21.972123 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0c491e7b-9094-4557-849d-ad75251fe790-clustermesh-secrets\") pod \"cilium-tl6b9\" (UID: \"0c491e7b-9094-4557-849d-ad75251fe790\") " pod="kube-system/cilium-tl6b9" Sep 4 17:32:21.972255 kubelet[2543]: I0904 17:32:21.972143 2543 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0c491e7b-9094-4557-849d-ad75251fe790-cilium-ipsec-secrets\") pod \"cilium-tl6b9\" (UID: \"0c491e7b-9094-4557-849d-ad75251fe790\") " pod="kube-system/cilium-tl6b9" Sep 4 17:32:21.972255 kubelet[2543]: I0904 17:32:21.972177 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c491e7b-9094-4557-849d-ad75251fe790-xtables-lock\") pod \"cilium-tl6b9\" (UID: \"0c491e7b-9094-4557-849d-ad75251fe790\") " pod="kube-system/cilium-tl6b9" Sep 4 17:32:21.972255 kubelet[2543]: I0904 17:32:21.972205 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0c491e7b-9094-4557-849d-ad75251fe790-hubble-tls\") pod \"cilium-tl6b9\" (UID: \"0c491e7b-9094-4557-849d-ad75251fe790\") " pod="kube-system/cilium-tl6b9" Sep 4 17:32:21.972255 kubelet[2543]: I0904 17:32:21.972236 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c491e7b-9094-4557-849d-ad75251fe790-lib-modules\") pod \"cilium-tl6b9\" (UID: \"0c491e7b-9094-4557-849d-ad75251fe790\") " pod="kube-system/cilium-tl6b9" Sep 4 17:32:21.972414 kubelet[2543]: I0904 17:32:21.972274 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0c491e7b-9094-4557-849d-ad75251fe790-host-proc-sys-net\") pod \"cilium-tl6b9\" (UID: \"0c491e7b-9094-4557-849d-ad75251fe790\") " pod="kube-system/cilium-tl6b9" Sep 4 17:32:21.972414 kubelet[2543]: I0904 17:32:21.972345 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/0c491e7b-9094-4557-849d-ad75251fe790-cilium-config-path\") pod \"cilium-tl6b9\" (UID: \"0c491e7b-9094-4557-849d-ad75251fe790\") " pod="kube-system/cilium-tl6b9" Sep 4 17:32:21.972414 kubelet[2543]: I0904 17:32:21.972393 2543 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0c491e7b-9094-4557-849d-ad75251fe790-host-proc-sys-kernel\") pod \"cilium-tl6b9\" (UID: \"0c491e7b-9094-4557-849d-ad75251fe790\") " pod="kube-system/cilium-tl6b9" Sep 4 17:32:22.407555 kubelet[2543]: E0904 17:32:22.407516 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:32:22.408217 containerd[1440]: time="2024-09-04T17:32:22.408163807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tl6b9,Uid:0c491e7b-9094-4557-849d-ad75251fe790,Namespace:kube-system,Attempt:0,}" Sep 4 17:32:22.624206 containerd[1440]: time="2024-09-04T17:32:22.624026096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:32:22.624975 containerd[1440]: time="2024-09-04T17:32:22.624852992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:32:22.624975 containerd[1440]: time="2024-09-04T17:32:22.624942452Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:32:22.624975 containerd[1440]: time="2024-09-04T17:32:22.624959315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:32:22.638750 kubelet[2543]: E0904 17:32:22.638709 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:32:22.645911 systemd[1]: Started cri-containerd-21ab69bd5d7734649498e3c58c19e9cb4e4760621b33e3d8ee7f1c0557e31282.scope - libcontainer container 21ab69bd5d7734649498e3c58c19e9cb4e4760621b33e3d8ee7f1c0557e31282. Sep 4 17:32:22.669984 containerd[1440]: time="2024-09-04T17:32:22.669870100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tl6b9,Uid:0c491e7b-9094-4557-849d-ad75251fe790,Namespace:kube-system,Attempt:0,} returns sandbox id \"21ab69bd5d7734649498e3c58c19e9cb4e4760621b33e3d8ee7f1c0557e31282\"" Sep 4 17:32:22.670818 kubelet[2543]: E0904 17:32:22.670762 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:32:22.673293 containerd[1440]: time="2024-09-04T17:32:22.673233754Z" level=info msg="CreateContainer within sandbox \"21ab69bd5d7734649498e3c58c19e9cb4e4760621b33e3d8ee7f1c0557e31282\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 4 17:32:22.831680 kubelet[2543]: I0904 17:32:22.831625 2543 setters.go:568] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-09-04T17:32:22Z","lastTransitionTime":"2024-09-04T17:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 4 17:32:22.888299 containerd[1440]: time="2024-09-04T17:32:22.888224431Z" level=info msg="CreateContainer within sandbox \"21ab69bd5d7734649498e3c58c19e9cb4e4760621b33e3d8ee7f1c0557e31282\" for 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9f55c0c28b8d3209b0115a9291d9ecef2abdbd6744de3b0cbdee5a6e477cfcf8\"" Sep 4 17:32:22.888890 containerd[1440]: time="2024-09-04T17:32:22.888703535Z" level=info msg="StartContainer for \"9f55c0c28b8d3209b0115a9291d9ecef2abdbd6744de3b0cbdee5a6e477cfcf8\"" Sep 4 17:32:22.916945 systemd[1]: Started cri-containerd-9f55c0c28b8d3209b0115a9291d9ecef2abdbd6744de3b0cbdee5a6e477cfcf8.scope - libcontainer container 9f55c0c28b8d3209b0115a9291d9ecef2abdbd6744de3b0cbdee5a6e477cfcf8. Sep 4 17:32:22.958405 systemd[1]: cri-containerd-9f55c0c28b8d3209b0115a9291d9ecef2abdbd6744de3b0cbdee5a6e477cfcf8.scope: Deactivated successfully. Sep 4 17:32:22.959837 containerd[1440]: time="2024-09-04T17:32:22.959735422Z" level=info msg="StartContainer for \"9f55c0c28b8d3209b0115a9291d9ecef2abdbd6744de3b0cbdee5a6e477cfcf8\" returns successfully" Sep 4 17:32:23.051941 containerd[1440]: time="2024-09-04T17:32:23.051869798Z" level=info msg="shim disconnected" id=9f55c0c28b8d3209b0115a9291d9ecef2abdbd6744de3b0cbdee5a6e477cfcf8 namespace=k8s.io Sep 4 17:32:23.051941 containerd[1440]: time="2024-09-04T17:32:23.051941765Z" level=warning msg="cleaning up after shim disconnected" id=9f55c0c28b8d3209b0115a9291d9ecef2abdbd6744de3b0cbdee5a6e477cfcf8 namespace=k8s.io Sep 4 17:32:23.052169 containerd[1440]: time="2024-09-04T17:32:23.051951354Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:32:23.826997 kubelet[2543]: E0904 17:32:23.826959 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:32:23.829255 containerd[1440]: time="2024-09-04T17:32:23.829203296Z" level=info msg="CreateContainer within sandbox \"21ab69bd5d7734649498e3c58c19e9cb4e4760621b33e3d8ee7f1c0557e31282\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 4 17:32:23.901321 containerd[1440]: 
time="2024-09-04T17:32:23.901251600Z" level=info msg="CreateContainer within sandbox \"21ab69bd5d7734649498e3c58c19e9cb4e4760621b33e3d8ee7f1c0557e31282\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fef3c6559d44d8675ae3c57b9808a7e5afc922bb6e8a531fe2ffeb7aa598c07f\"" Sep 4 17:32:23.901842 containerd[1440]: time="2024-09-04T17:32:23.901801518Z" level=info msg="StartContainer for \"fef3c6559d44d8675ae3c57b9808a7e5afc922bb6e8a531fe2ffeb7aa598c07f\"" Sep 4 17:32:23.933974 systemd[1]: Started cri-containerd-fef3c6559d44d8675ae3c57b9808a7e5afc922bb6e8a531fe2ffeb7aa598c07f.scope - libcontainer container fef3c6559d44d8675ae3c57b9808a7e5afc922bb6e8a531fe2ffeb7aa598c07f. Sep 4 17:32:23.966986 systemd[1]: cri-containerd-fef3c6559d44d8675ae3c57b9808a7e5afc922bb6e8a531fe2ffeb7aa598c07f.scope: Deactivated successfully. Sep 4 17:32:24.007833 containerd[1440]: time="2024-09-04T17:32:24.007787777Z" level=info msg="StartContainer for \"fef3c6559d44d8675ae3c57b9808a7e5afc922bb6e8a531fe2ffeb7aa598c07f\" returns successfully" Sep 4 17:32:24.078691 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fef3c6559d44d8675ae3c57b9808a7e5afc922bb6e8a531fe2ffeb7aa598c07f-rootfs.mount: Deactivated successfully. 
Sep 4 17:32:24.099128 containerd[1440]: time="2024-09-04T17:32:24.099056967Z" level=info msg="shim disconnected" id=fef3c6559d44d8675ae3c57b9808a7e5afc922bb6e8a531fe2ffeb7aa598c07f namespace=k8s.io Sep 4 17:32:24.099128 containerd[1440]: time="2024-09-04T17:32:24.099122172Z" level=warning msg="cleaning up after shim disconnected" id=fef3c6559d44d8675ae3c57b9808a7e5afc922bb6e8a531fe2ffeb7aa598c07f namespace=k8s.io Sep 4 17:32:24.099128 containerd[1440]: time="2024-09-04T17:32:24.099132982Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:32:24.830411 kubelet[2543]: E0904 17:32:24.830381 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:32:24.832531 containerd[1440]: time="2024-09-04T17:32:24.832499554Z" level=info msg="CreateContainer within sandbox \"21ab69bd5d7734649498e3c58c19e9cb4e4760621b33e3d8ee7f1c0557e31282\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 4 17:32:24.850025 containerd[1440]: time="2024-09-04T17:32:24.849980598Z" level=info msg="CreateContainer within sandbox \"21ab69bd5d7734649498e3c58c19e9cb4e4760621b33e3d8ee7f1c0557e31282\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3dd6573eb7112cfb35f0fcafbb7ffbb7a95b9c00f1c3ef1023b614f0321fe067\"" Sep 4 17:32:24.850518 containerd[1440]: time="2024-09-04T17:32:24.850474128Z" level=info msg="StartContainer for \"3dd6573eb7112cfb35f0fcafbb7ffbb7a95b9c00f1c3ef1023b614f0321fe067\"" Sep 4 17:32:24.881959 systemd[1]: Started cri-containerd-3dd6573eb7112cfb35f0fcafbb7ffbb7a95b9c00f1c3ef1023b614f0321fe067.scope - libcontainer container 3dd6573eb7112cfb35f0fcafbb7ffbb7a95b9c00f1c3ef1023b614f0321fe067. 
Sep 4 17:32:24.911480 containerd[1440]: time="2024-09-04T17:32:24.911416521Z" level=info msg="StartContainer for \"3dd6573eb7112cfb35f0fcafbb7ffbb7a95b9c00f1c3ef1023b614f0321fe067\" returns successfully" Sep 4 17:32:24.913004 systemd[1]: cri-containerd-3dd6573eb7112cfb35f0fcafbb7ffbb7a95b9c00f1c3ef1023b614f0321fe067.scope: Deactivated successfully. Sep 4 17:32:24.939417 containerd[1440]: time="2024-09-04T17:32:24.939320172Z" level=info msg="shim disconnected" id=3dd6573eb7112cfb35f0fcafbb7ffbb7a95b9c00f1c3ef1023b614f0321fe067 namespace=k8s.io Sep 4 17:32:24.939417 containerd[1440]: time="2024-09-04T17:32:24.939379405Z" level=warning msg="cleaning up after shim disconnected" id=3dd6573eb7112cfb35f0fcafbb7ffbb7a95b9c00f1c3ef1023b614f0321fe067 namespace=k8s.io Sep 4 17:32:24.939417 containerd[1440]: time="2024-09-04T17:32:24.939388052Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:32:25.078712 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3dd6573eb7112cfb35f0fcafbb7ffbb7a95b9c00f1c3ef1023b614f0321fe067-rootfs.mount: Deactivated successfully. Sep 4 17:32:25.833603 kubelet[2543]: E0904 17:32:25.833563 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:32:25.836141 containerd[1440]: time="2024-09-04T17:32:25.836094915Z" level=info msg="CreateContainer within sandbox \"21ab69bd5d7734649498e3c58c19e9cb4e4760621b33e3d8ee7f1c0557e31282\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 4 17:32:25.850213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2116452173.mount: Deactivated successfully. 
Sep 4 17:32:25.851871 containerd[1440]: time="2024-09-04T17:32:25.851803090Z" level=info msg="CreateContainer within sandbox \"21ab69bd5d7734649498e3c58c19e9cb4e4760621b33e3d8ee7f1c0557e31282\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"77067b7c548f878a6b37e486b2ce27cbef29739b2e25f768a4978bd7ba2915bb\"" Sep 4 17:32:25.852560 containerd[1440]: time="2024-09-04T17:32:25.852256593Z" level=info msg="StartContainer for \"77067b7c548f878a6b37e486b2ce27cbef29739b2e25f768a4978bd7ba2915bb\"" Sep 4 17:32:25.888960 systemd[1]: Started cri-containerd-77067b7c548f878a6b37e486b2ce27cbef29739b2e25f768a4978bd7ba2915bb.scope - libcontainer container 77067b7c548f878a6b37e486b2ce27cbef29739b2e25f768a4978bd7ba2915bb. Sep 4 17:32:25.912912 systemd[1]: cri-containerd-77067b7c548f878a6b37e486b2ce27cbef29739b2e25f768a4978bd7ba2915bb.scope: Deactivated successfully. Sep 4 17:32:25.914819 containerd[1440]: time="2024-09-04T17:32:25.914784094Z" level=info msg="StartContainer for \"77067b7c548f878a6b37e486b2ce27cbef29739b2e25f768a4978bd7ba2915bb\" returns successfully" Sep 4 17:32:25.938958 containerd[1440]: time="2024-09-04T17:32:25.938876868Z" level=info msg="shim disconnected" id=77067b7c548f878a6b37e486b2ce27cbef29739b2e25f768a4978bd7ba2915bb namespace=k8s.io Sep 4 17:32:25.938958 containerd[1440]: time="2024-09-04T17:32:25.938934206Z" level=warning msg="cleaning up after shim disconnected" id=77067b7c548f878a6b37e486b2ce27cbef29739b2e25f768a4978bd7ba2915bb namespace=k8s.io Sep 4 17:32:25.938958 containerd[1440]: time="2024-09-04T17:32:25.938943555Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:32:26.079150 systemd[1]: run-containerd-runc-k8s.io-77067b7c548f878a6b37e486b2ce27cbef29739b2e25f768a4978bd7ba2915bb-runc.8wIBHv.mount: Deactivated successfully. 
Sep 4 17:32:26.079296 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77067b7c548f878a6b37e486b2ce27cbef29739b2e25f768a4978bd7ba2915bb-rootfs.mount: Deactivated successfully. Sep 4 17:32:26.691066 kubelet[2543]: E0904 17:32:26.691027 2543 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 4 17:32:26.837902 kubelet[2543]: E0904 17:32:26.837867 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:32:26.840242 containerd[1440]: time="2024-09-04T17:32:26.840192154Z" level=info msg="CreateContainer within sandbox \"21ab69bd5d7734649498e3c58c19e9cb4e4760621b33e3d8ee7f1c0557e31282\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 4 17:32:26.855734 containerd[1440]: time="2024-09-04T17:32:26.855682324Z" level=info msg="CreateContainer within sandbox \"21ab69bd5d7734649498e3c58c19e9cb4e4760621b33e3d8ee7f1c0557e31282\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3b294c6183b4ff2474e75fd388082e628ae1f4dba0ba280265d719507818b10b\"" Sep 4 17:32:26.856252 containerd[1440]: time="2024-09-04T17:32:26.856217382Z" level=info msg="StartContainer for \"3b294c6183b4ff2474e75fd388082e628ae1f4dba0ba280265d719507818b10b\"" Sep 4 17:32:26.889903 systemd[1]: Started cri-containerd-3b294c6183b4ff2474e75fd388082e628ae1f4dba0ba280265d719507818b10b.scope - libcontainer container 3b294c6183b4ff2474e75fd388082e628ae1f4dba0ba280265d719507818b10b. 
Sep 4 17:32:26.920896 containerd[1440]: time="2024-09-04T17:32:26.920851782Z" level=info msg="StartContainer for \"3b294c6183b4ff2474e75fd388082e628ae1f4dba0ba280265d719507818b10b\" returns successfully" Sep 4 17:32:27.326847 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Sep 4 17:32:27.842096 kubelet[2543]: E0904 17:32:27.842051 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:32:27.852552 kubelet[2543]: I0904 17:32:27.852514 2543 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-tl6b9" podStartSLOduration=6.852472621 podStartE2EDuration="6.852472621s" podCreationTimestamp="2024-09-04 17:32:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:32:27.85213064 +0000 UTC m=+86.308903603" watchObservedRunningTime="2024-09-04 17:32:27.852472621 +0000 UTC m=+86.309245594" Sep 4 17:32:28.372090 systemd[1]: run-containerd-runc-k8s.io-3b294c6183b4ff2474e75fd388082e628ae1f4dba0ba280265d719507818b10b-runc.P8MmU5.mount: Deactivated successfully. 
Sep 4 17:32:28.843917 kubelet[2543]: E0904 17:32:28.843873 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:32:29.845815 kubelet[2543]: E0904 17:32:29.845750 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:32:30.357217 systemd-networkd[1374]: lxc_health: Link UP Sep 4 17:32:30.368266 systemd-networkd[1374]: lxc_health: Gained carrier Sep 4 17:32:30.848357 kubelet[2543]: E0904 17:32:30.848312 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:32:31.742110 systemd-networkd[1374]: lxc_health: Gained IPv6LL Sep 4 17:32:31.848922 kubelet[2543]: E0904 17:32:31.848879 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:32:32.641791 kubelet[2543]: E0904 17:32:32.641731 2543 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:53460->127.0.0.1:40899: write tcp 127.0.0.1:53460->127.0.0.1:40899: write: broken pipe Sep 4 17:32:36.785874 systemd[1]: run-containerd-runc-k8s.io-3b294c6183b4ff2474e75fd388082e628ae1f4dba0ba280265d719507818b10b-runc.T6h1EI.mount: Deactivated successfully. Sep 4 17:32:36.832520 sshd[4371]: pam_unix(sshd:session): session closed for user core Sep 4 17:32:36.837180 systemd[1]: sshd@26-10.0.0.161:22-10.0.0.1:51782.service: Deactivated successfully. Sep 4 17:32:36.839478 systemd[1]: session-27.scope: Deactivated successfully. Sep 4 17:32:36.840280 systemd-logind[1428]: Session 27 logged out. Waiting for processes to exit. Sep 4 17:32:36.841315 systemd-logind[1428]: Removed session 27.