Sep 4 17:28:33.937911 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 4 15:54:07 -00 2024 Sep 4 17:28:33.937933 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d Sep 4 17:28:33.937944 kernel: BIOS-provided physical RAM map: Sep 4 17:28:33.937951 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Sep 4 17:28:33.937957 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Sep 4 17:28:33.937963 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Sep 4 17:28:33.937970 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdcfff] usable Sep 4 17:28:33.937977 kernel: BIOS-e820: [mem 0x000000009cfdd000-0x000000009cffffff] reserved Sep 4 17:28:33.937983 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 4 17:28:33.937992 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Sep 4 17:28:33.937998 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Sep 4 17:28:33.938004 kernel: NX (Execute Disable) protection: active Sep 4 17:28:33.938011 kernel: APIC: Static calls initialized Sep 4 17:28:33.938017 kernel: SMBIOS 2.8 present. Sep 4 17:28:33.938025 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Sep 4 17:28:33.938034 kernel: Hypervisor detected: KVM Sep 4 17:28:33.938041 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 4 17:28:33.938052 kernel: kvm-clock: using sched offset of 2901831900 cycles Sep 4 17:28:33.938060 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 4 17:28:33.938067 kernel: tsc: Detected 2794.748 MHz processor Sep 4 17:28:33.938077 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 4 17:28:33.938084 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 4 17:28:33.938091 kernel: last_pfn = 0x9cfdd max_arch_pfn = 0x400000000 Sep 4 17:28:33.938099 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Sep 4 17:28:33.938109 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 4 17:28:33.938116 kernel: Using GB pages for direct mapping Sep 4 17:28:33.938123 kernel: ACPI: Early table checksum verification disabled Sep 4 17:28:33.938129 kernel: ACPI: RSDP 0x00000000000F59C0 000014 (v00 BOCHS ) Sep 4 17:28:33.938136 kernel: ACPI: RSDT 0x000000009CFE1BDD 000034 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:28:33.938144 kernel: ACPI: FACP 0x000000009CFE1A79 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:28:33.938150 kernel: ACPI: DSDT 0x000000009CFE0040 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:28:33.938157 kernel: ACPI: FACS 0x000000009CFE0000 000040 Sep 4 17:28:33.938164 kernel: ACPI: APIC 0x000000009CFE1AED 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:28:33.938174 kernel: ACPI: HPET 0x000000009CFE1B7D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:28:33.938181 kernel: ACPI: WAET 0x000000009CFE1BB5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:28:33.938188 kernel: ACPI: Reserving FACP 
table memory at [mem 0x9cfe1a79-0x9cfe1aec] Sep 4 17:28:33.938195 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe1a78] Sep 4 17:28:33.938202 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Sep 4 17:28:33.938209 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe1aed-0x9cfe1b7c] Sep 4 17:28:33.938216 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe1b7d-0x9cfe1bb4] Sep 4 17:28:33.938229 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe1bb5-0x9cfe1bdc] Sep 4 17:28:33.938236 kernel: No NUMA configuration found Sep 4 17:28:33.938243 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdcfff] Sep 4 17:28:33.938250 kernel: NODE_DATA(0) allocated [mem 0x9cfd7000-0x9cfdcfff] Sep 4 17:28:33.938257 kernel: Zone ranges: Sep 4 17:28:33.938265 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 4 17:28:33.938272 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdcfff] Sep 4 17:28:33.938282 kernel: Normal empty Sep 4 17:28:33.938289 kernel: Movable zone start for each node Sep 4 17:28:33.938296 kernel: Early memory node ranges Sep 4 17:28:33.938303 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Sep 4 17:28:33.938310 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdcfff] Sep 4 17:28:33.938326 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdcfff] Sep 4 17:28:33.938333 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 4 17:28:33.938340 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Sep 4 17:28:33.938347 kernel: On node 0, zone DMA32: 12323 pages in unavailable ranges Sep 4 17:28:33.938355 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 4 17:28:33.938365 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 4 17:28:33.938377 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 4 17:28:33.938388 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 4 17:28:33.938401 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 4 17:28:33.938412 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 4 17:28:33.938426 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 4 17:28:33.938439 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 4 17:28:33.938450 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 4 17:28:33.938467 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 4 17:28:33.938478 kernel: TSC deadline timer available Sep 4 17:28:33.938488 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Sep 4 17:28:33.938496 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 4 17:28:33.938506 kernel: kvm-guest: KVM setup pv remote TLB flush Sep 4 17:28:33.938514 kernel: kvm-guest: setup PV sched yield Sep 4 17:28:33.938521 kernel: [mem 0x9d000000-0xfeffbfff] available for PCI devices Sep 4 17:28:33.938528 kernel: Booting paravirtualized kernel on KVM Sep 4 17:28:33.938536 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 4 17:28:33.938543 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Sep 4 17:28:33.938558 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u524288 Sep 4 17:28:33.938574 kernel: pcpu-alloc: s196904 r8192 d32472 u524288 alloc=1*2097152 Sep 4 17:28:33.938600 kernel: pcpu-alloc: [0] 0 1 2 3 Sep 4 17:28:33.938611 kernel: kvm-guest: PV spinlocks enabled Sep 4 17:28:33.938625 kernel: PV qspinlock hash table 
entries: 256 (order: 0, 4096 bytes, linear) Sep 4 17:28:33.938640 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d Sep 4 17:28:33.938655 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 4 17:28:33.938665 kernel: random: crng init done Sep 4 17:28:33.938676 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 4 17:28:33.938683 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 4 17:28:33.938690 kernel: Fallback order for Node 0: 0 Sep 4 17:28:33.938697 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632733 Sep 4 17:28:33.938705 kernel: Policy zone: DMA32 Sep 4 17:28:33.938712 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 4 17:28:33.938720 kernel: Memory: 2434596K/2571756K available (12288K kernel code, 2304K rwdata, 22708K rodata, 42704K init, 2488K bss, 136900K reserved, 0K cma-reserved) Sep 4 17:28:33.938727 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 4 17:28:33.938734 kernel: ftrace: allocating 37748 entries in 148 pages Sep 4 17:28:33.938744 kernel: ftrace: allocated 148 pages with 3 groups Sep 4 17:28:33.938751 kernel: Dynamic Preempt: voluntary Sep 4 17:28:33.938758 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 4 17:28:33.938766 kernel: rcu: RCU event tracing is enabled. Sep 4 17:28:33.938774 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 4 17:28:33.938781 kernel: Trampoline variant of Tasks RCU enabled. Sep 4 17:28:33.938789 kernel: Rude variant of Tasks RCU enabled. Sep 4 17:28:33.938796 kernel: Tracing variant of Tasks RCU enabled. Sep 4 17:28:33.938803 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 4 17:28:33.938813 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 4 17:28:33.938820 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Sep 4 17:28:33.938828 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 4 17:28:33.938835 kernel: Console: colour VGA+ 80x25 Sep 4 17:28:33.938842 kernel: printk: console [ttyS0] enabled Sep 4 17:28:33.938849 kernel: ACPI: Core revision 20230628 Sep 4 17:28:33.938857 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 4 17:28:33.938864 kernel: APIC: Switch to symmetric I/O mode setup Sep 4 17:28:33.938871 kernel: x2apic enabled Sep 4 17:28:33.938881 kernel: APIC: Switched APIC routing to: physical x2apic Sep 4 17:28:33.938888 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Sep 4 17:28:33.938899 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Sep 4 17:28:33.938913 kernel: kvm-guest: setup PV IPIs Sep 4 17:28:33.938924 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 4 17:28:33.938934 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Sep 4 17:28:33.938942 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Sep 4 17:28:33.938949 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 4 17:28:33.938990 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Sep 4 17:28:33.938998 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Sep 4 17:28:33.939005 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 4 17:28:33.939013 kernel: Spectre V2 : Mitigation: Retpolines Sep 4 17:28:33.939023 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Sep 4 17:28:33.939031 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Sep 4 17:28:33.939038 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Sep 4 17:28:33.939046 kernel: RETBleed: Mitigation: untrained return thunk Sep 4 17:28:33.939053 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 4 17:28:33.939064 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 4 17:28:33.939071 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Sep 4 17:28:33.939082 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Sep 4 17:28:33.939091 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Sep 4 17:28:33.939102 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 4 17:28:33.939112 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 4 17:28:33.939123 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 4 17:28:33.939133 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 4 17:28:33.939146 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Sep 4 17:28:33.939153 kernel: Freeing SMP alternatives memory: 32K Sep 4 17:28:33.939161 kernel: pid_max: default: 32768 minimum: 301 Sep 4 17:28:33.939168 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 4 17:28:33.939176 kernel: landlock: Up and running. Sep 4 17:28:33.939184 kernel: SELinux: Initializing. Sep 4 17:28:33.939191 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 4 17:28:33.939199 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 4 17:28:33.939207 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Sep 4 17:28:33.939217 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:28:33.939225 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:28:33.939233 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:28:33.939241 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Sep 4 17:28:33.939248 kernel: ... version: 0 Sep 4 17:28:33.939256 kernel: ... bit width: 48 Sep 4 17:28:33.939263 kernel: ... generic registers: 6 Sep 4 17:28:33.939271 kernel: ... value mask: 0000ffffffffffff Sep 4 17:28:33.939278 kernel: ... max period: 00007fffffffffff Sep 4 17:28:33.939288 kernel: ... fixed-purpose events: 0 Sep 4 17:28:33.939296 kernel: ... event mask: 000000000000003f Sep 4 17:28:33.939303 kernel: signal: max sigframe size: 1776 Sep 4 17:28:33.939311 kernel: rcu: Hierarchical SRCU implementation. Sep 4 17:28:33.939326 kernel: rcu: Max phase no-delay instances is 400. 
Sep 4 17:28:33.939334 kernel: smp: Bringing up secondary CPUs ... Sep 4 17:28:33.939341 kernel: smpboot: x86: Booting SMP configuration: Sep 4 17:28:33.939351 kernel: .... node #0, CPUs: #1 #2 #3 Sep 4 17:28:33.939359 kernel: smp: Brought up 1 node, 4 CPUs Sep 4 17:28:33.939370 kernel: smpboot: Max logical packages: 1 Sep 4 17:28:33.939378 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Sep 4 17:28:33.939386 kernel: devtmpfs: initialized Sep 4 17:28:33.939393 kernel: x86/mm: Memory block size: 128MB Sep 4 17:28:33.939401 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 4 17:28:33.939409 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 4 17:28:33.939416 kernel: pinctrl core: initialized pinctrl subsystem Sep 4 17:28:33.939424 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 4 17:28:33.939432 kernel: audit: initializing netlink subsys (disabled) Sep 4 17:28:33.939442 kernel: audit: type=2000 audit(1725470913.330:1): state=initialized audit_enabled=0 res=1 Sep 4 17:28:33.939449 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 4 17:28:33.939457 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 4 17:28:33.939464 kernel: cpuidle: using governor menu Sep 4 17:28:33.939472 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 4 17:28:33.939479 kernel: dca service started, version 1.12.1 Sep 4 17:28:33.939487 kernel: PCI: Using configuration type 1 for base access Sep 4 17:28:33.939495 kernel: PCI: Using configuration type 1 for extended access Sep 4 17:28:33.939502 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Sep 4 17:28:33.939525 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 4 17:28:33.939534 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 4 17:28:33.939541 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 4 17:28:33.939549 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 4 17:28:33.939557 kernel: ACPI: Added _OSI(Module Device) Sep 4 17:28:33.939564 kernel: ACPI: Added _OSI(Processor Device) Sep 4 17:28:33.939572 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Sep 4 17:28:33.939579 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 4 17:28:33.939600 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 4 17:28:33.939612 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Sep 4 17:28:33.939620 kernel: ACPI: Interpreter enabled Sep 4 17:28:33.939627 kernel: ACPI: PM: (supports S0 S3 S5) Sep 4 17:28:33.939637 kernel: ACPI: Using IOAPIC for interrupt routing Sep 4 17:28:33.939645 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 4 17:28:33.939653 kernel: PCI: Using E820 reservations for host bridge windows Sep 4 17:28:33.939660 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Sep 4 17:28:33.939668 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 4 17:28:33.939886 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 4 17:28:33.939903 kernel: acpiphp: Slot [3] registered Sep 4 17:28:33.939910 kernel: acpiphp: Slot [4] registered Sep 4 17:28:33.939918 kernel: acpiphp: Slot [5] registered Sep 4 17:28:33.939925 kernel: acpiphp: Slot [6] registered Sep 4 17:28:33.939933 kernel: acpiphp: Slot [7] registered Sep 4 17:28:33.939940 kernel: acpiphp: Slot [8] 
registered Sep 4 17:28:33.939948 kernel: acpiphp: Slot [9] registered Sep 4 17:28:33.939955 kernel: acpiphp: Slot [10] registered Sep 4 17:28:33.939965 kernel: acpiphp: Slot [11] registered Sep 4 17:28:33.939975 kernel: acpiphp: Slot [12] registered Sep 4 17:28:33.939986 kernel: acpiphp: Slot [13] registered Sep 4 17:28:33.939997 kernel: acpiphp: Slot [14] registered Sep 4 17:28:33.940007 kernel: acpiphp: Slot [15] registered Sep 4 17:28:33.940018 kernel: acpiphp: Slot [16] registered Sep 4 17:28:33.940025 kernel: acpiphp: Slot [17] registered Sep 4 17:28:33.940033 kernel: acpiphp: Slot [18] registered Sep 4 17:28:33.940040 kernel: acpiphp: Slot [19] registered Sep 4 17:28:33.940047 kernel: acpiphp: Slot [20] registered Sep 4 17:28:33.940058 kernel: acpiphp: Slot [21] registered Sep 4 17:28:33.940066 kernel: acpiphp: Slot [22] registered Sep 4 17:28:33.940073 kernel: acpiphp: Slot [23] registered Sep 4 17:28:33.940080 kernel: acpiphp: Slot [24] registered Sep 4 17:28:33.940088 kernel: acpiphp: Slot [25] registered Sep 4 17:28:33.940095 kernel: acpiphp: Slot [26] registered Sep 4 17:28:33.940103 kernel: acpiphp: Slot [27] registered Sep 4 17:28:33.940121 kernel: acpiphp: Slot [28] registered Sep 4 17:28:33.940139 kernel: acpiphp: Slot [29] registered Sep 4 17:28:33.940168 kernel: acpiphp: Slot [30] registered Sep 4 17:28:33.940191 kernel: acpiphp: Slot [31] registered Sep 4 17:28:33.940202 kernel: PCI host bridge to bus 0000:00 Sep 4 17:28:33.940394 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 4 17:28:33.940521 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 4 17:28:33.940663 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 4 17:28:33.940784 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window] Sep 4 17:28:33.940901 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Sep 4 17:28:33.941024 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 4 17:28:33.941254 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Sep 4 17:28:33.941415 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Sep 4 17:28:33.941562 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Sep 4 17:28:33.941710 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf] Sep 4 17:28:33.941841 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Sep 4 17:28:33.941974 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Sep 4 17:28:33.942102 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Sep 4 17:28:33.942245 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Sep 4 17:28:33.942405 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Sep 4 17:28:33.942536 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Sep 4 17:28:33.942692 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Sep 4 17:28:33.942835 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000 Sep 4 17:28:33.942970 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Sep 4 17:28:33.943099 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Sep 4 17:28:33.943233 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Sep 4 17:28:33.943388 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 4 17:28:33.943534 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00 Sep 4 17:28:33.943688 kernel: 
pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc09f] Sep 4 17:28:33.943823 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Sep 4 17:28:33.943953 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Sep 4 17:28:33.944096 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Sep 4 17:28:33.944226 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Sep 4 17:28:33.944389 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Sep 4 17:28:33.944535 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Sep 4 17:28:33.944708 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000 Sep 4 17:28:33.944850 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0a0-0xc0bf] Sep 4 17:28:33.944980 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Sep 4 17:28:33.945116 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Sep 4 17:28:33.945248 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Sep 4 17:28:33.945259 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 4 17:28:33.945267 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 4 17:28:33.945275 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 4 17:28:33.945282 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 4 17:28:33.945290 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Sep 4 17:28:33.945302 kernel: iommu: Default domain type: Translated Sep 4 17:28:33.945309 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 4 17:28:33.945317 kernel: PCI: Using ACPI for IRQ routing Sep 4 17:28:33.945333 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 4 17:28:33.945341 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Sep 4 17:28:33.945348 kernel: e820: reserve RAM buffer [mem 0x9cfdd000-0x9fffffff] Sep 4 17:28:33.945489 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Sep 4 17:28:33.945698 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Sep 4 17:28:33.945833 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 4 17:28:33.945844 kernel: vgaarb: loaded Sep 4 17:28:33.945852 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 4 17:28:33.945860 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Sep 4 17:28:33.945867 kernel: clocksource: Switched to clocksource kvm-clock Sep 4 17:28:33.945876 kernel: VFS: Disk quotas dquot_6.6.0 Sep 4 17:28:33.945884 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 4 17:28:33.945891 kernel: pnp: PnP ACPI init Sep 4 17:28:33.946053 kernel: pnp 00:02: [dma 2] Sep 4 17:28:33.946069 kernel: pnp: PnP ACPI: found 6 devices Sep 4 17:28:33.946077 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 4 17:28:33.946085 kernel: NET: Registered PF_INET protocol family Sep 4 17:28:33.946093 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 4 17:28:33.946101 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 4 17:28:33.946109 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 4 17:28:33.946117 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 4 17:28:33.946124 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 4 17:28:33.946135 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 4 17:28:33.946143 kernel: 
UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 4 17:28:33.946151 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 4 17:28:33.946158 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 4 17:28:33.946166 kernel: NET: Registered PF_XDP protocol family Sep 4 17:28:33.946284 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 4 17:28:33.946410 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 4 17:28:33.946527 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 4 17:28:33.946658 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window] Sep 4 17:28:33.946781 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Sep 4 17:28:33.946910 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Sep 4 17:28:33.947038 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Sep 4 17:28:33.947050 kernel: PCI: CLS 0 bytes, default 64 Sep 4 17:28:33.947058 kernel: Initialise system trusted keyrings Sep 4 17:28:33.947065 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 4 17:28:33.947073 kernel: Key type asymmetric registered Sep 4 17:28:33.947081 kernel: Asymmetric key parser 'x509' registered Sep 4 17:28:33.947092 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 4 17:28:33.947100 kernel: io scheduler mq-deadline registered Sep 4 17:28:33.947107 kernel: io scheduler kyber registered Sep 4 17:28:33.947115 kernel: io scheduler bfq registered Sep 4 17:28:33.947122 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 4 17:28:33.947131 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Sep 4 17:28:33.947138 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Sep 4 17:28:33.947146 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Sep 4 17:28:33.947154 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 4 17:28:33.947164 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 4 17:28:33.947172 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 4 17:28:33.947180 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 4 17:28:33.947187 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 4 17:28:33.947337 kernel: rtc_cmos 00:05: RTC can wake from S4 Sep 4 17:28:33.947350 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 4 17:28:33.947469 kernel: rtc_cmos 00:05: registered as rtc0 Sep 4 17:28:33.947678 kernel: rtc_cmos 00:05: setting system clock to 2024-09-04T17:28:33 UTC (1725470913) Sep 4 17:28:33.947816 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Sep 4 17:28:33.947827 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 4 17:28:33.947835 kernel: NET: Registered PF_INET6 protocol family Sep 4 17:28:33.947843 kernel: Segment Routing with IPv6 Sep 4 17:28:33.947851 kernel: In-situ OAM (IOAM) with IPv6 Sep 4 17:28:33.947859 kernel: NET: Registered PF_PACKET protocol family Sep 4 17:28:33.947866 kernel: Key type dns_resolver registered Sep 4 17:28:33.947874 kernel: IPI shorthand broadcast: enabled Sep 4 17:28:33.947882 kernel: sched_clock: Marking stable (836003487, 105226990)->(964490184, -23259707) Sep 4 17:28:33.947893 kernel: registered taskstats version 1 Sep 4 17:28:33.947901 kernel: Loading compiled-in X.509 certificates Sep 4 17:28:33.947909 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: 
8669771ab5e11f458b79e6634fe685dacc266b18' Sep 4 17:28:33.947917 kernel: Key type .fscrypt registered Sep 4 17:28:33.947925 kernel: Key type fscrypt-provisioning registered Sep 4 17:28:33.947934 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 4 17:28:33.947941 kernel: ima: Allocated hash algorithm: sha1 Sep 4 17:28:33.947949 kernel: ima: No architecture policies found Sep 4 17:28:33.947959 kernel: clk: Disabling unused clocks Sep 4 17:28:33.947967 kernel: Freeing unused kernel image (initmem) memory: 42704K Sep 4 17:28:33.947974 kernel: Write protecting the kernel read-only data: 36864k Sep 4 17:28:33.947982 kernel: Freeing unused kernel image (rodata/data gap) memory: 1868K Sep 4 17:28:33.947992 kernel: Run /init as init process Sep 4 17:28:33.948000 kernel: with arguments: Sep 4 17:28:33.948010 kernel: /init Sep 4 17:28:33.948018 kernel: with environment: Sep 4 17:28:33.948044 kernel: HOME=/ Sep 4 17:28:33.948054 kernel: TERM=linux Sep 4 17:28:33.948065 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 4 17:28:33.948075 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 4 17:28:33.948086 systemd[1]: Detected virtualization kvm. Sep 4 17:28:33.948094 systemd[1]: Detected architecture x86-64. Sep 4 17:28:33.948102 systemd[1]: Running in initrd. Sep 4 17:28:33.948111 systemd[1]: No hostname configured, using default hostname. Sep 4 17:28:33.948119 systemd[1]: Hostname set to . Sep 4 17:28:33.948131 systemd[1]: Initializing machine ID from VM UUID. Sep 4 17:28:33.948139 systemd[1]: Queued start job for default target initrd.target. Sep 4 17:28:33.948147 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:28:33.948156 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:28:33.948166 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 4 17:28:33.948174 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 17:28:33.948183 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 4 17:28:33.948194 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 4 17:28:33.948205 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 4 17:28:33.948213 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 4 17:28:33.948222 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:28:33.948230 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:28:33.948239 systemd[1]: Reached target paths.target - Path Units. Sep 4 17:28:33.948247 systemd[1]: Reached target slices.target - Slice Units. Sep 4 17:28:33.948258 systemd[1]: Reached target swap.target - Swaps. Sep 4 17:28:33.948266 systemd[1]: Reached target timers.target - Timer Units. Sep 4 17:28:33.948275 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
Sep 4 17:28:33.948284 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 17:28:33.948292 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 4 17:28:33.948301 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 4 17:28:33.948310 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:28:33.948326 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 17:28:33.948335 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:28:33.948346 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 17:28:33.948355 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 4 17:28:33.948364 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 17:28:33.948373 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 4 17:28:33.948381 systemd[1]: Starting systemd-fsck-usr.service... Sep 4 17:28:33.948392 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 17:28:33.948401 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 17:28:33.948410 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:28:33.948418 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 4 17:28:33.948428 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:28:33.948445 systemd[1]: Finished systemd-fsck-usr.service. Sep 4 17:28:33.948493 systemd-journald[193]: Collecting audit messages is disabled. Sep 4 17:28:33.948514 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 17:28:33.948522 systemd-journald[193]: Journal started Sep 4 17:28:33.948544 systemd-journald[193]: Runtime Journal (/run/log/journal/087d01de42474999837ec5bcc2b27325) is 6.0M, max 48.4M, 42.3M free. Sep 4 17:28:33.939008 systemd-modules-load[194]: Inserted module 'overlay' Sep 4 17:28:33.973845 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 17:28:33.973871 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 4 17:28:33.973885 kernel: Bridge firewalling registered Sep 4 17:28:33.965609 systemd-modules-load[194]: Inserted module 'br_netfilter' Sep 4 17:28:33.974218 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 17:28:33.976841 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:28:33.989822 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:28:33.990582 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:28:33.994779 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 17:28:33.997499 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 17:28:34.003813 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 17:28:34.007775 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:28:34.010717 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 4 17:28:34.012063 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 17:28:34.015988 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 4 17:28:34.018148 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 17:28:34.020866 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:28:34.033111 dracut-cmdline[227]: dracut-dracut-053 Sep 4 17:28:34.036274 dracut-cmdline[227]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d Sep 4 17:28:34.054040 systemd-resolved[228]: Positive Trust Anchors: Sep 4 17:28:34.054059 systemd-resolved[228]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 17:28:34.054090 systemd-resolved[228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 17:28:34.056898 systemd-resolved[228]: Defaulting to hostname 'linux'. Sep 4 17:28:34.058181 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 17:28:34.063937 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:28:34.142627 kernel: SCSI subsystem initialized Sep 4 17:28:34.153615 kernel: Loading iSCSI transport class v2.0-870. Sep 4 17:28:34.164614 kernel: iscsi: registered transport (tcp) Sep 4 17:28:34.188641 kernel: iscsi: registered transport (qla4xxx) Sep 4 17:28:34.188724 kernel: QLogic iSCSI HBA Driver Sep 4 17:28:34.248152 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 4 17:28:34.256763 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 4 17:28:34.285433 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 4 17:28:34.285527 kernel: device-mapper: uevent: version 1.0.3 Sep 4 17:28:34.285544 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 4 17:28:34.328651 kernel: raid6: avx2x4 gen() 30137 MB/s Sep 4 17:28:34.345626 kernel: raid6: avx2x2 gen() 30834 MB/s Sep 4 17:28:34.362738 kernel: raid6: avx2x1 gen() 25602 MB/s Sep 4 17:28:34.362814 kernel: raid6: using algorithm avx2x2 gen() 30834 MB/s Sep 4 17:28:34.380722 kernel: raid6: .... xor() 19812 MB/s, rmw enabled Sep 4 17:28:34.380768 kernel: raid6: using avx2x2 recovery algorithm Sep 4 17:28:34.401630 kernel: xor: automatically using best checksumming function avx Sep 4 17:28:34.556636 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 4 17:28:34.571624 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 4 17:28:34.582780 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Sep 4 17:28:34.597342 systemd-udevd[412]: Using default interface naming scheme 'v255'. Sep 4 17:28:34.602069 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:28:34.613748 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 4 17:28:34.631336 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation Sep 4 17:28:34.669608 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 17:28:34.678758 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 17:28:34.749888 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:28:34.761819 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 4 17:28:34.779757 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 4 17:28:34.779693 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 4 17:28:34.782499 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 17:28:34.786324 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:28:34.791767 kernel: cryptd: max_cpu_qlen set to 1000 Sep 4 17:28:34.787760 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 17:28:34.796692 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 4 17:28:34.808635 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 4 17:28:34.813481 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 4 17:28:34.813507 kernel: GPT:9289727 != 19775487 Sep 4 17:28:34.813518 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 4 17:28:34.813528 kernel: GPT:9289727 != 19775487 Sep 4 17:28:34.813543 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 4 17:28:34.813553 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 17:28:34.817008 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 17:28:34.817218 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:28:34.830542 kernel: libata version 3.00 loaded. Sep 4 17:28:34.822492 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:28:34.823903 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:28:34.824053 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:28:34.825403 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:28:34.837199 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:28:34.840182 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 4 17:28:34.844978 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (463) Sep 4 17:28:34.846621 kernel: AVX2 version of gcm_enc/dec engaged. 
Sep 4 17:28:34.846647 kernel: AES CTR mode by8 optimization enabled Sep 4 17:28:34.846658 kernel: ata_piix 0000:00:01.1: version 2.13 Sep 4 17:28:34.849619 kernel: BTRFS: device fsid 0dc40443-7f77-4fa7-b5e4-579d4bba0772 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (460) Sep 4 17:28:34.849645 kernel: scsi host0: ata_piix Sep 4 17:28:34.852490 kernel: scsi host1: ata_piix Sep 4 17:28:34.852720 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Sep 4 17:28:34.852733 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Sep 4 17:28:34.854203 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 4 17:28:34.868937 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 4 17:28:34.901430 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:28:34.908585 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 4 17:28:34.915190 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 4 17:28:34.916517 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 4 17:28:34.928011 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 4 17:28:34.931656 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:28:34.956568 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:28:35.018693 kernel: ata2: found unknown device (class 0) Sep 4 17:28:35.019675 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 4 17:28:35.021605 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 4 17:28:35.065870 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 4 17:28:35.066244 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 4 17:28:35.078622 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Sep 4 17:28:35.153777 disk-uuid[540]: Primary Header is updated. Sep 4 17:28:35.153777 disk-uuid[540]: Secondary Entries is updated. Sep 4 17:28:35.153777 disk-uuid[540]: Secondary Header is updated. Sep 4 17:28:35.158624 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 17:28:36.167633 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 17:28:36.168095 disk-uuid[564]: The operation has completed successfully. Sep 4 17:28:36.197134 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 4 17:28:36.197273 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 4 17:28:36.225776 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 4 17:28:36.229388 sh[581]: Success Sep 4 17:28:36.242610 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Sep 4 17:28:36.274562 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 4 17:28:36.288194 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 4 17:28:36.291309 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 4 17:28:36.305698 kernel: BTRFS info (device dm-0): first mount of filesystem 0dc40443-7f77-4fa7-b5e4-579d4bba0772 Sep 4 17:28:36.305728 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:28:36.305740 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 4 17:28:36.306756 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 4 17:28:36.308175 kernel: BTRFS info (device dm-0): using free space tree Sep 4 17:28:36.312188 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 4 17:28:36.314656 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 4 17:28:36.322741 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 4 17:28:36.325336 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 4 17:28:36.333783 kernel: BTRFS info (device vda6): first mount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d Sep 4 17:28:36.333815 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:28:36.333827 kernel: BTRFS info (device vda6): using free space tree Sep 4 17:28:36.336615 kernel: BTRFS info (device vda6): auto enabling async discard Sep 4 17:28:36.346422 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 4 17:28:36.348607 kernel: BTRFS info (device vda6): last unmount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d Sep 4 17:28:36.359138 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 4 17:28:36.366813 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 4 17:28:36.431136 ignition[677]: Ignition 2.19.0 Sep 4 17:28:36.431149 ignition[677]: Stage: fetch-offline Sep 4 17:28:36.431187 ignition[677]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:28:36.431198 ignition[677]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 17:28:36.431324 ignition[677]: parsed url from cmdline: "" Sep 4 17:28:36.431330 ignition[677]: no config URL provided Sep 4 17:28:36.431337 ignition[677]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 17:28:36.431350 ignition[677]: no config at "/usr/lib/ignition/user.ign" Sep 4 17:28:36.431385 ignition[677]: op(1): [started] loading QEMU firmware config module Sep 4 17:28:36.431392 ignition[677]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 4 17:28:36.439580 ignition[677]: op(1): [finished] loading QEMU firmware config module Sep 4 17:28:36.464991 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 17:28:36.477727 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 17:28:36.488729 ignition[677]: parsing config with SHA512: 99dbec62834b46047600dc0ff56e9dff0bc1e7e45203892e08dcf0c0da1059d145eb8521c1170c7845cd3d816f951ea1985c4d1f74234e8b5766e93f05c6a065 Sep 4 17:28:36.493019 unknown[677]: fetched base config from "system" Sep 4 17:28:36.493038 unknown[677]: fetched user config from "qemu" Sep 4 17:28:36.493476 ignition[677]: fetch-offline: fetch-offline passed Sep 4 17:28:36.493562 ignition[677]: Ignition finished successfully Sep 4 17:28:36.496141 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Sep 4 17:28:36.503700 systemd-networkd[769]: lo: Link UP Sep 4 17:28:36.503711 systemd-networkd[769]: lo: Gained carrier Sep 4 17:28:36.505824 systemd-networkd[769]: Enumeration completed Sep 4 17:28:36.505942 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 17:28:36.506358 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:28:36.506364 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 17:28:36.507341 systemd-networkd[769]: eth0: Link UP Sep 4 17:28:36.507346 systemd-networkd[769]: eth0: Gained carrier Sep 4 17:28:36.507355 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:28:36.509485 systemd[1]: Reached target network.target - Network. Sep 4 17:28:36.512678 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 4 17:28:36.519762 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 4 17:28:36.524638 systemd-networkd[769]: eth0: DHCPv4 address 10.0.0.44/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 4 17:28:36.535439 ignition[772]: Ignition 2.19.0 Sep 4 17:28:36.535453 ignition[772]: Stage: kargs Sep 4 17:28:36.535691 ignition[772]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:28:36.535707 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 17:28:36.536783 ignition[772]: kargs: kargs passed Sep 4 17:28:36.536844 ignition[772]: Ignition finished successfully Sep 4 17:28:36.540217 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 4 17:28:36.549775 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 4 17:28:36.561851 ignition[781]: Ignition 2.19.0 Sep 4 17:28:36.561863 ignition[781]: Stage: disks Sep 4 17:28:36.562035 ignition[781]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:28:36.562047 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 17:28:36.565158 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 4 17:28:36.562895 ignition[781]: disks: disks passed Sep 4 17:28:36.566748 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 4 17:28:36.562942 ignition[781]: Ignition finished successfully Sep 4 17:28:36.568651 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 4 17:28:36.570577 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 17:28:36.571652 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 17:28:36.573743 systemd[1]: Reached target basic.target - Basic System. Sep 4 17:28:36.585722 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 4 17:28:36.599987 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 4 17:28:36.606704 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 4 17:28:36.625672 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 4 17:28:36.712706 kernel: EXT4-fs (vda9): mounted filesystem bdbe0f61-2675-40b7-b9ae-5653402e9b23 r/w with ordered data mode. Quota mode: none. Sep 4 17:28:36.713124 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 4 17:28:36.714734 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. 
Sep 4 17:28:36.726717 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 17:28:36.731534 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 4 17:28:36.734891 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 4 17:28:36.738082 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (801) Sep 4 17:28:36.738104 kernel: BTRFS info (device vda6): first mount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d Sep 4 17:28:36.734955 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 4 17:28:36.745358 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:28:36.745375 kernel: BTRFS info (device vda6): using free space tree Sep 4 17:28:36.745386 kernel: BTRFS info (device vda6): auto enabling async discard Sep 4 17:28:36.738099 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 17:28:36.748316 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 17:28:36.750289 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 4 17:28:36.764756 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 4 17:28:36.797730 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory Sep 4 17:28:36.802394 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory Sep 4 17:28:36.806597 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory Sep 4 17:28:36.811948 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory Sep 4 17:28:36.900463 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 4 17:28:36.924703 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 4 17:28:36.928444 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 4 17:28:36.932634 kernel: BTRFS info (device vda6): last unmount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d Sep 4 17:28:36.963827 ignition[914]: INFO : Ignition 2.19.0 Sep 4 17:28:36.963827 ignition[914]: INFO : Stage: mount Sep 4 17:28:36.965682 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:28:36.965682 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 17:28:36.965682 ignition[914]: INFO : mount: mount passed Sep 4 17:28:36.965682 ignition[914]: INFO : Ignition finished successfully Sep 4 17:28:36.969703 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 4 17:28:36.971861 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 4 17:28:36.987770 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 4 17:28:37.305075 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 4 17:28:37.319102 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 17:28:37.327620 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (927) Sep 4 17:28:37.327674 kernel: BTRFS info (device vda6): first mount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d Sep 4 17:28:37.329671 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:28:37.329697 kernel: BTRFS info (device vda6): using free space tree Sep 4 17:28:37.333624 kernel: BTRFS info (device vda6): auto enabling async discard Sep 4 17:28:37.334896 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 4 17:28:37.361727 ignition[945]: INFO : Ignition 2.19.0 Sep 4 17:28:37.361727 ignition[945]: INFO : Stage: files Sep 4 17:28:37.363654 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:28:37.363654 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 17:28:37.363654 ignition[945]: DEBUG : files: compiled without relabeling support, skipping Sep 4 17:28:37.367321 ignition[945]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 4 17:28:37.367321 ignition[945]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 4 17:28:37.371015 ignition[945]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 4 17:28:37.372571 ignition[945]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 4 17:28:37.372571 ignition[945]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 4 17:28:37.371673 unknown[945]: wrote ssh authorized keys file for user: core Sep 4 17:28:37.377078 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 4 17:28:37.377078 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 4 17:28:37.448460 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 4 17:28:37.588084 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 4 17:28:37.588084 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 4 17:28:37.592510 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 4 17:28:38.050486 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 4 17:28:38.130441 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 4 17:28:38.130441 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 4 17:28:38.134329 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 4 17:28:38.134329 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 4 17:28:38.134329 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 4 17:28:38.134329 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 17:28:38.134329 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 17:28:38.134329 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 17:28:38.134329 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 17:28:38.134329 ignition[945]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 17:28:38.134329 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 17:28:38.134329 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Sep 4 17:28:38.134329 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Sep 4 17:28:38.134329 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Sep 4 17:28:38.134329 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Sep 4 17:28:38.221196 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 4 17:28:38.470047 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Sep 4 17:28:38.470047 ignition[945]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 4 17:28:38.474011 ignition[945]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 17:28:38.476049 ignition[945]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 17:28:38.476049 ignition[945]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 4 17:28:38.476049 ignition[945]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 4 17:28:38.476049 ignition[945]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 4 17:28:38.476049 ignition[945]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 4 17:28:38.476049 ignition[945]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 4 17:28:38.476049 ignition[945]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Sep 4 17:28:38.497427 ignition[945]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 4 17:28:38.501827 systemd-networkd[769]: eth0: Gained IPv6LL Sep 4 17:28:38.502893 ignition[945]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 4 17:28:38.502893 ignition[945]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Sep 4 17:28:38.502893 ignition[945]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 4 17:28:38.502893 ignition[945]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 4 17:28:38.510160 ignition[945]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 4 17:28:38.510160 ignition[945]: INFO : files: 
createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 4 17:28:38.510160 ignition[945]: INFO : files: files passed Sep 4 17:28:38.510160 ignition[945]: INFO : Ignition finished successfully Sep 4 17:28:38.505906 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 4 17:28:38.512994 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 4 17:28:38.516428 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 4 17:28:38.517947 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 4 17:28:38.518059 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 4 17:28:38.551091 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory Sep 4 17:28:38.555269 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:28:38.555269 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:28:38.559153 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:28:38.562503 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 17:28:38.564050 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 4 17:28:38.578958 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 4 17:28:38.622718 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 4 17:28:38.622910 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 4 17:28:38.625968 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 4 17:28:38.628528 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 4 17:28:38.631166 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 4 17:28:38.632792 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 4 17:28:38.656119 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 17:28:38.668909 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 4 17:28:38.680189 systemd[1]: Stopped target network.target - Network. Sep 4 17:28:38.681383 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:28:38.683537 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:28:38.686128 systemd[1]: Stopped target timers.target - Timer Units. Sep 4 17:28:38.688303 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 4 17:28:38.688474 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 17:28:38.690993 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 4 17:28:38.692680 systemd[1]: Stopped target basic.target - Basic System. Sep 4 17:28:38.694911 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 4 17:28:38.697169 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 17:28:38.699585 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 4 17:28:38.701814 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Sep 4 17:28:38.704101 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 17:28:38.706668 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 4 17:28:38.708846 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 4 17:28:38.711276 systemd[1]: Stopped target swap.target - Swaps. Sep 4 17:28:38.713438 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 4 17:28:38.713693 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 4 17:28:38.716150 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:28:38.717763 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:28:38.720002 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 4 17:28:38.720217 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:28:38.722444 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 4 17:28:38.722647 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 4 17:28:38.725159 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 4 17:28:38.725355 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 17:28:38.727382 systemd[1]: Stopped target paths.target - Path Units. Sep 4 17:28:38.729246 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 4 17:28:38.732732 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:28:38.734327 systemd[1]: Stopped target slices.target - Slice Units. Sep 4 17:28:38.736369 systemd[1]: Stopped target sockets.target - Socket Units. Sep 4 17:28:38.738554 systemd[1]: iscsid.socket: Deactivated successfully. Sep 4 17:28:38.738709 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 17:28:38.740513 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 4 17:28:38.740617 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 17:28:38.742768 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 4 17:28:38.742913 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 17:28:38.745627 systemd[1]: ignition-files.service: Deactivated successfully. Sep 4 17:28:38.745740 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 4 17:28:38.766950 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 4 17:28:38.769079 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 4 17:28:38.769289 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:28:38.772795 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 4 17:28:38.774180 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 4 17:28:38.776439 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 4 17:28:38.778287 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 4 17:28:38.778522 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Sep 4 17:28:38.784780 ignition[998]: INFO : Ignition 2.19.0 Sep 4 17:28:38.784780 ignition[998]: INFO : Stage: umount Sep 4 17:28:38.784780 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:28:38.784780 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 17:28:38.778755 systemd-networkd[769]: eth0: DHCPv6 lease lost Sep 4 17:28:38.802044 ignition[998]: INFO : umount: umount passed Sep 4 17:28:38.802044 ignition[998]: INFO : Ignition finished successfully Sep 4 17:28:38.782092 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 4 17:28:38.782242 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 17:28:38.795462 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 4 17:28:38.795679 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 4 17:28:38.799494 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 4 17:28:38.799776 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 4 17:28:38.801880 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 4 17:28:38.802047 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 4 17:28:38.806398 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 4 17:28:38.806571 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 4 17:28:38.811876 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 4 17:28:38.811942 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:28:38.814541 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 4 17:28:38.814636 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 4 17:28:38.816839 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 4 17:28:38.816915 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 4 17:28:38.819440 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 4 17:28:38.819513 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 4 17:28:38.821931 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 4 17:28:38.822008 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 4 17:28:38.836764 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 4 17:28:38.838687 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 4 17:28:38.838774 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 17:28:38.841898 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 17:28:38.841958 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:28:38.843736 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 4 17:28:38.843791 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 4 17:28:38.845961 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 4 17:28:38.846015 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 17:28:38.848748 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:28:38.852309 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 4 17:28:38.865093 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 4 17:28:38.865311 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Sep 4 17:28:38.867674 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 4 17:28:38.867767 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 4 17:28:38.868898 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 4 17:28:38.868942 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:28:38.869210 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 4 17:28:38.869288 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 4 17:28:38.870037 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 4 17:28:38.870087 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 4 17:28:38.876874 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 17:28:38.876929 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:28:38.878799 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 4 17:28:38.881687 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 4 17:28:38.881746 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:28:38.882133 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:28:38.882182 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:28:38.895776 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 4 17:28:38.895943 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 4 17:28:38.899822 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 4 17:28:38.899959 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 4 17:28:39.029968 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 4 17:28:39.030116 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 4 17:28:39.032316 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 4 17:28:39.033084 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 4 17:28:39.033139 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 4 17:28:39.038865 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 4 17:28:39.046488 systemd[1]: Switching root. Sep 4 17:28:39.090494 systemd-journald[193]: Journal stopped Sep 4 17:28:40.415803 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Sep 4 17:28:40.415897 kernel: SELinux: policy capability network_peer_controls=1 Sep 4 17:28:40.415916 kernel: SELinux: policy capability open_perms=1 Sep 4 17:28:40.415939 kernel: SELinux: policy capability extended_socket_class=1 Sep 4 17:28:40.415956 kernel: SELinux: policy capability always_check_network=0 Sep 4 17:28:40.415972 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 4 17:28:40.415988 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 4 17:28:40.416004 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 4 17:28:40.416026 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 4 17:28:40.416043 kernel: audit: type=1403 audit(1725470919.560:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 4 17:28:40.416066 systemd[1]: Successfully loaded SELinux policy in 50.970ms. Sep 4 17:28:40.416094 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.975ms. 
Sep 4 17:28:40.416115 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 4 17:28:40.416132 systemd[1]: Detected virtualization kvm. Sep 4 17:28:40.416149 systemd[1]: Detected architecture x86-64. Sep 4 17:28:40.416182 systemd[1]: Detected first boot. Sep 4 17:28:40.416199 systemd[1]: Initializing machine ID from VM UUID. Sep 4 17:28:40.416215 zram_generator::config[1040]: No configuration found. Sep 4 17:28:40.416233 systemd[1]: Populated /etc with preset unit settings. Sep 4 17:28:40.416250 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 4 17:28:40.416271 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 4 17:28:40.416288 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 4 17:28:40.416305 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 4 17:28:40.416329 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 4 17:28:40.416345 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 4 17:28:40.416363 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 4 17:28:40.416380 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 4 17:28:40.416396 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 4 17:28:40.416416 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 4 17:28:40.416433 systemd[1]: Created slice user.slice - User and Session Slice. Sep 4 17:28:40.416449 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:28:40.416467 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:28:40.416484 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 4 17:28:40.416502 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 4 17:28:40.416518 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 4 17:28:40.416537 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 17:28:40.416556 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 4 17:28:40.416576 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:28:40.416632 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 4 17:28:40.416651 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 4 17:28:40.416667 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 4 17:28:40.416698 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 4 17:28:40.416724 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:28:40.416741 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 17:28:40.416758 systemd[1]: Reached target slices.target - Slice Units. Sep 4 17:28:40.416779 systemd[1]: Reached target swap.target - Swaps. 
Sep 4 17:28:40.416796 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 4 17:28:40.416814 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 4 17:28:40.416846 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:28:40.416866 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 17:28:40.416883 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:28:40.416900 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 4 17:28:40.416916 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 4 17:28:40.416934 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 4 17:28:40.416957 systemd[1]: Mounting media.mount - External Media Directory... Sep 4 17:28:40.416974 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:28:40.416990 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 4 17:28:40.417005 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 4 17:28:40.417021 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 4 17:28:40.417038 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 4 17:28:40.417054 systemd[1]: Reached target machines.target - Containers. Sep 4 17:28:40.417070 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 4 17:28:40.417085 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:28:40.417105 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 17:28:40.417122 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 4 17:28:40.417138 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:28:40.417154 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 17:28:40.417185 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:28:40.417204 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 4 17:28:40.417221 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:28:40.417240 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 4 17:28:40.417264 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 4 17:28:40.417285 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 4 17:28:40.417305 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 4 17:28:40.417324 systemd[1]: Stopped systemd-fsck-usr.service. Sep 4 17:28:40.417341 kernel: fuse: init (API version 7.39) Sep 4 17:28:40.417358 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 17:28:40.417376 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 17:28:40.417394 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 4 17:28:40.417411 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Sep 4 17:28:40.417432 kernel: loop: module loaded Sep 4 17:28:40.417448 kernel: ACPI: bus type drm_connector registered Sep 4 17:28:40.417465 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 17:28:40.417483 systemd[1]: verity-setup.service: Deactivated successfully. Sep 4 17:28:40.417500 systemd[1]: Stopped verity-setup.service. Sep 4 17:28:40.417518 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:28:40.417563 systemd-journald[1109]: Collecting audit messages is disabled. Sep 4 17:28:40.417615 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 4 17:28:40.417642 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 4 17:28:40.417662 systemd-journald[1109]: Journal started Sep 4 17:28:40.417694 systemd-journald[1109]: Runtime Journal (/run/log/journal/087d01de42474999837ec5bcc2b27325) is 6.0M, max 48.4M, 42.3M free. Sep 4 17:28:40.154198 systemd[1]: Queued start job for default target multi-user.target. Sep 4 17:28:40.172086 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 4 17:28:40.172573 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 4 17:28:40.419953 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 17:28:40.420848 systemd[1]: Mounted media.mount - External Media Directory. Sep 4 17:28:40.422094 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 4 17:28:40.423462 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 4 17:28:40.424803 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 4 17:28:40.426119 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 4 17:28:40.427706 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:28:40.429361 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 4 17:28:40.429552 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 4 17:28:40.431262 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:28:40.431449 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:28:40.432957 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 17:28:40.433147 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 17:28:40.434714 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:28:40.434896 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:28:40.436487 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 4 17:28:40.436691 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 4 17:28:40.438268 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:28:40.438455 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:28:40.439928 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 17:28:40.441408 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 17:28:40.443218 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 4 17:28:40.460764 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 4 17:28:40.470707 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Sep 4 17:28:40.473272 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 4 17:28:40.474475 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 4 17:28:40.474508 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 17:28:40.476627 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 4 17:28:40.479692 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 4 17:28:40.482454 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 4 17:28:40.483739 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:28:40.486856 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 4 17:28:40.490503 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 4 17:28:40.491739 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 17:28:40.494070 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 4 17:28:40.495391 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 17:28:40.498498 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:28:40.503325 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 4 17:28:40.519393 systemd-journald[1109]: Time spent on flushing to /var/log/journal/087d01de42474999837ec5bcc2b27325 is 31.205ms for 948 entries. Sep 4 17:28:40.519393 systemd-journald[1109]: System Journal (/var/log/journal/087d01de42474999837ec5bcc2b27325) is 8.0M, max 195.6M, 187.6M free. Sep 4 17:28:40.566562 systemd-journald[1109]: Received client request to flush runtime journal. Sep 4 17:28:40.566629 kernel: loop0: detected capacity change from 0 to 140728 Sep 4 17:28:40.566664 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 4 17:28:40.511758 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 4 17:28:40.514558 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 4 17:28:40.516302 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:28:40.517722 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 4 17:28:40.519281 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 4 17:28:40.522865 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 4 17:28:40.531225 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 4 17:28:40.537934 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 4 17:28:40.542886 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 4 17:28:40.550093 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:28:40.569350 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 4 17:28:40.575135 udevadm[1163]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. 
Sep 4 17:28:40.577783 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 4 17:28:40.578611 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 4 17:28:40.584613 kernel: loop1: detected capacity change from 0 to 211296 Sep 4 17:28:40.590520 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 4 17:28:40.600795 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 17:28:40.621730 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. Sep 4 17:28:40.622624 kernel: loop2: detected capacity change from 0 to 89336 Sep 4 17:28:40.621754 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. Sep 4 17:28:40.629100 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:28:40.669632 kernel: loop3: detected capacity change from 0 to 140728 Sep 4 17:28:40.687638 kernel: loop4: detected capacity change from 0 to 211296 Sep 4 17:28:40.697629 kernel: loop5: detected capacity change from 0 to 89336 Sep 4 17:28:40.709002 (sd-merge)[1179]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 4 17:28:40.709656 (sd-merge)[1179]: Merged extensions into '/usr'. Sep 4 17:28:40.714897 systemd[1]: Reloading requested from client PID 1153 ('systemd-sysext') (unit systemd-sysext.service)... Sep 4 17:28:40.714916 systemd[1]: Reloading... Sep 4 17:28:40.791919 zram_generator::config[1206]: No configuration found. Sep 4 17:28:40.955473 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:28:40.973021 ldconfig[1148]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 4 17:28:41.016954 systemd[1]: Reloading finished in 301 ms. Sep 4 17:28:41.053669 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 4 17:28:41.055369 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 4 17:28:41.075921 systemd[1]: Starting ensure-sysext.service... Sep 4 17:28:41.078441 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 17:28:41.087230 systemd[1]: Reloading requested from client PID 1240 ('systemctl') (unit ensure-sysext.service)... Sep 4 17:28:41.087354 systemd[1]: Reloading... Sep 4 17:28:41.110071 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 4 17:28:41.110529 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 4 17:28:41.111724 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 4 17:28:41.112091 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. Sep 4 17:28:41.112185 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. Sep 4 17:28:41.119495 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 17:28:41.119577 systemd-tmpfiles[1241]: Skipping /boot Sep 4 17:28:41.133699 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 17:28:41.133803 systemd-tmpfiles[1241]: Skipping /boot Sep 4 17:28:41.151635 zram_generator::config[1266]: No configuration found. 
Sep 4 17:28:41.311551 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:28:41.369225 systemd[1]: Reloading finished in 281 ms. Sep 4 17:28:41.389339 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 4 17:28:41.401441 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 17:28:41.410692 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 4 17:28:41.413503 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 4 17:28:41.416301 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 4 17:28:41.421236 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 17:28:41.427180 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:28:41.430910 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 4 17:28:41.434756 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:28:41.434930 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:28:41.436687 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:28:41.441249 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:28:41.447407 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:28:41.449878 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:28:41.455586 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 4 17:28:41.456997 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:28:41.459021 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:28:41.459833 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:28:41.462476 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:28:41.462773 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:28:41.465106 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:28:41.465837 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:28:41.469405 systemd-udevd[1310]: Using default interface naming scheme 'v255'. Sep 4 17:28:41.469987 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 4 17:28:41.482852 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:28:41.483170 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:28:41.491702 augenrules[1335]: No rules Sep 4 17:28:41.491890 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:28:41.494990 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Sep 4 17:28:41.502702 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:28:41.504206 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:28:41.505986 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 4 17:28:41.507876 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:28:41.508847 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:28:41.511674 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 4 17:28:41.513754 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 4 17:28:41.515819 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:28:41.516085 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:28:41.518330 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:28:41.518526 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:28:41.521494 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:28:41.521763 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:28:41.523671 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 4 17:28:41.533735 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 4 17:28:41.557819 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 4 17:28:41.557935 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:28:41.558124 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:28:41.571637 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1352) Sep 4 17:28:41.575886 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:28:41.585730 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 17:28:41.587672 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1357) Sep 4 17:28:41.591731 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:28:41.595609 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:28:41.597301 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:28:41.602406 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 17:28:41.603760 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:28:41.604440 systemd[1]: Finished ensure-sysext.service. Sep 4 17:28:41.606004 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 4 17:28:41.608609 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:28:41.608883 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:28:41.610851 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Sep 4 17:28:41.611202 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 17:28:41.614633 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:28:41.614909 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:28:41.618984 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1352) Sep 4 17:28:41.618723 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:28:41.618909 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:28:41.621992 systemd-resolved[1309]: Positive Trust Anchors: Sep 4 17:28:41.622018 systemd-resolved[1309]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 17:28:41.622060 systemd-resolved[1309]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 17:28:41.627205 systemd-resolved[1309]: Defaulting to hostname 'linux'. Sep 4 17:28:41.635142 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 17:28:41.644790 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:28:41.646431 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 17:28:41.646536 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 17:28:41.656829 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 4 17:28:41.658016 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 4 17:28:41.664616 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 4 17:28:41.668618 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Sep 4 17:28:41.688107 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 4 17:28:41.692635 kernel: ACPI: button: Power Button [PWRF] Sep 4 17:28:41.699815 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 4 17:28:41.729554 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 4 17:28:41.740650 systemd-networkd[1383]: lo: Link UP Sep 4 17:28:41.742035 systemd-networkd[1383]: lo: Gained carrier Sep 4 17:28:41.744386 systemd-networkd[1383]: Enumeration completed Sep 4 17:28:41.744853 systemd-networkd[1383]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:28:41.744858 systemd-networkd[1383]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 4 17:28:41.750628 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 4 17:28:41.750626 systemd-networkd[1383]: eth0: Link UP Sep 4 17:28:41.750631 systemd-networkd[1383]: eth0: Gained carrier Sep 4 17:28:41.750651 systemd-networkd[1383]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:28:41.752043 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:28:41.753454 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 17:28:41.755199 systemd[1]: Reached target network.target - Network. Sep 4 17:28:41.758915 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 4 17:28:41.760510 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 4 17:28:41.764135 systemd[1]: Reached target time-set.target - System Time Set. Sep 4 17:28:41.764629 kernel: mousedev: PS/2 mouse device common for all mice Sep 4 17:28:41.775685 systemd-networkd[1383]: eth0: DHCPv4 address 10.0.0.44/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 4 17:28:41.776757 systemd-timesyncd[1393]: Network configuration changed, trying to establish connection. Sep 4 17:28:42.392019 systemd-resolved[1309]: Clock change detected. Flushing caches. Sep 4 17:28:42.392232 systemd-timesyncd[1393]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 4 17:28:42.392373 systemd-timesyncd[1393]: Initial clock synchronization to Wed 2024-09-04 17:28:42.391864 UTC. Sep 4 17:28:42.507009 kernel: kvm_amd: TSC scaling supported Sep 4 17:28:42.507137 kernel: kvm_amd: Nested Virtualization enabled Sep 4 17:28:42.507154 kernel: kvm_amd: Nested Paging enabled Sep 4 17:28:42.507172 kernel: kvm_amd: LBR virtualization supported Sep 4 17:28:42.508132 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 4 17:28:42.508170 kernel: kvm_amd: Virtual GIF supported Sep 4 17:28:42.532827 kernel: EDAC MC: Ver: 3.0.0 Sep 4 17:28:42.555677 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:28:42.583121 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 4 17:28:42.603230 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 4 17:28:42.614342 lvm[1410]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 17:28:42.643747 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 4 17:28:42.646763 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:28:42.648072 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 17:28:42.649403 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 4 17:28:42.650759 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 4 17:28:42.652388 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 4 17:28:42.653893 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 4 17:28:42.655260 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 4 17:28:42.656620 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
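
Just above, systemd-networkd reports a DHCPv4 lease of 10.0.0.44/16 with gateway 10.0.0.1 on eth0, after which timesyncd steps the clock. A small, purely illustrative sketch of what that prefix implies, using only the values shown in the lease entry:

# Illustrative only: inspect the DHCPv4 lease systemd-networkd reported for eth0.
import ipaddress

iface = ipaddress.ip_interface("10.0.0.44/16")   # address/prefix from the log
gateway = ipaddress.ip_address("10.0.0.1")       # gateway from the log

print(iface.network)                  # 10.0.0.0/16
print(iface.network.num_addresses)    # 65536 addresses in the /16
print(gateway in iface.network)       # True: the gateway is on-link
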
Sep 4 17:28:42.656658 systemd[1]: Reached target paths.target - Path Units. Sep 4 17:28:42.657757 systemd[1]: Reached target timers.target - Timer Units. Sep 4 17:28:42.659767 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 4 17:28:42.662932 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 4 17:28:42.690200 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 4 17:28:42.692999 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 4 17:28:42.694881 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 4 17:28:42.696125 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 17:28:42.697172 systemd[1]: Reached target basic.target - Basic System. Sep 4 17:28:42.698312 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 4 17:28:42.698361 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 4 17:28:42.700003 systemd[1]: Starting containerd.service - containerd container runtime... Sep 4 17:28:42.702954 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 4 17:28:42.706901 lvm[1414]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 17:28:42.707392 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 4 17:28:42.711100 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 4 17:28:42.713567 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 4 17:28:42.716539 jq[1417]: false Sep 4 17:28:42.717523 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 4 17:28:42.721530 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 4 17:28:42.725088 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 4 17:28:42.728328 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 4 17:28:42.735435 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 4 17:28:42.737339 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 4 17:28:42.737968 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 4 17:28:42.739980 systemd[1]: Starting update-engine.service - Update Engine... Sep 4 17:28:42.744978 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 4 17:28:42.747581 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 4 17:28:42.752064 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 4 17:28:42.752362 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 4 17:28:42.754559 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Sep 4 17:28:42.756150 extend-filesystems[1418]: Found loop3 Sep 4 17:28:42.756150 extend-filesystems[1418]: Found loop4 Sep 4 17:28:42.756150 extend-filesystems[1418]: Found loop5 Sep 4 17:28:42.756150 extend-filesystems[1418]: Found sr0 Sep 4 17:28:42.756150 extend-filesystems[1418]: Found vda Sep 4 17:28:42.756150 extend-filesystems[1418]: Found vda1 Sep 4 17:28:42.756150 extend-filesystems[1418]: Found vda2 Sep 4 17:28:42.756150 extend-filesystems[1418]: Found vda3 Sep 4 17:28:42.756150 extend-filesystems[1418]: Found usr Sep 4 17:28:42.756150 extend-filesystems[1418]: Found vda4 Sep 4 17:28:42.756150 extend-filesystems[1418]: Found vda6 Sep 4 17:28:42.756150 extend-filesystems[1418]: Found vda7 Sep 4 17:28:42.756150 extend-filesystems[1418]: Found vda9 Sep 4 17:28:42.756150 extend-filesystems[1418]: Checking size of /dev/vda9 Sep 4 17:28:42.754839 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 4 17:28:42.775001 jq[1429]: true Sep 4 17:28:42.774020 dbus-daemon[1416]: [system] SELinux support is enabled Sep 4 17:28:42.762333 systemd[1]: motdgen.service: Deactivated successfully. Sep 4 17:28:42.762627 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 4 17:28:42.774644 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 4 17:28:42.781248 update_engine[1426]: I0904 17:28:42.781193 1426 main.cc:92] Flatcar Update Engine starting Sep 4 17:28:42.782235 extend-filesystems[1418]: Resized partition /dev/vda9 Sep 4 17:28:42.787020 update_engine[1426]: I0904 17:28:42.786705 1426 update_check_scheduler.cc:74] Next update check in 6m10s Sep 4 17:28:42.788386 extend-filesystems[1448]: resize2fs 1.47.1 (20-May-2024) Sep 4 17:28:42.796828 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1347) Sep 4 17:28:42.800878 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 4 17:28:42.801093 tar[1435]: linux-amd64/helm Sep 4 17:28:42.800912 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 4 17:28:42.804265 jq[1447]: true Sep 4 17:28:42.802891 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 4 17:28:42.802909 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 4 17:28:42.816910 (ntainerd)[1450]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 4 17:28:42.821704 systemd[1]: Started update-engine.service - Update Engine. Sep 4 17:28:42.825001 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 4 17:28:42.839816 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 4 17:28:42.843735 systemd-logind[1425]: Watching system buttons on /dev/input/event1 (Power Button) Sep 4 17:28:42.843780 systemd-logind[1425]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 4 17:28:42.844068 systemd-logind[1425]: New seat seat0. Sep 4 17:28:42.844860 systemd[1]: Started systemd-logind.service - User Login Management. 
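
The kernel line just logged shows ext4 on /dev/vda9 growing from 553472 to 1864699 blocks during the extend-filesystems run; with the 4 KiB block size resize2fs reports further down, that is roughly a 2.1 GiB filesystem expanding to about 7.1 GiB. A quick sketch of that arithmetic, using only the block counts from the log:

# Sanity-check the EXT4 grow reported for /dev/vda9
# (block counts from the log, 4 KiB blocks as reported by resize2fs).
BLOCK = 4096
before, after = 553_472, 1_864_699

def gib(blocks: int) -> float:
    return blocks * BLOCK / 2**30

print(f"before: {gib(before):.2f} GiB, after: {gib(after):.2f} GiB")
# roughly 2.11 GiB -> 7.11 GiB once the filesystem fills the partition
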
Sep 4 17:28:42.995323 locksmithd[1456]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 4 17:28:43.037290 sshd_keygen[1444]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 17:28:43.064731 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 4 17:28:43.075013 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 4 17:28:43.082807 systemd[1]: issuegen.service: Deactivated successfully. Sep 4 17:28:43.083037 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 4 17:28:43.095405 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 4 17:28:43.098816 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 4 17:28:43.107912 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 4 17:28:43.117181 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 4 17:28:43.119809 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 4 17:28:43.121322 systemd[1]: Reached target getty.target - Login Prompts. Sep 4 17:28:43.269692 extend-filesystems[1448]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 4 17:28:43.269692 extend-filesystems[1448]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 4 17:28:43.269692 extend-filesystems[1448]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 4 17:28:43.274681 extend-filesystems[1418]: Resized filesystem in /dev/vda9 Sep 4 17:28:43.270927 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 4 17:28:43.271230 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 4 17:28:43.277664 bash[1470]: Updated "/home/core/.ssh/authorized_keys" Sep 4 17:28:43.279640 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 4 17:28:43.281855 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 4 17:28:43.321094 containerd[1450]: time="2024-09-04T17:28:43.320950971Z" level=info msg="starting containerd" revision=8ccfc03e4e2b73c22899202ae09d0caf906d3863 version=v1.7.20 Sep 4 17:28:43.338702 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 4 17:28:43.341399 systemd[1]: Started sshd@0-10.0.0.44:22-10.0.0.1:51556.service - OpenSSH per-connection server daemon (10.0.0.1:51556). Sep 4 17:28:43.347718 containerd[1450]: time="2024-09-04T17:28:43.347654419Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:28:43.349345 containerd[1450]: time="2024-09-04T17:28:43.349293042Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.48-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:28:43.349345 containerd[1450]: time="2024-09-04T17:28:43.349341563Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 4 17:28:43.349426 containerd[1450]: time="2024-09-04T17:28:43.349360959Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 4 17:28:43.349636 containerd[1450]: time="2024-09-04T17:28:43.349594848Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Sep 4 17:28:43.349636 containerd[1450]: time="2024-09-04T17:28:43.349625325Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 4 17:28:43.349746 containerd[1450]: time="2024-09-04T17:28:43.349717789Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:28:43.349799 containerd[1450]: time="2024-09-04T17:28:43.349745501Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:28:43.350034 containerd[1450]: time="2024-09-04T17:28:43.350002212Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:28:43.350034 containerd[1450]: time="2024-09-04T17:28:43.350030074Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 4 17:28:43.350111 containerd[1450]: time="2024-09-04T17:28:43.350048579Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:28:43.350111 containerd[1450]: time="2024-09-04T17:28:43.350061092Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 4 17:28:43.350214 containerd[1450]: time="2024-09-04T17:28:43.350188852Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:28:43.350584 containerd[1450]: time="2024-09-04T17:28:43.350516346Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:28:43.351515 containerd[1450]: time="2024-09-04T17:28:43.350664895Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:28:43.351515 containerd[1450]: time="2024-09-04T17:28:43.350683440Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 4 17:28:43.354805 containerd[1450]: time="2024-09-04T17:28:43.351778433Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 4 17:28:43.354805 containerd[1450]: time="2024-09-04T17:28:43.351886436Z" level=info msg="metadata content store policy set" policy=shared Sep 4 17:28:43.380062 tar[1435]: linux-amd64/LICENSE Sep 4 17:28:43.380193 tar[1435]: linux-amd64/README.md Sep 4 17:28:43.393307 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 4 17:28:43.446865 sshd[1503]: Accepted publickey for core from 10.0.0.1 port 51556 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:28:43.449327 sshd[1503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:28:43.458412 containerd[1450]: time="2024-09-04T17:28:43.458348129Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 Sep 4 17:28:43.458540 containerd[1450]: time="2024-09-04T17:28:43.458438559Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 4 17:28:43.458540 containerd[1450]: time="2024-09-04T17:28:43.458462213Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 4 17:28:43.458540 containerd[1450]: time="2024-09-04T17:28:43.458481529Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 4 17:28:43.458540 containerd[1450]: time="2024-09-04T17:28:43.458500836Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 4 17:28:43.458761 containerd[1450]: time="2024-09-04T17:28:43.458733021Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 4 17:28:43.459195 containerd[1450]: time="2024-09-04T17:28:43.459149252Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 4 17:28:43.459439 containerd[1450]: time="2024-09-04T17:28:43.459416844Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 4 17:28:43.459439 containerd[1450]: time="2024-09-04T17:28:43.459439055Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 4 17:28:43.459539 containerd[1450]: time="2024-09-04T17:28:43.459453142Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 4 17:28:43.459539 containerd[1450]: time="2024-09-04T17:28:43.459467739Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 4 17:28:43.459539 containerd[1450]: time="2024-09-04T17:28:43.459481375Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 4 17:28:43.459539 containerd[1450]: time="2024-09-04T17:28:43.459493507Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 4 17:28:43.459539 containerd[1450]: time="2024-09-04T17:28:43.459508045Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 4 17:28:43.459539 containerd[1450]: time="2024-09-04T17:28:43.459539684Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 4 17:28:43.459697 containerd[1450]: time="2024-09-04T17:28:43.459554592Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 4 17:28:43.459697 containerd[1450]: time="2024-09-04T17:28:43.459567095Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 4 17:28:43.459697 containerd[1450]: time="2024-09-04T17:28:43.459579068Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 4 17:28:43.459697 containerd[1450]: time="2024-09-04T17:28:43.459600288Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 4 17:28:43.459697 containerd[1450]: time="2024-09-04T17:28:43.459613482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Sep 4 17:28:43.459697 containerd[1450]: time="2024-09-04T17:28:43.459626647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 4 17:28:43.459697 containerd[1450]: time="2024-09-04T17:28:43.459639130Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 4 17:28:43.459697 containerd[1450]: time="2024-09-04T17:28:43.459651734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 4 17:28:43.459697 containerd[1450]: time="2024-09-04T17:28:43.459665480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 4 17:28:43.459697 containerd[1450]: time="2024-09-04T17:28:43.459677813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 4 17:28:43.459697 containerd[1450]: time="2024-09-04T17:28:43.459690527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 4 17:28:43.460012 containerd[1450]: time="2024-09-04T17:28:43.459708210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 4 17:28:43.460012 containerd[1450]: time="2024-09-04T17:28:43.459729430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 4 17:28:43.460012 containerd[1450]: time="2024-09-04T17:28:43.459742053Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 4 17:28:43.460012 containerd[1450]: time="2024-09-04T17:28:43.459753184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 4 17:28:43.460012 containerd[1450]: time="2024-09-04T17:28:43.459767191Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 4 17:28:43.460012 containerd[1450]: time="2024-09-04T17:28:43.459808608Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 4 17:28:43.460012 containerd[1450]: time="2024-09-04T17:28:43.459834757Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 4 17:28:43.460012 containerd[1450]: time="2024-09-04T17:28:43.459847902Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 4 17:28:43.460012 containerd[1450]: time="2024-09-04T17:28:43.459858422Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 4 17:28:43.460012 containerd[1450]: time="2024-09-04T17:28:43.459920107Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 4 17:28:43.460012 containerd[1450]: time="2024-09-04T17:28:43.459942720Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 4 17:28:43.460012 containerd[1450]: time="2024-09-04T17:28:43.459955594Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 4 17:28:43.460012 containerd[1450]: time="2024-09-04T17:28:43.459968799Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 4 17:28:43.460354 containerd[1450]: time="2024-09-04T17:28:43.459980501Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 4 17:28:43.460354 containerd[1450]: time="2024-09-04T17:28:43.459997382Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 4 17:28:43.460354 containerd[1450]: time="2024-09-04T17:28:43.460010487Z" level=info msg="NRI interface is disabled by configuration." Sep 4 17:28:43.460354 containerd[1450]: time="2024-09-04T17:28:43.460024273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 4 17:28:43.460481 containerd[1450]: time="2024-09-04T17:28:43.460338552Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 4 17:28:43.460481 containerd[1450]: time="2024-09-04T17:28:43.460402161Z" level=info msg="Connect containerd service" Sep 4 17:28:43.460481 containerd[1450]: time="2024-09-04T17:28:43.460433560Z" level=info msg="using legacy CRI 
server" Sep 4 17:28:43.460481 containerd[1450]: time="2024-09-04T17:28:43.460441746Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 4 17:28:43.460747 containerd[1450]: time="2024-09-04T17:28:43.460536714Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 4 17:28:43.460589 systemd-logind[1425]: New session 1 of user core. Sep 4 17:28:43.461290 containerd[1450]: time="2024-09-04T17:28:43.461259118Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 17:28:43.461661 containerd[1450]: time="2024-09-04T17:28:43.461624524Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 4 17:28:43.461878 containerd[1450]: time="2024-09-04T17:28:43.461680429Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 4 17:28:43.461878 containerd[1450]: time="2024-09-04T17:28:43.461718009Z" level=info msg="Start subscribing containerd event" Sep 4 17:28:43.461878 containerd[1450]: time="2024-09-04T17:28:43.461755760Z" level=info msg="Start recovering state" Sep 4 17:28:43.461878 containerd[1450]: time="2024-09-04T17:28:43.461840238Z" level=info msg="Start event monitor" Sep 4 17:28:43.461878 containerd[1450]: time="2024-09-04T17:28:43.461858192Z" level=info msg="Start snapshots syncer" Sep 4 17:28:43.461878 containerd[1450]: time="2024-09-04T17:28:43.461868010Z" level=info msg="Start cni network conf syncer for default" Sep 4 17:28:43.461878 containerd[1450]: time="2024-09-04T17:28:43.461876506Z" level=info msg="Start streaming server" Sep 4 17:28:43.462124 containerd[1450]: time="2024-09-04T17:28:43.461931930Z" level=info msg="containerd successfully booted in 0.142346s" Sep 4 17:28:43.462426 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 4 17:28:43.472168 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 4 17:28:43.474185 systemd[1]: Started containerd.service - containerd container runtime. Sep 4 17:28:43.491236 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 4 17:28:43.508161 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 4 17:28:43.514223 (systemd)[1512]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:28:43.665443 systemd[1512]: Queued start job for default target default.target. Sep 4 17:28:43.674483 systemd[1512]: Created slice app.slice - User Application Slice. Sep 4 17:28:43.674520 systemd[1512]: Reached target paths.target - Paths. Sep 4 17:28:43.674539 systemd[1512]: Reached target timers.target - Timers. Sep 4 17:28:43.676564 systemd[1512]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 4 17:28:43.690718 systemd[1512]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 4 17:28:43.690950 systemd[1512]: Reached target sockets.target - Sockets. Sep 4 17:28:43.690979 systemd[1512]: Reached target basic.target - Basic System. Sep 4 17:28:43.691043 systemd[1512]: Reached target default.target - Main User Target. Sep 4 17:28:43.691094 systemd[1512]: Startup finished in 166ms. Sep 4 17:28:43.691708 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 4 17:28:43.722017 systemd[1]: Started session-1.scope - Session 1 of User core. 
Sep 4 17:28:43.800879 systemd[1]: Started sshd@1-10.0.0.44:22-10.0.0.1:51564.service - OpenSSH per-connection server daemon (10.0.0.1:51564). Sep 4 17:28:43.848401 sshd[1523]: Accepted publickey for core from 10.0.0.1 port 51564 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:28:43.850327 sshd[1523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:28:43.855869 systemd-logind[1425]: New session 2 of user core. Sep 4 17:28:43.865953 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 4 17:28:43.929470 sshd[1523]: pam_unix(sshd:session): session closed for user core Sep 4 17:28:43.946997 systemd[1]: sshd@1-10.0.0.44:22-10.0.0.1:51564.service: Deactivated successfully. Sep 4 17:28:43.949002 systemd[1]: session-2.scope: Deactivated successfully. Sep 4 17:28:43.950515 systemd-logind[1425]: Session 2 logged out. Waiting for processes to exit. Sep 4 17:28:43.962320 systemd[1]: Started sshd@2-10.0.0.44:22-10.0.0.1:51568.service - OpenSSH per-connection server daemon (10.0.0.1:51568). Sep 4 17:28:43.964774 systemd-logind[1425]: Removed session 2. Sep 4 17:28:43.980861 systemd-networkd[1383]: eth0: Gained IPv6LL Sep 4 17:28:43.984504 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 4 17:28:43.986376 systemd[1]: Reached target network-online.target - Network is Online. Sep 4 17:28:43.989689 sshd[1530]: Accepted publickey for core from 10.0.0.1 port 51568 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:28:43.991418 sshd[1530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:28:43.998182 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 4 17:28:44.012330 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:28:44.014703 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 4 17:28:44.023396 systemd-logind[1425]: New session 3 of user core. Sep 4 17:28:44.026455 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 4 17:28:44.037587 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 4 17:28:44.038004 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 4 17:28:44.040222 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 4 17:28:44.043394 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 4 17:28:44.087689 sshd[1530]: pam_unix(sshd:session): session closed for user core Sep 4 17:28:44.091091 systemd[1]: sshd@2-10.0.0.44:22-10.0.0.1:51568.service: Deactivated successfully. Sep 4 17:28:44.093893 systemd[1]: session-3.scope: Deactivated successfully. Sep 4 17:28:44.097291 systemd-logind[1425]: Session 3 logged out. Waiting for processes to exit. Sep 4 17:28:44.102064 systemd-logind[1425]: Removed session 3. Sep 4 17:28:44.766688 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:28:44.769070 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 4 17:28:44.773842 (kubelet)[1559]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:28:44.774755 systemd[1]: Startup finished in 982ms (kernel) + 5.801s (initrd) + 4.650s (userspace) = 11.434s. 
Sep 4 17:28:45.306638 kubelet[1559]: E0904 17:28:45.306459 1559 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:28:45.311612 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:28:45.311868 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:28:45.312319 systemd[1]: kubelet.service: Consumed 1.072s CPU time. Sep 4 17:28:54.102509 systemd[1]: Started sshd@3-10.0.0.44:22-10.0.0.1:38270.service - OpenSSH per-connection server daemon (10.0.0.1:38270). Sep 4 17:28:54.140397 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 38270 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:28:54.142752 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:28:54.147606 systemd-logind[1425]: New session 4 of user core. Sep 4 17:28:54.160962 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 4 17:28:54.218258 sshd[1574]: pam_unix(sshd:session): session closed for user core Sep 4 17:28:54.230310 systemd[1]: sshd@3-10.0.0.44:22-10.0.0.1:38270.service: Deactivated successfully. Sep 4 17:28:54.232571 systemd[1]: session-4.scope: Deactivated successfully. Sep 4 17:28:54.234300 systemd-logind[1425]: Session 4 logged out. Waiting for processes to exit. Sep 4 17:28:54.248422 systemd[1]: Started sshd@4-10.0.0.44:22-10.0.0.1:38276.service - OpenSSH per-connection server daemon (10.0.0.1:38276). Sep 4 17:28:54.249542 systemd-logind[1425]: Removed session 4. Sep 4 17:28:54.278272 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 38276 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:28:54.279756 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:28:54.284452 systemd-logind[1425]: New session 5 of user core. Sep 4 17:28:54.297977 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 4 17:28:54.347480 sshd[1581]: pam_unix(sshd:session): session closed for user core Sep 4 17:28:54.361958 systemd[1]: sshd@4-10.0.0.44:22-10.0.0.1:38276.service: Deactivated successfully. Sep 4 17:28:54.364144 systemd[1]: session-5.scope: Deactivated successfully. Sep 4 17:28:54.365644 systemd-logind[1425]: Session 5 logged out. Waiting for processes to exit. Sep 4 17:28:54.376320 systemd[1]: Started sshd@5-10.0.0.44:22-10.0.0.1:38286.service - OpenSSH per-connection server daemon (10.0.0.1:38286). Sep 4 17:28:54.377457 systemd-logind[1425]: Removed session 5. Sep 4 17:28:54.406422 sshd[1589]: Accepted publickey for core from 10.0.0.1 port 38286 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:28:54.408255 sshd[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:28:54.412594 systemd-logind[1425]: New session 6 of user core. Sep 4 17:28:54.421952 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 4 17:28:54.477416 sshd[1589]: pam_unix(sshd:session): session closed for user core Sep 4 17:28:54.504304 systemd[1]: sshd@5-10.0.0.44:22-10.0.0.1:38286.service: Deactivated successfully. Sep 4 17:28:54.506962 systemd[1]: session-6.scope: Deactivated successfully. Sep 4 17:28:54.508757 systemd-logind[1425]: Session 6 logged out. Waiting for processes to exit. 
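The kubelet failure logged above (and repeated on the later restart attempts) comes down to /var/lib/kubelet/config.yaml not existing yet at this point in the boot, so the unit exits and systemd schedules a restart. The shape of that fail-fast check, as an illustrative sketch rather than the kubelet's real code (the file is normally written later, for example by kubeadm init/join):

    from pathlib import Path

    config = Path("/var/lib/kubelet/config.yaml")
    if not config.is_file():
        # Mirrors the logged error string before any further startup work is attempted.
        raise SystemExit(f"failed to load kubelet config file, path: {config}: no such file or directory")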
Sep 4 17:28:54.510390 systemd[1]: Started sshd@6-10.0.0.44:22-10.0.0.1:38294.service - OpenSSH per-connection server daemon (10.0.0.1:38294). Sep 4 17:28:54.511374 systemd-logind[1425]: Removed session 6. Sep 4 17:28:54.543211 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 38294 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:28:54.544901 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:28:54.548943 systemd-logind[1425]: New session 7 of user core. Sep 4 17:28:54.562896 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 4 17:28:54.623817 sudo[1599]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 4 17:28:54.624185 sudo[1599]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:28:54.642686 sudo[1599]: pam_unix(sudo:session): session closed for user root Sep 4 17:28:54.644968 sshd[1596]: pam_unix(sshd:session): session closed for user core Sep 4 17:28:54.654993 systemd[1]: sshd@6-10.0.0.44:22-10.0.0.1:38294.service: Deactivated successfully. Sep 4 17:28:54.657104 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 17:28:54.659052 systemd-logind[1425]: Session 7 logged out. Waiting for processes to exit. Sep 4 17:28:54.674104 systemd[1]: Started sshd@7-10.0.0.44:22-10.0.0.1:38296.service - OpenSSH per-connection server daemon (10.0.0.1:38296). Sep 4 17:28:54.675089 systemd-logind[1425]: Removed session 7. Sep 4 17:28:54.705755 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 38296 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:28:54.707653 sshd[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:28:54.711924 systemd-logind[1425]: New session 8 of user core. Sep 4 17:28:54.721919 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 4 17:28:54.776326 sudo[1608]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 4 17:28:54.776685 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:28:54.780992 sudo[1608]: pam_unix(sudo:session): session closed for user root Sep 4 17:28:54.787314 sudo[1607]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 4 17:28:54.787662 sudo[1607]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:28:54.809077 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 4 17:28:54.810758 auditctl[1611]: No rules Sep 4 17:28:54.812248 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 17:28:54.812537 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 4 17:28:54.814363 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 4 17:28:54.846294 augenrules[1629]: No rules Sep 4 17:28:54.848162 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 4 17:28:54.849649 sudo[1607]: pam_unix(sudo:session): session closed for user root Sep 4 17:28:54.851538 sshd[1604]: pam_unix(sshd:session): session closed for user core Sep 4 17:28:54.862712 systemd[1]: sshd@7-10.0.0.44:22-10.0.0.1:38296.service: Deactivated successfully. Sep 4 17:28:54.864814 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 17:28:54.866588 systemd-logind[1425]: Session 8 logged out. Waiting for processes to exit. 
Sep 4 17:28:54.867950 systemd[1]: Started sshd@8-10.0.0.44:22-10.0.0.1:38308.service - OpenSSH per-connection server daemon (10.0.0.1:38308). Sep 4 17:28:54.868701 systemd-logind[1425]: Removed session 8. Sep 4 17:28:54.912697 sshd[1637]: Accepted publickey for core from 10.0.0.1 port 38308 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:28:54.914352 sshd[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:28:54.918388 systemd-logind[1425]: New session 9 of user core. Sep 4 17:28:54.927985 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 17:28:54.983567 sudo[1640]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 4 17:28:54.984060 sudo[1640]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:28:55.100039 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 4 17:28:55.100243 (dockerd)[1650]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 4 17:28:55.562098 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 4 17:28:55.617144 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:28:55.666873 dockerd[1650]: time="2024-09-04T17:28:55.666763726Z" level=info msg="Starting up" Sep 4 17:28:55.797097 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:28:55.801704 (kubelet)[1681]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:28:56.441058 kubelet[1681]: E0904 17:28:56.440913 1681 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:28:56.448946 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:28:56.449186 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:28:56.682378 systemd[1]: var-lib-docker-metacopy\x2dcheck2823380083-merged.mount: Deactivated successfully. Sep 4 17:28:56.731937 dockerd[1650]: time="2024-09-04T17:28:56.731753422Z" level=info msg="Loading containers: start." Sep 4 17:28:56.898828 kernel: Initializing XFRM netlink socket Sep 4 17:28:56.979471 systemd-networkd[1383]: docker0: Link UP Sep 4 17:28:57.002483 dockerd[1650]: time="2024-09-04T17:28:57.002369724Z" level=info msg="Loading containers: done." Sep 4 17:28:57.021068 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2128249643-merged.mount: Deactivated successfully. 
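The dockerd brought up in the entries above reports "API listen on /run/docker.sock" in the very next entries; once it does, the engine answers its HTTP API over that Unix socket. A minimal probe from Python, assuming only the standard library and read access to the socket:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that talks to a Unix socket instead of a TCP host."""
        def __init__(self, path):
            super().__init__("localhost")  # host is only used for the Host: header
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/docker.sock")
    conn.request("GET", "/version")            # Docker Engine API: daemon version info
    print(json.loads(conn.getresponse().read()))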
Sep 4 17:28:57.023598 dockerd[1650]: time="2024-09-04T17:28:57.023543503Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 4 17:28:57.023682 dockerd[1650]: time="2024-09-04T17:28:57.023667024Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 4 17:28:57.023821 dockerd[1650]: time="2024-09-04T17:28:57.023780918Z" level=info msg="Daemon has completed initialization" Sep 4 17:28:57.067819 dockerd[1650]: time="2024-09-04T17:28:57.067665838Z" level=info msg="API listen on /run/docker.sock" Sep 4 17:28:57.067941 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 4 17:28:57.967488 containerd[1450]: time="2024-09-04T17:28:57.967431329Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.8\"" Sep 4 17:29:00.337351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4272253630.mount: Deactivated successfully. Sep 4 17:29:02.380926 containerd[1450]: time="2024-09-04T17:29:02.380867237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:02.382751 containerd[1450]: time="2024-09-04T17:29:02.382708410Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.8: active requests=0, bytes read=35232949" Sep 4 17:29:02.384099 containerd[1450]: time="2024-09-04T17:29:02.384070735Z" level=info msg="ImageCreate event name:\"sha256:ea7e9c4af6a6f4f2fc0b86f81d102bf60167b3cbd4ce7d1545833b0283ab80b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:02.386868 containerd[1450]: time="2024-09-04T17:29:02.386822285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6f72fa926c9b05e10629fe1a092fd28dcd65b4fdfd0cc7bd55f85a57a6ba1fa5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:02.387936 containerd[1450]: time="2024-09-04T17:29:02.387903693Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.8\" with image id \"sha256:ea7e9c4af6a6f4f2fc0b86f81d102bf60167b3cbd4ce7d1545833b0283ab80b7\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6f72fa926c9b05e10629fe1a092fd28dcd65b4fdfd0cc7bd55f85a57a6ba1fa5\", size \"35229749\" in 4.42042259s" Sep 4 17:29:02.387998 containerd[1450]: time="2024-09-04T17:29:02.387936645Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.8\" returns image reference \"sha256:ea7e9c4af6a6f4f2fc0b86f81d102bf60167b3cbd4ce7d1545833b0283ab80b7\"" Sep 4 17:29:02.409915 containerd[1450]: time="2024-09-04T17:29:02.409864447Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.8\"" Sep 4 17:29:04.422841 containerd[1450]: time="2024-09-04T17:29:04.422749064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:04.424181 containerd[1450]: time="2024-09-04T17:29:04.424146024Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.8: active requests=0, bytes read=32206206" Sep 4 17:29:04.425805 containerd[1450]: time="2024-09-04T17:29:04.425757516Z" level=info msg="ImageCreate event name:\"sha256:b469e8ed7312f97f28340218ee5884606f9998ad73d3692a6078a2692253589a\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:04.429380 containerd[1450]: time="2024-09-04T17:29:04.429335777Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6f27d63ded20614c68554b477cd7a78eda78a498a92bfe8935cf964ca5b74d0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:04.430692 containerd[1450]: time="2024-09-04T17:29:04.430653638Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.8\" with image id \"sha256:b469e8ed7312f97f28340218ee5884606f9998ad73d3692a6078a2692253589a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6f27d63ded20614c68554b477cd7a78eda78a498a92bfe8935cf964ca5b74d0b\", size \"33756152\" in 2.020750789s" Sep 4 17:29:04.430799 containerd[1450]: time="2024-09-04T17:29:04.430691139Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.8\" returns image reference \"sha256:b469e8ed7312f97f28340218ee5884606f9998ad73d3692a6078a2692253589a\"" Sep 4 17:29:04.457526 containerd[1450]: time="2024-09-04T17:29:04.457460210Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.8\"" Sep 4 17:29:05.686232 containerd[1450]: time="2024-09-04T17:29:05.686160792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:05.687431 containerd[1450]: time="2024-09-04T17:29:05.687390959Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.8: active requests=0, bytes read=17321507" Sep 4 17:29:05.688899 containerd[1450]: time="2024-09-04T17:29:05.688859553Z" level=info msg="ImageCreate event name:\"sha256:e932331104a0d08ad33e8c298f0c2a9a23378869c8fc0915df299b611c196f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:05.692177 containerd[1450]: time="2024-09-04T17:29:05.692125769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:da74a66675d95e39ec25da5e70729da746d0fa0b15ee0da872ac980519bc28bd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:05.693295 containerd[1450]: time="2024-09-04T17:29:05.693259084Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.8\" with image id \"sha256:e932331104a0d08ad33e8c298f0c2a9a23378869c8fc0915df299b611c196f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:da74a66675d95e39ec25da5e70729da746d0fa0b15ee0da872ac980519bc28bd\", size \"18871471\" in 1.235751665s" Sep 4 17:29:05.693295 containerd[1450]: time="2024-09-04T17:29:05.693294571Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.8\" returns image reference \"sha256:e932331104a0d08ad33e8c298f0c2a9a23378869c8fc0915df299b611c196f21\"" Sep 4 17:29:05.716186 containerd[1450]: time="2024-09-04T17:29:05.716147268Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.8\"" Sep 4 17:29:06.627422 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 4 17:29:06.634018 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:29:06.755821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1800520672.mount: Deactivated successfully. Sep 4 17:29:06.791628 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 17:29:06.797060 (kubelet)[1914]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:29:07.038660 kubelet[1914]: E0904 17:29:07.038458 1914 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:29:07.042766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:29:07.042976 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:29:07.821467 containerd[1450]: time="2024-09-04T17:29:07.821388857Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:07.855852 containerd[1450]: time="2024-09-04T17:29:07.855777919Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.8: active requests=0, bytes read=28600380" Sep 4 17:29:07.890721 containerd[1450]: time="2024-09-04T17:29:07.890615562Z" level=info msg="ImageCreate event name:\"sha256:b6e10835ec72a48862d901a23b7c4c924300c3f6cfe89cd6031533b67e1f4e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:07.933975 containerd[1450]: time="2024-09-04T17:29:07.933940472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:559a093080f70ca863922f5e4bb90d6926d52653a91edb5b72c685ebb65f1858\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:07.934549 containerd[1450]: time="2024-09-04T17:29:07.934522403Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.8\" with image id \"sha256:b6e10835ec72a48862d901a23b7c4c924300c3f6cfe89cd6031533b67e1f4e54\", repo tag \"registry.k8s.io/kube-proxy:v1.29.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:559a093080f70ca863922f5e4bb90d6926d52653a91edb5b72c685ebb65f1858\", size \"28599399\" in 2.218347193s" Sep 4 17:29:07.934583 containerd[1450]: time="2024-09-04T17:29:07.934551798Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.8\" returns image reference \"sha256:b6e10835ec72a48862d901a23b7c4c924300c3f6cfe89cd6031533b67e1f4e54\"" Sep 4 17:29:07.957922 containerd[1450]: time="2024-09-04T17:29:07.957878314Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Sep 4 17:29:08.507358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2454182612.mount: Deactivated successfully. 
Sep 4 17:29:09.164434 containerd[1450]: time="2024-09-04T17:29:09.164372671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:09.165106 containerd[1450]: time="2024-09-04T17:29:09.165017871Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Sep 4 17:29:09.168113 containerd[1450]: time="2024-09-04T17:29:09.168078731Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:09.171174 containerd[1450]: time="2024-09-04T17:29:09.171114765Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:09.172260 containerd[1450]: time="2024-09-04T17:29:09.172217022Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.214303192s" Sep 4 17:29:09.172260 containerd[1450]: time="2024-09-04T17:29:09.172255083Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Sep 4 17:29:09.195821 containerd[1450]: time="2024-09-04T17:29:09.195772176Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Sep 4 17:29:09.662280 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4111656861.mount: Deactivated successfully. 
Sep 4 17:29:09.669506 containerd[1450]: time="2024-09-04T17:29:09.669453899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:09.670211 containerd[1450]: time="2024-09-04T17:29:09.670165243Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Sep 4 17:29:09.671621 containerd[1450]: time="2024-09-04T17:29:09.671578323Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:09.673774 containerd[1450]: time="2024-09-04T17:29:09.673740117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:09.674586 containerd[1450]: time="2024-09-04T17:29:09.674554815Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 478.736592ms" Sep 4 17:29:09.674626 containerd[1450]: time="2024-09-04T17:29:09.674585683Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Sep 4 17:29:09.698523 containerd[1450]: time="2024-09-04T17:29:09.698473711Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Sep 4 17:29:10.270423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4102809201.mount: Deactivated successfully. Sep 4 17:29:12.095166 containerd[1450]: time="2024-09-04T17:29:12.095084722Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:12.098091 containerd[1450]: time="2024-09-04T17:29:12.098005320Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Sep 4 17:29:12.099319 containerd[1450]: time="2024-09-04T17:29:12.099272696Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:12.102187 containerd[1450]: time="2024-09-04T17:29:12.102151746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:12.103626 containerd[1450]: time="2024-09-04T17:29:12.103568502Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.405054306s" Sep 4 17:29:12.103626 containerd[1450]: time="2024-09-04T17:29:12.103618446Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Sep 4 17:29:14.290504 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
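The "Pulled image ... in ..." entries above report both the size containerd attributes to each image and the wall-clock pull time, so effective throughput can be read straight off them; for example, for the etcd pull just logged:

    # Figures copied from the containerd log entry for registry.k8s.io/etcd:3.5.10-0.
    size_bytes = 56_649_232
    elapsed_s = 2.405054306

    print(f"{size_bytes / elapsed_s / 1_000_000:.1f} MB/s")  # roughly 23.6 MB/s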
Sep 4 17:29:14.304137 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:29:14.324059 systemd[1]: Reloading requested from client PID 2121 ('systemctl') (unit session-9.scope)... Sep 4 17:29:14.324086 systemd[1]: Reloading... Sep 4 17:29:14.403821 zram_generator::config[2161]: No configuration found. Sep 4 17:29:14.666905 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:29:14.748045 systemd[1]: Reloading finished in 423 ms. Sep 4 17:29:14.815335 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 4 17:29:14.815439 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 4 17:29:14.815736 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:29:14.817480 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:29:14.974668 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:29:14.980343 (kubelet)[2206]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:29:15.022690 kubelet[2206]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:29:15.022690 kubelet[2206]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:29:15.022690 kubelet[2206]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:29:15.023750 kubelet[2206]: I0904 17:29:15.023699 2206 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:29:15.281867 kubelet[2206]: I0904 17:29:15.281742 2206 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Sep 4 17:29:15.281867 kubelet[2206]: I0904 17:29:15.281773 2206 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:29:15.282015 kubelet[2206]: I0904 17:29:15.281986 2206 server.go:919] "Client rotation is on, will bootstrap in background" Sep 4 17:29:15.299845 kubelet[2206]: E0904 17:29:15.299781 2206 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.44:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.44:6443: connect: connection refused Sep 4 17:29:15.300711 kubelet[2206]: I0904 17:29:15.300679 2206 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:29:15.315267 kubelet[2206]: I0904 17:29:15.315228 2206 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 17:29:15.316455 kubelet[2206]: I0904 17:29:15.316427 2206 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:29:15.316637 kubelet[2206]: I0904 17:29:15.316615 2206 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:29:15.316741 kubelet[2206]: I0904 17:29:15.316650 2206 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:29:15.316741 kubelet[2206]: I0904 17:29:15.316661 2206 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:29:15.316813 kubelet[2206]: I0904 17:29:15.316807 2206 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:29:15.316935 kubelet[2206]: I0904 17:29:15.316911 2206 kubelet.go:396] "Attempting to sync node with API server" Sep 4 17:29:15.316935 kubelet[2206]: I0904 17:29:15.316929 2206 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:29:15.317002 kubelet[2206]: I0904 17:29:15.316958 2206 kubelet.go:312] "Adding apiserver pod source" Sep 4 17:29:15.317002 kubelet[2206]: I0904 17:29:15.316974 2206 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:29:15.318116 kubelet[2206]: I0904 17:29:15.318093 2206 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.20" apiVersion="v1" Sep 4 17:29:15.319299 kubelet[2206]: W0904 17:29:15.319192 2206 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.44:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Sep 4 17:29:15.319299 kubelet[2206]: W0904 17:29:15.319213 2206 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Sep 4 17:29:15.319299 kubelet[2206]: E0904 17:29:15.319248 2206 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list 
*v1.Service: Get "https://10.0.0.44:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Sep 4 17:29:15.319299 kubelet[2206]: E0904 17:29:15.319267 2206 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Sep 4 17:29:15.320992 kubelet[2206]: I0904 17:29:15.320972 2206 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 17:29:15.321946 kubelet[2206]: W0904 17:29:15.321921 2206 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 4 17:29:15.322757 kubelet[2206]: I0904 17:29:15.322551 2206 server.go:1256] "Started kubelet" Sep 4 17:29:15.322757 kubelet[2206]: I0904 17:29:15.322610 2206 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:29:15.323647 kubelet[2206]: I0904 17:29:15.323380 2206 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 17:29:15.323647 kubelet[2206]: I0904 17:29:15.323584 2206 server.go:461] "Adding debug handlers to kubelet server" Sep 4 17:29:15.323713 kubelet[2206]: I0904 17:29:15.323699 2206 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:29:15.324499 kubelet[2206]: I0904 17:29:15.324473 2206 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:29:15.325954 kubelet[2206]: E0904 17:29:15.325334 2206 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 17:29:15.325954 kubelet[2206]: I0904 17:29:15.325375 2206 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:29:15.325954 kubelet[2206]: I0904 17:29:15.325460 2206 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Sep 4 17:29:15.325954 kubelet[2206]: I0904 17:29:15.325526 2206 reconciler_new.go:29] "Reconciler: start to sync state" Sep 4 17:29:15.326683 kubelet[2206]: W0904 17:29:15.326172 2206 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Sep 4 17:29:15.326683 kubelet[2206]: E0904 17:29:15.326585 2206 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Sep 4 17:29:15.327905 kubelet[2206]: E0904 17:29:15.327858 2206 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:29:15.327905 kubelet[2206]: E0904 17:29:15.327901 2206 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.44:6443: connect: connection refused" interval="200ms" Sep 4 17:29:15.328070 kubelet[2206]: I0904 17:29:15.328048 2206 factory.go:221] Registration of the systemd container factory successfully Sep 4 17:29:15.328167 kubelet[2206]: I0904 17:29:15.328139 2206 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 17:29:15.329201 kubelet[2206]: I0904 17:29:15.329185 2206 factory.go:221] Registration of the containerd container factory successfully Sep 4 17:29:15.329446 kubelet[2206]: E0904 17:29:15.329413 2206 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.44:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.44:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17f21ab12240bb66 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-09-04 17:29:15.32252247 +0000 UTC m=+0.337837727,LastTimestamp:2024-09-04 17:29:15.32252247 +0000 UTC m=+0.337837727,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 4 17:29:15.344913 kubelet[2206]: I0904 17:29:15.344879 2206 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:29:15.344913 kubelet[2206]: I0904 17:29:15.344908 2206 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:29:15.345104 kubelet[2206]: I0904 17:29:15.344926 2206 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:29:15.346135 kubelet[2206]: I0904 17:29:15.346113 2206 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:29:15.347681 kubelet[2206]: I0904 17:29:15.347657 2206 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 4 17:29:15.347740 kubelet[2206]: I0904 17:29:15.347694 2206 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:29:15.347740 kubelet[2206]: I0904 17:29:15.347716 2206 kubelet.go:2329] "Starting kubelet main sync loop" Sep 4 17:29:15.347817 kubelet[2206]: E0904 17:29:15.347771 2206 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:29:15.427363 kubelet[2206]: I0904 17:29:15.427317 2206 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:29:15.427712 kubelet[2206]: E0904 17:29:15.427682 2206 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.44:6443/api/v1/nodes\": dial tcp 10.0.0.44:6443: connect: connection refused" node="localhost" Sep 4 17:29:15.448867 kubelet[2206]: E0904 17:29:15.448834 2206 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 17:29:15.528466 kubelet[2206]: E0904 17:29:15.528433 2206 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.44:6443: connect: connection refused" interval="400ms" Sep 4 17:29:15.629737 kubelet[2206]: I0904 17:29:15.629716 2206 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:29:15.630038 kubelet[2206]: E0904 17:29:15.630007 2206 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.44:6443/api/v1/nodes\": dial tcp 10.0.0.44:6443: connect: connection refused" node="localhost" Sep 4 17:29:15.649115 kubelet[2206]: E0904 17:29:15.649098 2206 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 17:29:15.709999 kubelet[2206]: W0904 17:29:15.709949 2206 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Sep 4 17:29:15.709999 kubelet[2206]: E0904 17:29:15.709998 2206 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Sep 4 17:29:15.710290 kubelet[2206]: I0904 17:29:15.710258 2206 policy_none.go:49] "None policy: Start" Sep 4 17:29:15.710804 kubelet[2206]: I0904 17:29:15.710776 2206 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 17:29:15.710868 kubelet[2206]: I0904 17:29:15.710821 2206 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:29:15.717924 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 4 17:29:15.731819 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 4 17:29:15.744562 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Sep 4 17:29:15.745909 kubelet[2206]: I0904 17:29:15.745775 2206 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:29:15.746170 kubelet[2206]: I0904 17:29:15.746145 2206 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:29:15.747162 kubelet[2206]: E0904 17:29:15.747137 2206 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 4 17:29:15.929743 kubelet[2206]: E0904 17:29:15.929659 2206 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.44:6443: connect: connection refused" interval="800ms" Sep 4 17:29:16.032004 kubelet[2206]: I0904 17:29:16.031976 2206 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:29:16.032335 kubelet[2206]: E0904 17:29:16.032236 2206 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.44:6443/api/v1/nodes\": dial tcp 10.0.0.44:6443: connect: connection refused" node="localhost" Sep 4 17:29:16.049478 kubelet[2206]: I0904 17:29:16.049440 2206 topology_manager.go:215] "Topology Admit Handler" podUID="9d5eb44a6eefc6f8cd131635a2d48082" podNamespace="kube-system" podName="kube-apiserver-localhost" Sep 4 17:29:16.050377 kubelet[2206]: I0904 17:29:16.050355 2206 topology_manager.go:215] "Topology Admit Handler" podUID="7fa6213ac08f24a6b78f4cd3838d26c9" podNamespace="kube-system" podName="kube-controller-manager-localhost" Sep 4 17:29:16.051118 kubelet[2206]: I0904 17:29:16.051067 2206 topology_manager.go:215] "Topology Admit Handler" podUID="d9ddd765c3b0fcde29edfee4da9578f6" podNamespace="kube-system" podName="kube-scheduler-localhost" Sep 4 17:29:16.057114 systemd[1]: Created slice kubepods-burstable-pod9d5eb44a6eefc6f8cd131635a2d48082.slice - libcontainer container kubepods-burstable-pod9d5eb44a6eefc6f8cd131635a2d48082.slice. Sep 4 17:29:16.069445 systemd[1]: Created slice kubepods-burstable-pod7fa6213ac08f24a6b78f4cd3838d26c9.slice - libcontainer container kubepods-burstable-pod7fa6213ac08f24a6b78f4cd3838d26c9.slice. Sep 4 17:29:16.083353 systemd[1]: Created slice kubepods-burstable-podd9ddd765c3b0fcde29edfee4da9578f6.slice - libcontainer container kubepods-burstable-podd9ddd765c3b0fcde29edfee4da9578f6.slice. 
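[Editor's aside] The kubepods-*.slice units systemd reports here follow the kubelet's systemd cgroup-driver naming: one slice per QoS class under kubepods.slice, then one slice per pod with dashes in the pod UID rewritten as underscores (visible later in this log for UID 8de40268-c7f3-4673-b81f-4b6d46af892e). A rough Go sketch of that mapping, assuming a cgroup v2 unified hierarchy mounted at /sys/fs/cgroup:

package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// podSliceName builds the systemd slice unit for a pod, matching names created in the log
// (e.g. kubepods-burstable-pod9d5eb44a6eefc6f8cd131635a2d48082.slice).
// qos is "" (Guaranteed), "besteffort", or "burstable"; Guaranteed pods sit directly under kubepods.slice.
func podSliceName(qos, podUID string) string {
	uid := strings.ReplaceAll(podUID, "-", "_") // systemd-style escaping of the UID
	if qos == "" {
		return "kubepods-pod" + uid + ".slice"
	}
	return "kubepods-" + qos + "-pod" + uid + ".slice"
}

// cgroupPath expands the nested slices into a cgroupfs path
// (assumption: cgroup v2 mounted at /sys/fs/cgroup).
func cgroupPath(qos, podUID string) string {
	parts := []string{"/sys/fs/cgroup", "kubepods.slice"}
	if qos != "" {
		parts = append(parts, "kubepods-"+qos+".slice")
	}
	parts = append(parts, podSliceName(qos, podUID))
	return filepath.Join(parts...)
}

func main() {
	fmt.Println(podSliceName("burstable", "9d5eb44a6eefc6f8cd131635a2d48082"))
	fmt.Println(cgroupPath("besteffort", "8de40268-c7f3-4673-b81f-4b6d46af892e"))
}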
Sep 4 17:29:16.130239 kubelet[2206]: I0904 17:29:16.130195 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d9ddd765c3b0fcde29edfee4da9578f6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d9ddd765c3b0fcde29edfee4da9578f6\") " pod="kube-system/kube-scheduler-localhost" Sep 4 17:29:16.130239 kubelet[2206]: I0904 17:29:16.130232 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:29:16.130366 kubelet[2206]: I0904 17:29:16.130250 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:29:16.130366 kubelet[2206]: I0904 17:29:16.130320 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9d5eb44a6eefc6f8cd131635a2d48082-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9d5eb44a6eefc6f8cd131635a2d48082\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:29:16.130366 kubelet[2206]: I0904 17:29:16.130355 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:29:16.130479 kubelet[2206]: I0904 17:29:16.130405 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:29:16.130479 kubelet[2206]: I0904 17:29:16.130435 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:29:16.130479 kubelet[2206]: I0904 17:29:16.130460 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9d5eb44a6eefc6f8cd131635a2d48082-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9d5eb44a6eefc6f8cd131635a2d48082\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:29:16.130575 kubelet[2206]: I0904 17:29:16.130484 2206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9d5eb44a6eefc6f8cd131635a2d48082-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9d5eb44a6eefc6f8cd131635a2d48082\") " 
pod="kube-system/kube-apiserver-localhost" Sep 4 17:29:16.227111 kubelet[2206]: W0904 17:29:16.226921 2206 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Sep 4 17:29:16.227111 kubelet[2206]: E0904 17:29:16.227008 2206 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Sep 4 17:29:16.267878 kubelet[2206]: W0904 17:29:16.267806 2206 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Sep 4 17:29:16.267878 kubelet[2206]: E0904 17:29:16.267871 2206 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Sep 4 17:29:16.368615 kubelet[2206]: E0904 17:29:16.368567 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:16.369188 containerd[1450]: time="2024-09-04T17:29:16.369142144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9d5eb44a6eefc6f8cd131635a2d48082,Namespace:kube-system,Attempt:0,}" Sep 4 17:29:16.381435 kubelet[2206]: E0904 17:29:16.381401 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:16.381923 containerd[1450]: time="2024-09-04T17:29:16.381883962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7fa6213ac08f24a6b78f4cd3838d26c9,Namespace:kube-system,Attempt:0,}" Sep 4 17:29:16.386185 kubelet[2206]: E0904 17:29:16.386164 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:16.388365 containerd[1450]: time="2024-09-04T17:29:16.388334638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d9ddd765c3b0fcde29edfee4da9578f6,Namespace:kube-system,Attempt:0,}" Sep 4 17:29:16.683411 kubelet[2206]: W0904 17:29:16.683325 2206 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.44:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Sep 4 17:29:16.683411 kubelet[2206]: E0904 17:29:16.683412 2206 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.44:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Sep 4 17:29:16.730852 kubelet[2206]: E0904 17:29:16.730759 2206 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.44:6443: connect: connection refused" interval="1.6s" Sep 4 17:29:16.835719 kubelet[2206]: I0904 17:29:16.835288 2206 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:29:16.835719 kubelet[2206]: E0904 17:29:16.835663 2206 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.44:6443/api/v1/nodes\": dial tcp 10.0.0.44:6443: connect: connection refused" node="localhost" Sep 4 17:29:16.974222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount858629476.mount: Deactivated successfully. Sep 4 17:29:16.989606 containerd[1450]: time="2024-09-04T17:29:16.989481838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:29:16.997666 containerd[1450]: time="2024-09-04T17:29:16.997555256Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 4 17:29:16.998842 containerd[1450]: time="2024-09-04T17:29:16.998689361Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:29:17.002328 containerd[1450]: time="2024-09-04T17:29:17.002211452Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:29:17.006322 containerd[1450]: time="2024-09-04T17:29:17.005623058Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:29:17.007852 containerd[1450]: time="2024-09-04T17:29:17.007719689Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:29:17.009184 containerd[1450]: time="2024-09-04T17:29:17.009023724Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:29:17.012279 containerd[1450]: time="2024-09-04T17:29:17.012107909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:29:17.014238 containerd[1450]: time="2024-09-04T17:29:17.014147891Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 644.922959ms" Sep 4 17:29:17.015227 containerd[1450]: time="2024-09-04T17:29:17.015044263Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 626.656614ms" Sep 4 17:29:17.021130 containerd[1450]: 
time="2024-09-04T17:29:17.021051687Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 639.105847ms" Sep 4 17:29:17.166642 kubelet[2206]: W0904 17:29:17.166580 2206 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Sep 4 17:29:17.166642 kubelet[2206]: E0904 17:29:17.166644 2206 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Sep 4 17:29:17.181670 containerd[1450]: time="2024-09-04T17:29:17.181566651Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:29:17.181670 containerd[1450]: time="2024-09-04T17:29:17.181618338Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:29:17.181670 containerd[1450]: time="2024-09-04T17:29:17.181631744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:29:17.181670 containerd[1450]: time="2024-09-04T17:29:17.181474626Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:29:17.181670 containerd[1450]: time="2024-09-04T17:29:17.181641413Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:29:17.182241 containerd[1450]: time="2024-09-04T17:29:17.181693531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:29:17.182241 containerd[1450]: time="2024-09-04T17:29:17.182187890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:29:17.182409 containerd[1450]: time="2024-09-04T17:29:17.182359986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:29:17.184604 containerd[1450]: time="2024-09-04T17:29:17.184526359Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:29:17.184604 containerd[1450]: time="2024-09-04T17:29:17.184571815Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:29:17.184963 containerd[1450]: time="2024-09-04T17:29:17.184908374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:29:17.185168 containerd[1450]: time="2024-09-04T17:29:17.185127039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:29:17.213076 systemd[1]: Started cri-containerd-4c82ef3f34f6736024fdfcaff7bdfbdcdd7ca6accd26cd9d0dc87c475fcc94bc.scope - libcontainer container 4c82ef3f34f6736024fdfcaff7bdfbdcdd7ca6accd26cd9d0dc87c475fcc94bc. Sep 4 17:29:17.215330 systemd[1]: Started cri-containerd-86c5feb0c33cb2e428ccdcaed241027bbb06bd904d3f645fe377b8c6dcab0904.scope - libcontainer container 86c5feb0c33cb2e428ccdcaed241027bbb06bd904d3f645fe377b8c6dcab0904. Sep 4 17:29:17.218480 systemd[1]: Started cri-containerd-e372aab48b6891195e43c06552fdf25df5cdc752e287b71fd64c274d717b79fc.scope - libcontainer container e372aab48b6891195e43c06552fdf25df5cdc752e287b71fd64c274d717b79fc. Sep 4 17:29:17.262388 containerd[1450]: time="2024-09-04T17:29:17.262209067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7fa6213ac08f24a6b78f4cd3838d26c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c82ef3f34f6736024fdfcaff7bdfbdcdd7ca6accd26cd9d0dc87c475fcc94bc\"" Sep 4 17:29:17.263845 kubelet[2206]: E0904 17:29:17.263812 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:17.266668 containerd[1450]: time="2024-09-04T17:29:17.266131514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9d5eb44a6eefc6f8cd131635a2d48082,Namespace:kube-system,Attempt:0,} returns sandbox id \"86c5feb0c33cb2e428ccdcaed241027bbb06bd904d3f645fe377b8c6dcab0904\"" Sep 4 17:29:17.268000 kubelet[2206]: E0904 17:29:17.267976 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:17.270175 containerd[1450]: time="2024-09-04T17:29:17.270149161Z" level=info msg="CreateContainer within sandbox \"4c82ef3f34f6736024fdfcaff7bdfbdcdd7ca6accd26cd9d0dc87c475fcc94bc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 17:29:17.271551 containerd[1450]: time="2024-09-04T17:29:17.270634523Z" level=info msg="CreateContainer within sandbox \"86c5feb0c33cb2e428ccdcaed241027bbb06bd904d3f645fe377b8c6dcab0904\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 17:29:17.272577 containerd[1450]: time="2024-09-04T17:29:17.272526604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d9ddd765c3b0fcde29edfee4da9578f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"e372aab48b6891195e43c06552fdf25df5cdc752e287b71fd64c274d717b79fc\"" Sep 4 17:29:17.273119 kubelet[2206]: E0904 17:29:17.273095 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:17.274691 containerd[1450]: time="2024-09-04T17:29:17.274661467Z" level=info msg="CreateContainer within sandbox \"e372aab48b6891195e43c06552fdf25df5cdc752e287b71fd64c274d717b79fc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 17:29:17.294354 containerd[1450]: time="2024-09-04T17:29:17.294291532Z" level=info msg="CreateContainer within sandbox \"4c82ef3f34f6736024fdfcaff7bdfbdcdd7ca6accd26cd9d0dc87c475fcc94bc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"63487e77ffd36095475336661eb5a1d87ec6bb3cc41fa7a6d21ce0ac43c9c29c\"" Sep 4 
17:29:17.295162 containerd[1450]: time="2024-09-04T17:29:17.295109494Z" level=info msg="StartContainer for \"63487e77ffd36095475336661eb5a1d87ec6bb3cc41fa7a6d21ce0ac43c9c29c\"" Sep 4 17:29:17.301167 kubelet[2206]: E0904 17:29:17.301118 2206 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.44:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.44:6443: connect: connection refused Sep 4 17:29:17.302399 containerd[1450]: time="2024-09-04T17:29:17.302220904Z" level=info msg="CreateContainer within sandbox \"e372aab48b6891195e43c06552fdf25df5cdc752e287b71fd64c274d717b79fc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e591a5aa1e19abdec1350a4c0a5e054e125d0df71e03a01b8f8eeae269eb127e\"" Sep 4 17:29:17.302769 containerd[1450]: time="2024-09-04T17:29:17.302737244Z" level=info msg="StartContainer for \"e591a5aa1e19abdec1350a4c0a5e054e125d0df71e03a01b8f8eeae269eb127e\"" Sep 4 17:29:17.303957 containerd[1450]: time="2024-09-04T17:29:17.303921823Z" level=info msg="CreateContainer within sandbox \"86c5feb0c33cb2e428ccdcaed241027bbb06bd904d3f645fe377b8c6dcab0904\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ab0a90ef007a655e3f2216026b06849c451c3557cec26c5dda5aaf771358755f\"" Sep 4 17:29:17.304301 containerd[1450]: time="2024-09-04T17:29:17.304269222Z" level=info msg="StartContainer for \"ab0a90ef007a655e3f2216026b06849c451c3557cec26c5dda5aaf771358755f\"" Sep 4 17:29:17.336041 systemd[1]: Started cri-containerd-63487e77ffd36095475336661eb5a1d87ec6bb3cc41fa7a6d21ce0ac43c9c29c.scope - libcontainer container 63487e77ffd36095475336661eb5a1d87ec6bb3cc41fa7a6d21ce0ac43c9c29c. Sep 4 17:29:17.341544 systemd[1]: Started cri-containerd-ab0a90ef007a655e3f2216026b06849c451c3557cec26c5dda5aaf771358755f.scope - libcontainer container ab0a90ef007a655e3f2216026b06849c451c3557cec26c5dda5aaf771358755f. Sep 4 17:29:17.343941 systemd[1]: Started cri-containerd-e591a5aa1e19abdec1350a4c0a5e054e125d0df71e03a01b8f8eeae269eb127e.scope - libcontainer container e591a5aa1e19abdec1350a4c0a5e054e125d0df71e03a01b8f8eeae269eb127e. 
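[Editor's aside] Each container started in the lines above ends up as a transient cri-containerd-<container-id>.scope unit nested inside its pod slice (systemd announced those scopes a few entries earlier). A small diagnostic Go sketch, assuming the systemd cgroup driver with cgroup v2 mounted at /sys/fs/cgroup, that walks the hierarchy and lists those scopes:

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
	"strings"
)

func main() {
	// Walk the unified cgroup hierarchy and report every CRI container scope,
	// e.g. cri-containerd-ab0a90ef...358755f.scope from the log above.
	root := "/sys/fs/cgroup"
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, walkErr error) error {
		if walkErr != nil {
			return nil // ignore races with containers exiting mid-walk
		}
		name := d.Name()
		if d.IsDir() && strings.HasPrefix(name, "cri-containerd-") && strings.HasSuffix(name, ".scope") {
			id := strings.TrimSuffix(strings.TrimPrefix(name, "cri-containerd-"), ".scope")
			fmt.Printf("container %s\n  cgroup: %s\n", id, path)
		}
		return nil
	})
	if err != nil {
		fmt.Println("walk failed:", err)
	}
}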
Sep 4 17:29:17.433817 containerd[1450]: time="2024-09-04T17:29:17.433732910Z" level=info msg="StartContainer for \"ab0a90ef007a655e3f2216026b06849c451c3557cec26c5dda5aaf771358755f\" returns successfully" Sep 4 17:29:17.434307 containerd[1450]: time="2024-09-04T17:29:17.433818403Z" level=info msg="StartContainer for \"63487e77ffd36095475336661eb5a1d87ec6bb3cc41fa7a6d21ce0ac43c9c29c\" returns successfully" Sep 4 17:29:17.434934 containerd[1450]: time="2024-09-04T17:29:17.433907532Z" level=info msg="StartContainer for \"e591a5aa1e19abdec1350a4c0a5e054e125d0df71e03a01b8f8eeae269eb127e\" returns successfully" Sep 4 17:29:18.363882 kubelet[2206]: E0904 17:29:18.363835 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:18.369857 kubelet[2206]: E0904 17:29:18.368620 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:18.372076 kubelet[2206]: E0904 17:29:18.372048 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:18.437196 kubelet[2206]: I0904 17:29:18.437144 2206 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:29:18.937558 kubelet[2206]: E0904 17:29:18.937497 2206 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 4 17:29:19.031323 kubelet[2206]: I0904 17:29:19.030947 2206 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Sep 4 17:29:19.319518 kubelet[2206]: I0904 17:29:19.319335 2206 apiserver.go:52] "Watching apiserver" Sep 4 17:29:19.325766 kubelet[2206]: I0904 17:29:19.325721 2206 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 4 17:29:19.376854 kubelet[2206]: E0904 17:29:19.376817 2206 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 4 17:29:19.376854 kubelet[2206]: E0904 17:29:19.376843 2206 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 4 17:29:19.377314 kubelet[2206]: E0904 17:29:19.376846 2206 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 4 17:29:19.377314 kubelet[2206]: E0904 17:29:19.377108 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:19.377314 kubelet[2206]: E0904 17:29:19.377264 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:19.377314 kubelet[2206]: E0904 17:29:19.377286 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:20.378991 kubelet[2206]: E0904 17:29:20.378959 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:20.378991 kubelet[2206]: E0904 17:29:20.379004 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:21.349596 kubelet[2206]: E0904 17:29:21.349526 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:21.375780 kubelet[2206]: E0904 17:29:21.375744 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:21.376442 kubelet[2206]: E0904 17:29:21.376415 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:21.376687 kubelet[2206]: E0904 17:29:21.376640 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:21.571884 systemd[1]: Reloading requested from client PID 2488 ('systemctl') (unit session-9.scope)... Sep 4 17:29:21.571905 systemd[1]: Reloading... Sep 4 17:29:21.650847 zram_generator::config[2525]: No configuration found. Sep 4 17:29:21.767292 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:29:21.859625 systemd[1]: Reloading finished in 287 ms. Sep 4 17:29:21.903145 kubelet[2206]: I0904 17:29:21.902995 2206 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:29:21.903105 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:29:21.922446 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 17:29:21.922816 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:29:21.932152 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:29:22.081584 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:29:22.086186 (kubelet)[2570]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:29:22.139817 kubelet[2570]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:29:22.139817 kubelet[2570]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:29:22.139817 kubelet[2570]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
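[Editor's aside] The recurring dns.go "Nameserver limits exceeded" errors throughout this log mean the node's resolv.conf lists more than three nameservers; the kubelet keeps only the first three (the resolver limit), which is why the applied line is exactly "1.1.1.1 1.0.0.1 8.8.8.8". A stdlib-only Go sketch of that truncation, assuming the standard /etc/resolv.conf location:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	const maxNameservers = 3 // resolver limit the kubelet warns about

	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Println("cannot read resolv.conf:", err)
		return
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) <= maxNameservers {
		fmt.Println("applied nameservers:", strings.Join(servers, " "))
		return
	}
	fmt.Println("applied nameservers:", strings.Join(servers[:maxNameservers], " "))
	fmt.Println("omitted nameservers:", strings.Join(servers[maxNameservers:], " "))
}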
Sep 4 17:29:22.140174 kubelet[2570]: I0904 17:29:22.139864 2570 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:29:22.145075 kubelet[2570]: I0904 17:29:22.145040 2570 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Sep 4 17:29:22.145075 kubelet[2570]: I0904 17:29:22.145064 2570 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:29:22.145235 kubelet[2570]: I0904 17:29:22.145220 2570 server.go:919] "Client rotation is on, will bootstrap in background" Sep 4 17:29:22.146586 kubelet[2570]: I0904 17:29:22.146556 2570 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 17:29:22.148392 kubelet[2570]: I0904 17:29:22.148277 2570 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:29:22.151857 sudo[2585]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 4 17:29:22.152294 sudo[2585]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 4 17:29:22.158388 kubelet[2570]: I0904 17:29:22.158298 2570 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 4 17:29:22.159055 kubelet[2570]: I0904 17:29:22.158619 2570 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:29:22.159055 kubelet[2570]: I0904 17:29:22.158850 2570 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:29:22.159055 kubelet[2570]: I0904 17:29:22.158887 2570 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:29:22.159055 kubelet[2570]: I0904 17:29:22.158938 2570 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:29:22.159055 kubelet[2570]: I0904 17:29:22.158992 2570 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:29:22.159387 kubelet[2570]: I0904 17:29:22.159146 2570 kubelet.go:396] "Attempting to sync node with API server" Sep 4 17:29:22.159387 
kubelet[2570]: I0904 17:29:22.159162 2570 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:29:22.160520 kubelet[2570]: I0904 17:29:22.159823 2570 kubelet.go:312] "Adding apiserver pod source" Sep 4 17:29:22.160520 kubelet[2570]: I0904 17:29:22.159849 2570 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:29:22.161007 kubelet[2570]: I0904 17:29:22.160971 2570 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.20" apiVersion="v1" Sep 4 17:29:22.161164 kubelet[2570]: I0904 17:29:22.161144 2570 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 17:29:22.161691 kubelet[2570]: I0904 17:29:22.161500 2570 server.go:1256] "Started kubelet" Sep 4 17:29:22.163812 kubelet[2570]: I0904 17:29:22.163165 2570 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:29:22.166628 kubelet[2570]: I0904 17:29:22.166590 2570 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:29:22.167587 kubelet[2570]: I0904 17:29:22.167552 2570 server.go:461] "Adding debug handlers to kubelet server" Sep 4 17:29:22.168799 kubelet[2570]: I0904 17:29:22.168754 2570 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 17:29:22.169006 kubelet[2570]: I0904 17:29:22.168972 2570 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:29:22.173665 kubelet[2570]: I0904 17:29:22.173635 2570 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:29:22.173765 kubelet[2570]: I0904 17:29:22.173715 2570 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Sep 4 17:29:22.174267 kubelet[2570]: E0904 17:29:22.174147 2570 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:29:22.174988 kubelet[2570]: I0904 17:29:22.174947 2570 factory.go:221] Registration of the systemd container factory successfully Sep 4 17:29:22.175158 kubelet[2570]: I0904 17:29:22.175133 2570 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 17:29:22.176227 kubelet[2570]: I0904 17:29:22.176197 2570 reconciler_new.go:29] "Reconciler: start to sync state" Sep 4 17:29:22.177207 kubelet[2570]: I0904 17:29:22.177160 2570 factory.go:221] Registration of the containerd container factory successfully Sep 4 17:29:22.184028 kubelet[2570]: I0904 17:29:22.183972 2570 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:29:22.188708 kubelet[2570]: I0904 17:29:22.187938 2570 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 4 17:29:22.188708 kubelet[2570]: I0904 17:29:22.187982 2570 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:29:22.188708 kubelet[2570]: I0904 17:29:22.188001 2570 kubelet.go:2329] "Starting kubelet main sync loop" Sep 4 17:29:22.188708 kubelet[2570]: E0904 17:29:22.188052 2570 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:29:22.217193 kubelet[2570]: I0904 17:29:22.217163 2570 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:29:22.217193 kubelet[2570]: I0904 17:29:22.217184 2570 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:29:22.217193 kubelet[2570]: I0904 17:29:22.217203 2570 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:29:22.217349 kubelet[2570]: I0904 17:29:22.217338 2570 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 17:29:22.217372 kubelet[2570]: I0904 17:29:22.217358 2570 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 17:29:22.217372 kubelet[2570]: I0904 17:29:22.217365 2570 policy_none.go:49] "None policy: Start" Sep 4 17:29:22.218064 kubelet[2570]: I0904 17:29:22.218025 2570 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 17:29:22.218064 kubelet[2570]: I0904 17:29:22.218061 2570 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:29:22.218256 kubelet[2570]: I0904 17:29:22.218240 2570 state_mem.go:75] "Updated machine memory state" Sep 4 17:29:22.222830 kubelet[2570]: I0904 17:29:22.222801 2570 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:29:22.223138 kubelet[2570]: I0904 17:29:22.223030 2570 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:29:22.280044 kubelet[2570]: I0904 17:29:22.279986 2570 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:29:22.287749 kubelet[2570]: I0904 17:29:22.287692 2570 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Sep 4 17:29:22.287922 kubelet[2570]: I0904 17:29:22.287821 2570 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Sep 4 17:29:22.288505 kubelet[2570]: I0904 17:29:22.288459 2570 topology_manager.go:215] "Topology Admit Handler" podUID="9d5eb44a6eefc6f8cd131635a2d48082" podNamespace="kube-system" podName="kube-apiserver-localhost" Sep 4 17:29:22.288553 kubelet[2570]: I0904 17:29:22.288535 2570 topology_manager.go:215] "Topology Admit Handler" podUID="7fa6213ac08f24a6b78f4cd3838d26c9" podNamespace="kube-system" podName="kube-controller-manager-localhost" Sep 4 17:29:22.288617 kubelet[2570]: I0904 17:29:22.288576 2570 topology_manager.go:215] "Topology Admit Handler" podUID="d9ddd765c3b0fcde29edfee4da9578f6" podNamespace="kube-system" podName="kube-scheduler-localhost" Sep 4 17:29:22.294768 kubelet[2570]: E0904 17:29:22.294700 2570 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 4 17:29:22.295765 kubelet[2570]: E0904 17:29:22.295608 2570 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 4 17:29:22.295765 kubelet[2570]: E0904 17:29:22.295659 2570 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already 
exists" pod="kube-system/kube-controller-manager-localhost" Sep 4 17:29:22.377694 kubelet[2570]: I0904 17:29:22.377577 2570 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:29:22.377694 kubelet[2570]: I0904 17:29:22.377637 2570 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9d5eb44a6eefc6f8cd131635a2d48082-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9d5eb44a6eefc6f8cd131635a2d48082\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:29:22.377694 kubelet[2570]: I0904 17:29:22.377662 2570 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9d5eb44a6eefc6f8cd131635a2d48082-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9d5eb44a6eefc6f8cd131635a2d48082\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:29:22.377921 kubelet[2570]: I0904 17:29:22.377760 2570 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9d5eb44a6eefc6f8cd131635a2d48082-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9d5eb44a6eefc6f8cd131635a2d48082\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:29:22.377921 kubelet[2570]: I0904 17:29:22.377892 2570 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:29:22.377993 kubelet[2570]: I0904 17:29:22.377952 2570 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:29:22.377993 kubelet[2570]: I0904 17:29:22.377984 2570 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:29:22.378058 kubelet[2570]: I0904 17:29:22.378013 2570 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:29:22.378058 kubelet[2570]: I0904 17:29:22.378035 2570 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d9ddd765c3b0fcde29edfee4da9578f6-kubeconfig\") pod \"kube-scheduler-localhost\" 
(UID: \"d9ddd765c3b0fcde29edfee4da9578f6\") " pod="kube-system/kube-scheduler-localhost" Sep 4 17:29:22.596330 kubelet[2570]: E0904 17:29:22.595877 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:22.596450 kubelet[2570]: E0904 17:29:22.596372 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:22.598257 kubelet[2570]: E0904 17:29:22.598186 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:22.645446 sudo[2585]: pam_unix(sudo:session): session closed for user root Sep 4 17:29:23.160983 kubelet[2570]: I0904 17:29:23.160917 2570 apiserver.go:52] "Watching apiserver" Sep 4 17:29:23.174288 kubelet[2570]: I0904 17:29:23.174223 2570 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 4 17:29:23.200241 kubelet[2570]: E0904 17:29:23.200209 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:23.200457 kubelet[2570]: E0904 17:29:23.200396 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:23.208149 kubelet[2570]: E0904 17:29:23.206979 2570 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 4 17:29:23.208149 kubelet[2570]: E0904 17:29:23.207556 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:23.223355 kubelet[2570]: I0904 17:29:23.223309 2570 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.223216036 podStartE2EDuration="3.223216036s" podCreationTimestamp="2024-09-04 17:29:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:29:23.223150412 +0000 UTC m=+1.131912628" watchObservedRunningTime="2024-09-04 17:29:23.223216036 +0000 UTC m=+1.131978252" Sep 4 17:29:23.223580 kubelet[2570]: I0904 17:29:23.223432 2570 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.223414581 podStartE2EDuration="2.223414581s" podCreationTimestamp="2024-09-04 17:29:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:29:23.217822387 +0000 UTC m=+1.126584603" watchObservedRunningTime="2024-09-04 17:29:23.223414581 +0000 UTC m=+1.132176797" Sep 4 17:29:23.228446 kubelet[2570]: I0904 17:29:23.228404 2570 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.228375833 podStartE2EDuration="3.228375833s" podCreationTimestamp="2024-09-04 17:29:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:29:23.22823189 +0000 UTC m=+1.136994106" watchObservedRunningTime="2024-09-04 17:29:23.228375833 +0000 UTC m=+1.137138049" Sep 4 17:29:23.930672 sudo[1640]: pam_unix(sudo:session): session closed for user root Sep 4 17:29:23.932920 sshd[1637]: pam_unix(sshd:session): session closed for user core Sep 4 17:29:23.936044 systemd[1]: sshd@8-10.0.0.44:22-10.0.0.1:38308.service: Deactivated successfully. Sep 4 17:29:23.938058 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 17:29:23.938252 systemd[1]: session-9.scope: Consumed 4.612s CPU time, 143.4M memory peak, 0B memory swap peak. Sep 4 17:29:23.939916 systemd-logind[1425]: Session 9 logged out. Waiting for processes to exit. Sep 4 17:29:23.940880 systemd-logind[1425]: Removed session 9. Sep 4 17:29:24.202155 kubelet[2570]: E0904 17:29:24.202023 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:27.389994 kubelet[2570]: E0904 17:29:27.389905 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:28.207257 kubelet[2570]: E0904 17:29:28.207218 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:28.343505 update_engine[1426]: I0904 17:29:28.343405 1426 update_attempter.cc:509] Updating boot flags... Sep 4 17:29:28.377856 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2656) Sep 4 17:29:28.411853 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2660) Sep 4 17:29:28.445814 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2660) Sep 4 17:29:30.538878 kubelet[2570]: E0904 17:29:30.538832 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:31.161193 kubelet[2570]: E0904 17:29:31.161155 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:31.212339 kubelet[2570]: E0904 17:29:31.212301 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:31.212339 kubelet[2570]: E0904 17:29:31.212301 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:35.907848 kubelet[2570]: I0904 17:29:35.907775 2570 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 17:29:35.908422 kubelet[2570]: I0904 17:29:35.908382 2570 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 17:29:35.908455 containerd[1450]: time="2024-09-04T17:29:35.908135246Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
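[Editor's aside] At this point the kubelet has received PodCIDR 192.168.0.0/24 and pushed it to containerd, which is still waiting for a CNI plugin (Cilium, set up in the following entries) to drop its config. A small Go sketch, using the CIDR value from the log, of what that range provides for pod IPs:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// PodCIDR handed to this node in the log above.
	prefix := netip.MustParsePrefix("192.168.0.0/24")

	total := 1 << (32 - prefix.Bits()) // addresses in the range, network/broadcast included
	fmt.Printf("pod CIDR %s: base %s, %d addresses\n", prefix, prefix.Masked().Addr(), total)
}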
Sep 4 17:29:36.905837 kubelet[2570]: I0904 17:29:36.903929 2570 topology_manager.go:215] "Topology Admit Handler" podUID="8de40268-c7f3-4673-b81f-4b6d46af892e" podNamespace="kube-system" podName="kube-proxy-4nkfl" Sep 4 17:29:36.907888 kubelet[2570]: I0904 17:29:36.907844 2570 topology_manager.go:215] "Topology Admit Handler" podUID="81a99668-d619-41c4-b917-6185f65d0a91" podNamespace="kube-system" podName="cilium-fv6mk" Sep 4 17:29:36.917652 systemd[1]: Created slice kubepods-besteffort-pod8de40268_c7f3_4673_b81f_4b6d46af892e.slice - libcontainer container kubepods-besteffort-pod8de40268_c7f3_4673_b81f_4b6d46af892e.slice. Sep 4 17:29:36.931618 systemd[1]: Created slice kubepods-burstable-pod81a99668_d619_41c4_b917_6185f65d0a91.slice - libcontainer container kubepods-burstable-pod81a99668_d619_41c4_b917_6185f65d0a91.slice. Sep 4 17:29:37.048268 kubelet[2570]: I0904 17:29:37.048206 2570 topology_manager.go:215] "Topology Admit Handler" podUID="302600ec-ac4e-4a62-96ba-076e0959b549" podNamespace="kube-system" podName="cilium-operator-5cc964979-6k9cv" Sep 4 17:29:37.059158 kubelet[2570]: I0904 17:29:37.058210 2570 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/81a99668-d619-41c4-b917-6185f65d0a91-hostproc\") pod \"cilium-fv6mk\" (UID: \"81a99668-d619-41c4-b917-6185f65d0a91\") " pod="kube-system/cilium-fv6mk" Sep 4 17:29:37.059158 kubelet[2570]: I0904 17:29:37.058271 2570 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/81a99668-d619-41c4-b917-6185f65d0a91-hubble-tls\") pod \"cilium-fv6mk\" (UID: \"81a99668-d619-41c4-b917-6185f65d0a91\") " pod="kube-system/cilium-fv6mk" Sep 4 17:29:37.059158 kubelet[2570]: I0904 17:29:37.058312 2570 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8de40268-c7f3-4673-b81f-4b6d46af892e-kube-proxy\") pod \"kube-proxy-4nkfl\" (UID: \"8de40268-c7f3-4673-b81f-4b6d46af892e\") " pod="kube-system/kube-proxy-4nkfl" Sep 4 17:29:37.059158 kubelet[2570]: I0904 17:29:37.058335 2570 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/81a99668-d619-41c4-b917-6185f65d0a91-bpf-maps\") pod \"cilium-fv6mk\" (UID: \"81a99668-d619-41c4-b917-6185f65d0a91\") " pod="kube-system/cilium-fv6mk" Sep 4 17:29:37.059158 kubelet[2570]: I0904 17:29:37.058356 2570 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/81a99668-d619-41c4-b917-6185f65d0a91-cilium-cgroup\") pod \"cilium-fv6mk\" (UID: \"81a99668-d619-41c4-b917-6185f65d0a91\") " pod="kube-system/cilium-fv6mk" Sep 4 17:29:37.059158 kubelet[2570]: I0904 17:29:37.058375 2570 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/81a99668-d619-41c4-b917-6185f65d0a91-host-proc-sys-kernel\") pod \"cilium-fv6mk\" (UID: \"81a99668-d619-41c4-b917-6185f65d0a91\") " pod="kube-system/cilium-fv6mk" Sep 4 17:29:37.059469 kubelet[2570]: I0904 17:29:37.058393 2570 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8de40268-c7f3-4673-b81f-4b6d46af892e-lib-modules\") pod 
\"kube-proxy-4nkfl\" (UID: \"8de40268-c7f3-4673-b81f-4b6d46af892e\") " pod="kube-system/kube-proxy-4nkfl" Sep 4 17:29:37.059469 kubelet[2570]: I0904 17:29:37.058410 2570 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-728l7\" (UniqueName: \"kubernetes.io/projected/8de40268-c7f3-4673-b81f-4b6d46af892e-kube-api-access-728l7\") pod \"kube-proxy-4nkfl\" (UID: \"8de40268-c7f3-4673-b81f-4b6d46af892e\") " pod="kube-system/kube-proxy-4nkfl" Sep 4 17:29:37.059469 kubelet[2570]: I0904 17:29:37.058430 2570 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/81a99668-d619-41c4-b917-6185f65d0a91-cilium-config-path\") pod \"cilium-fv6mk\" (UID: \"81a99668-d619-41c4-b917-6185f65d0a91\") " pod="kube-system/cilium-fv6mk" Sep 4 17:29:37.059469 kubelet[2570]: I0904 17:29:37.058446 2570 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/81a99668-d619-41c4-b917-6185f65d0a91-etc-cni-netd\") pod \"cilium-fv6mk\" (UID: \"81a99668-d619-41c4-b917-6185f65d0a91\") " pod="kube-system/cilium-fv6mk" Sep 4 17:29:37.059469 kubelet[2570]: I0904 17:29:37.058463 2570 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/81a99668-d619-41c4-b917-6185f65d0a91-lib-modules\") pod \"cilium-fv6mk\" (UID: \"81a99668-d619-41c4-b917-6185f65d0a91\") " pod="kube-system/cilium-fv6mk" Sep 4 17:29:37.059267 systemd[1]: Created slice kubepods-besteffort-pod302600ec_ac4e_4a62_96ba_076e0959b549.slice - libcontainer container kubepods-besteffort-pod302600ec_ac4e_4a62_96ba_076e0959b549.slice. 
Sep 4 17:29:37.059723 kubelet[2570]: I0904 17:29:37.058486 2570 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/81a99668-d619-41c4-b917-6185f65d0a91-xtables-lock\") pod \"cilium-fv6mk\" (UID: \"81a99668-d619-41c4-b917-6185f65d0a91\") " pod="kube-system/cilium-fv6mk" Sep 4 17:29:37.059723 kubelet[2570]: I0904 17:29:37.058503 2570 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/81a99668-d619-41c4-b917-6185f65d0a91-cni-path\") pod \"cilium-fv6mk\" (UID: \"81a99668-d619-41c4-b917-6185f65d0a91\") " pod="kube-system/cilium-fv6mk" Sep 4 17:29:37.059723 kubelet[2570]: I0904 17:29:37.058522 2570 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/81a99668-d619-41c4-b917-6185f65d0a91-clustermesh-secrets\") pod \"cilium-fv6mk\" (UID: \"81a99668-d619-41c4-b917-6185f65d0a91\") " pod="kube-system/cilium-fv6mk" Sep 4 17:29:37.059723 kubelet[2570]: I0904 17:29:37.058540 2570 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8de40268-c7f3-4673-b81f-4b6d46af892e-xtables-lock\") pod \"kube-proxy-4nkfl\" (UID: \"8de40268-c7f3-4673-b81f-4b6d46af892e\") " pod="kube-system/kube-proxy-4nkfl" Sep 4 17:29:37.059723 kubelet[2570]: I0904 17:29:37.058560 2570 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/81a99668-d619-41c4-b917-6185f65d0a91-cilium-run\") pod \"cilium-fv6mk\" (UID: \"81a99668-d619-41c4-b917-6185f65d0a91\") " pod="kube-system/cilium-fv6mk" Sep 4 17:29:37.059723 kubelet[2570]: I0904 17:29:37.058580 2570 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/81a99668-d619-41c4-b917-6185f65d0a91-host-proc-sys-net\") pod \"cilium-fv6mk\" (UID: \"81a99668-d619-41c4-b917-6185f65d0a91\") " pod="kube-system/cilium-fv6mk" Sep 4 17:29:37.059958 kubelet[2570]: I0904 17:29:37.058602 2570 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krjml\" (UniqueName: \"kubernetes.io/projected/81a99668-d619-41c4-b917-6185f65d0a91-kube-api-access-krjml\") pod \"cilium-fv6mk\" (UID: \"81a99668-d619-41c4-b917-6185f65d0a91\") " pod="kube-system/cilium-fv6mk" Sep 4 17:29:37.160937 kubelet[2570]: I0904 17:29:37.159775 2570 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/302600ec-ac4e-4a62-96ba-076e0959b549-cilium-config-path\") pod \"cilium-operator-5cc964979-6k9cv\" (UID: \"302600ec-ac4e-4a62-96ba-076e0959b549\") " pod="kube-system/cilium-operator-5cc964979-6k9cv" Sep 4 17:29:37.160937 kubelet[2570]: I0904 17:29:37.160005 2570 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvgq9\" (UniqueName: \"kubernetes.io/projected/302600ec-ac4e-4a62-96ba-076e0959b549-kube-api-access-gvgq9\") pod \"cilium-operator-5cc964979-6k9cv\" (UID: \"302600ec-ac4e-4a62-96ba-076e0959b549\") " pod="kube-system/cilium-operator-5cc964979-6k9cv" Sep 4 17:29:37.228251 kubelet[2570]: E0904 17:29:37.228215 2570 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:37.228983 containerd[1450]: time="2024-09-04T17:29:37.228695562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4nkfl,Uid:8de40268-c7f3-4673-b81f-4b6d46af892e,Namespace:kube-system,Attempt:0,}" Sep 4 17:29:37.234298 kubelet[2570]: E0904 17:29:37.234266 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:37.234775 containerd[1450]: time="2024-09-04T17:29:37.234720108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fv6mk,Uid:81a99668-d619-41c4-b917-6185f65d0a91,Namespace:kube-system,Attempt:0,}" Sep 4 17:29:37.261076 containerd[1450]: time="2024-09-04T17:29:37.260813658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:29:37.261076 containerd[1450]: time="2024-09-04T17:29:37.260992634Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:29:37.262146 containerd[1450]: time="2024-09-04T17:29:37.261071934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:29:37.263198 containerd[1450]: time="2024-09-04T17:29:37.263062670Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:29:37.263248 containerd[1450]: time="2024-09-04T17:29:37.262937234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:29:37.265187 containerd[1450]: time="2024-09-04T17:29:37.263841294Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:29:37.265187 containerd[1450]: time="2024-09-04T17:29:37.263875830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:29:37.265187 containerd[1450]: time="2024-09-04T17:29:37.263954508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:29:37.291932 systemd[1]: Started cri-containerd-3f54d3ec0261999c61872d7a11d2c7651758b2ef20ddbf31b6f2896e2d5417b1.scope - libcontainer container 3f54d3ec0261999c61872d7a11d2c7651758b2ef20ddbf31b6f2896e2d5417b1. Sep 4 17:29:37.295263 systemd[1]: Started cri-containerd-2254547068ae61eaddcc89f7104525494c1a5beb9affcb97a3783a3c02863672.scope - libcontainer container 2254547068ae61eaddcc89f7104525494c1a5beb9affcb97a3783a3c02863672. 
Sep 4 17:29:37.318348 containerd[1450]: time="2024-09-04T17:29:37.318282415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4nkfl,Uid:8de40268-c7f3-4673-b81f-4b6d46af892e,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f54d3ec0261999c61872d7a11d2c7651758b2ef20ddbf31b6f2896e2d5417b1\"" Sep 4 17:29:37.320310 kubelet[2570]: E0904 17:29:37.320286 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:37.320935 containerd[1450]: time="2024-09-04T17:29:37.320879102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fv6mk,Uid:81a99668-d619-41c4-b917-6185f65d0a91,Namespace:kube-system,Attempt:0,} returns sandbox id \"2254547068ae61eaddcc89f7104525494c1a5beb9affcb97a3783a3c02863672\"" Sep 4 17:29:37.321959 kubelet[2570]: E0904 17:29:37.321936 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:37.323014 containerd[1450]: time="2024-09-04T17:29:37.322981798Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 4 17:29:37.323344 containerd[1450]: time="2024-09-04T17:29:37.323314414Z" level=info msg="CreateContainer within sandbox \"3f54d3ec0261999c61872d7a11d2c7651758b2ef20ddbf31b6f2896e2d5417b1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 17:29:37.343364 containerd[1450]: time="2024-09-04T17:29:37.343318676Z" level=info msg="CreateContainer within sandbox \"3f54d3ec0261999c61872d7a11d2c7651758b2ef20ddbf31b6f2896e2d5417b1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0b710781618b0c938941efb02246220fb97eff461adcb9df860eaf5f1d2fb6a0\"" Sep 4 17:29:37.343724 containerd[1450]: time="2024-09-04T17:29:37.343674215Z" level=info msg="StartContainer for \"0b710781618b0c938941efb02246220fb97eff461adcb9df860eaf5f1d2fb6a0\"" Sep 4 17:29:37.365547 kubelet[2570]: E0904 17:29:37.364676 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:37.365688 containerd[1450]: time="2024-09-04T17:29:37.365014842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-6k9cv,Uid:302600ec-ac4e-4a62-96ba-076e0959b549,Namespace:kube-system,Attempt:0,}" Sep 4 17:29:37.373945 systemd[1]: Started cri-containerd-0b710781618b0c938941efb02246220fb97eff461adcb9df860eaf5f1d2fb6a0.scope - libcontainer container 0b710781618b0c938941efb02246220fb97eff461adcb9df860eaf5f1d2fb6a0. Sep 4 17:29:37.394461 containerd[1450]: time="2024-09-04T17:29:37.394345332Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:29:37.394461 containerd[1450]: time="2024-09-04T17:29:37.394406347Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:29:37.394635 containerd[1450]: time="2024-09-04T17:29:37.394448185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:29:37.394685 containerd[1450]: time="2024-09-04T17:29:37.394575716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:29:37.410818 containerd[1450]: time="2024-09-04T17:29:37.410736304Z" level=info msg="StartContainer for \"0b710781618b0c938941efb02246220fb97eff461adcb9df860eaf5f1d2fb6a0\" returns successfully" Sep 4 17:29:37.424185 systemd[1]: Started cri-containerd-d07f6e91d28fc758aaf7aaadd16c05c999b4899ab303dffedd76ba5f230a5ef5.scope - libcontainer container d07f6e91d28fc758aaf7aaadd16c05c999b4899ab303dffedd76ba5f230a5ef5. Sep 4 17:29:37.469738 containerd[1450]: time="2024-09-04T17:29:37.469673145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-6k9cv,Uid:302600ec-ac4e-4a62-96ba-076e0959b549,Namespace:kube-system,Attempt:0,} returns sandbox id \"d07f6e91d28fc758aaf7aaadd16c05c999b4899ab303dffedd76ba5f230a5ef5\"" Sep 4 17:29:37.470866 kubelet[2570]: E0904 17:29:37.470673 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:38.226468 kubelet[2570]: E0904 17:29:38.226435 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:43.848873 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3583503457.mount: Deactivated successfully. Sep 4 17:29:46.563919 containerd[1450]: time="2024-09-04T17:29:46.563857385Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:46.564769 containerd[1450]: time="2024-09-04T17:29:46.564735074Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735335" Sep 4 17:29:46.566226 containerd[1450]: time="2024-09-04T17:29:46.566176542Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:46.567818 containerd[1450]: time="2024-09-04T17:29:46.567772491Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.244748053s" Sep 4 17:29:46.567861 containerd[1450]: time="2024-09-04T17:29:46.567823316Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 4 17:29:46.569067 containerd[1450]: time="2024-09-04T17:29:46.569033911Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 4 17:29:46.573899 containerd[1450]: time="2024-09-04T17:29:46.573852455Z" level=info msg="CreateContainer within sandbox 
\"2254547068ae61eaddcc89f7104525494c1a5beb9affcb97a3783a3c02863672\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 4 17:29:46.589075 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3932710395.mount: Deactivated successfully. Sep 4 17:29:46.593111 containerd[1450]: time="2024-09-04T17:29:46.593051188Z" level=info msg="CreateContainer within sandbox \"2254547068ae61eaddcc89f7104525494c1a5beb9affcb97a3783a3c02863672\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d91501fc9b9e1ed68b024c16bc9f323a495adff711e3095aeb74947922f183c1\"" Sep 4 17:29:46.593671 containerd[1450]: time="2024-09-04T17:29:46.593620467Z" level=info msg="StartContainer for \"d91501fc9b9e1ed68b024c16bc9f323a495adff711e3095aeb74947922f183c1\"" Sep 4 17:29:46.638036 systemd[1]: Started cri-containerd-d91501fc9b9e1ed68b024c16bc9f323a495adff711e3095aeb74947922f183c1.scope - libcontainer container d91501fc9b9e1ed68b024c16bc9f323a495adff711e3095aeb74947922f183c1. Sep 4 17:29:46.666761 containerd[1450]: time="2024-09-04T17:29:46.666703442Z" level=info msg="StartContainer for \"d91501fc9b9e1ed68b024c16bc9f323a495adff711e3095aeb74947922f183c1\" returns successfully" Sep 4 17:29:46.676842 systemd[1]: cri-containerd-d91501fc9b9e1ed68b024c16bc9f323a495adff711e3095aeb74947922f183c1.scope: Deactivated successfully. Sep 4 17:29:47.242121 kubelet[2570]: E0904 17:29:47.242065 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:47.396511 kubelet[2570]: I0904 17:29:47.396436 2570 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-4nkfl" podStartSLOduration=11.396394817000001 podStartE2EDuration="11.396394817s" podCreationTimestamp="2024-09-04 17:29:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:29:38.233846323 +0000 UTC m=+16.142608539" watchObservedRunningTime="2024-09-04 17:29:47.396394817 +0000 UTC m=+25.305157033" Sep 4 17:29:47.473526 containerd[1450]: time="2024-09-04T17:29:47.471179624Z" level=info msg="shim disconnected" id=d91501fc9b9e1ed68b024c16bc9f323a495adff711e3095aeb74947922f183c1 namespace=k8s.io Sep 4 17:29:47.473526 containerd[1450]: time="2024-09-04T17:29:47.473509381Z" level=warning msg="cleaning up after shim disconnected" id=d91501fc9b9e1ed68b024c16bc9f323a495adff711e3095aeb74947922f183c1 namespace=k8s.io Sep 4 17:29:47.473526 containerd[1450]: time="2024-09-04T17:29:47.473520061Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:29:47.586221 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d91501fc9b9e1ed68b024c16bc9f323a495adff711e3095aeb74947922f183c1-rootfs.mount: Deactivated successfully. 
Sep 4 17:29:48.244375 kubelet[2570]: E0904 17:29:48.244343 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:48.246023 containerd[1450]: time="2024-09-04T17:29:48.245966665Z" level=info msg="CreateContainer within sandbox \"2254547068ae61eaddcc89f7104525494c1a5beb9affcb97a3783a3c02863672\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 4 17:29:48.263371 containerd[1450]: time="2024-09-04T17:29:48.263321555Z" level=info msg="CreateContainer within sandbox \"2254547068ae61eaddcc89f7104525494c1a5beb9affcb97a3783a3c02863672\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dbafc0fd16f0a99757b9060e0b5495fb296199b3cbc299b133c5aaa024015f33\"" Sep 4 17:29:48.263861 containerd[1450]: time="2024-09-04T17:29:48.263825803Z" level=info msg="StartContainer for \"dbafc0fd16f0a99757b9060e0b5495fb296199b3cbc299b133c5aaa024015f33\"" Sep 4 17:29:48.295918 systemd[1]: Started cri-containerd-dbafc0fd16f0a99757b9060e0b5495fb296199b3cbc299b133c5aaa024015f33.scope - libcontainer container dbafc0fd16f0a99757b9060e0b5495fb296199b3cbc299b133c5aaa024015f33. Sep 4 17:29:48.318865 containerd[1450]: time="2024-09-04T17:29:48.318820598Z" level=info msg="StartContainer for \"dbafc0fd16f0a99757b9060e0b5495fb296199b3cbc299b133c5aaa024015f33\" returns successfully" Sep 4 17:29:48.331520 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 17:29:48.331761 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:29:48.331851 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:29:48.338321 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:29:48.338805 systemd[1]: cri-containerd-dbafc0fd16f0a99757b9060e0b5495fb296199b3cbc299b133c5aaa024015f33.scope: Deactivated successfully. Sep 4 17:29:48.374298 containerd[1450]: time="2024-09-04T17:29:48.374240132Z" level=info msg="shim disconnected" id=dbafc0fd16f0a99757b9060e0b5495fb296199b3cbc299b133c5aaa024015f33 namespace=k8s.io Sep 4 17:29:48.374298 containerd[1450]: time="2024-09-04T17:29:48.374293382Z" level=warning msg="cleaning up after shim disconnected" id=dbafc0fd16f0a99757b9060e0b5495fb296199b3cbc299b133c5aaa024015f33 namespace=k8s.io Sep 4 17:29:48.374298 containerd[1450]: time="2024-09-04T17:29:48.374301858Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:29:48.378232 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:29:48.585988 systemd[1]: run-containerd-runc-k8s.io-dbafc0fd16f0a99757b9060e0b5495fb296199b3cbc299b133c5aaa024015f33-runc.aA7OWy.mount: Deactivated successfully. Sep 4 17:29:48.586122 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dbafc0fd16f0a99757b9060e0b5495fb296199b3cbc299b133c5aaa024015f33-rootfs.mount: Deactivated successfully. Sep 4 17:29:48.591619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount862332050.mount: Deactivated successfully. 
Sep 4 17:29:49.247974 kubelet[2570]: E0904 17:29:49.247937 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:49.250103 containerd[1450]: time="2024-09-04T17:29:49.250043812Z" level=info msg="CreateContainer within sandbox \"2254547068ae61eaddcc89f7104525494c1a5beb9affcb97a3783a3c02863672\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 4 17:29:49.251109 containerd[1450]: time="2024-09-04T17:29:49.251067455Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:49.253574 containerd[1450]: time="2024-09-04T17:29:49.253533045Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907281" Sep 4 17:29:49.256068 containerd[1450]: time="2024-09-04T17:29:49.255992144Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:49.258688 containerd[1450]: time="2024-09-04T17:29:49.258638685Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.689576271s" Sep 4 17:29:49.258740 containerd[1450]: time="2024-09-04T17:29:49.258688679Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 4 17:29:49.261350 containerd[1450]: time="2024-09-04T17:29:49.261296538Z" level=info msg="CreateContainer within sandbox \"d07f6e91d28fc758aaf7aaadd16c05c999b4899ab303dffedd76ba5f230a5ef5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 4 17:29:49.274924 containerd[1450]: time="2024-09-04T17:29:49.274868800Z" level=info msg="CreateContainer within sandbox \"2254547068ae61eaddcc89f7104525494c1a5beb9affcb97a3783a3c02863672\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4c9011d1c2a90d7ebdf9376854f00729239fae77f34cbb84fe130fb84a939f62\"" Sep 4 17:29:49.275500 containerd[1450]: time="2024-09-04T17:29:49.275465100Z" level=info msg="StartContainer for \"4c9011d1c2a90d7ebdf9376854f00729239fae77f34cbb84fe130fb84a939f62\"" Sep 4 17:29:49.278261 containerd[1450]: time="2024-09-04T17:29:49.278224002Z" level=info msg="CreateContainer within sandbox \"d07f6e91d28fc758aaf7aaadd16c05c999b4899ab303dffedd76ba5f230a5ef5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c1ab2cc1cce282ead9893c270548006000afdab456d1d125213e1c955b92aede\"" Sep 4 17:29:49.278678 containerd[1450]: time="2024-09-04T17:29:49.278654380Z" level=info msg="StartContainer for \"c1ab2cc1cce282ead9893c270548006000afdab456d1d125213e1c955b92aede\"" Sep 4 17:29:49.311983 systemd[1]: Started cri-containerd-c1ab2cc1cce282ead9893c270548006000afdab456d1d125213e1c955b92aede.scope - libcontainer container 
c1ab2cc1cce282ead9893c270548006000afdab456d1d125213e1c955b92aede. Sep 4 17:29:49.315297 systemd[1]: Started cri-containerd-4c9011d1c2a90d7ebdf9376854f00729239fae77f34cbb84fe130fb84a939f62.scope - libcontainer container 4c9011d1c2a90d7ebdf9376854f00729239fae77f34cbb84fe130fb84a939f62. Sep 4 17:29:49.350196 systemd[1]: cri-containerd-4c9011d1c2a90d7ebdf9376854f00729239fae77f34cbb84fe130fb84a939f62.scope: Deactivated successfully. Sep 4 17:29:49.457206 containerd[1450]: time="2024-09-04T17:29:49.457139103Z" level=info msg="StartContainer for \"c1ab2cc1cce282ead9893c270548006000afdab456d1d125213e1c955b92aede\" returns successfully" Sep 4 17:29:49.457457 containerd[1450]: time="2024-09-04T17:29:49.457157027Z" level=info msg="StartContainer for \"4c9011d1c2a90d7ebdf9376854f00729239fae77f34cbb84fe130fb84a939f62\" returns successfully" Sep 4 17:29:49.493018 containerd[1450]: time="2024-09-04T17:29:49.492650683Z" level=info msg="shim disconnected" id=4c9011d1c2a90d7ebdf9376854f00729239fae77f34cbb84fe130fb84a939f62 namespace=k8s.io Sep 4 17:29:49.493018 containerd[1450]: time="2024-09-04T17:29:49.492710055Z" level=warning msg="cleaning up after shim disconnected" id=4c9011d1c2a90d7ebdf9376854f00729239fae77f34cbb84fe130fb84a939f62 namespace=k8s.io Sep 4 17:29:49.493018 containerd[1450]: time="2024-09-04T17:29:49.492718451Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:29:49.722171 systemd[1]: Started sshd@9-10.0.0.44:22-10.0.0.1:46604.service - OpenSSH per-connection server daemon (10.0.0.1:46604). Sep 4 17:29:49.786635 sshd[3218]: Accepted publickey for core from 10.0.0.1 port 46604 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:29:49.788688 sshd[3218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:29:49.794976 systemd-logind[1425]: New session 10 of user core. Sep 4 17:29:49.800952 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 4 17:29:50.044156 sshd[3218]: pam_unix(sshd:session): session closed for user core Sep 4 17:29:50.048954 systemd[1]: sshd@9-10.0.0.44:22-10.0.0.1:46604.service: Deactivated successfully. Sep 4 17:29:50.051555 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 17:29:50.052367 systemd-logind[1425]: Session 10 logged out. Waiting for processes to exit. Sep 4 17:29:50.053558 systemd-logind[1425]: Removed session 10. Sep 4 17:29:50.251184 kubelet[2570]: E0904 17:29:50.251149 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:50.253188 kubelet[2570]: E0904 17:29:50.253145 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:50.256067 containerd[1450]: time="2024-09-04T17:29:50.256002559Z" level=info msg="CreateContainer within sandbox \"2254547068ae61eaddcc89f7104525494c1a5beb9affcb97a3783a3c02863672\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 4 17:29:50.274340 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount794430181.mount: Deactivated successfully. 
Sep 4 17:29:50.274641 kubelet[2570]: I0904 17:29:50.274590 2570 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-6k9cv" podStartSLOduration=1.486852441 podStartE2EDuration="13.274539764s" podCreationTimestamp="2024-09-04 17:29:37 +0000 UTC" firstStartedPulling="2024-09-04 17:29:37.471662007 +0000 UTC m=+15.380424223" lastFinishedPulling="2024-09-04 17:29:49.25934932 +0000 UTC m=+27.168111546" observedRunningTime="2024-09-04 17:29:50.259086361 +0000 UTC m=+28.167848578" watchObservedRunningTime="2024-09-04 17:29:50.274539764 +0000 UTC m=+28.183302000" Sep 4 17:29:50.275335 containerd[1450]: time="2024-09-04T17:29:50.275070731Z" level=info msg="CreateContainer within sandbox \"2254547068ae61eaddcc89f7104525494c1a5beb9affcb97a3783a3c02863672\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"08b558dfd3c529abdb7a3a778b6b91ac8a629fc96fcb7896bf850e83dad5df48\"" Sep 4 17:29:50.275926 containerd[1450]: time="2024-09-04T17:29:50.275678222Z" level=info msg="StartContainer for \"08b558dfd3c529abdb7a3a778b6b91ac8a629fc96fcb7896bf850e83dad5df48\"" Sep 4 17:29:50.312049 systemd[1]: Started cri-containerd-08b558dfd3c529abdb7a3a778b6b91ac8a629fc96fcb7896bf850e83dad5df48.scope - libcontainer container 08b558dfd3c529abdb7a3a778b6b91ac8a629fc96fcb7896bf850e83dad5df48. Sep 4 17:29:50.338597 systemd[1]: cri-containerd-08b558dfd3c529abdb7a3a778b6b91ac8a629fc96fcb7896bf850e83dad5df48.scope: Deactivated successfully. Sep 4 17:29:50.341334 containerd[1450]: time="2024-09-04T17:29:50.341283665Z" level=info msg="StartContainer for \"08b558dfd3c529abdb7a3a778b6b91ac8a629fc96fcb7896bf850e83dad5df48\" returns successfully" Sep 4 17:29:50.371892 containerd[1450]: time="2024-09-04T17:29:50.371813402Z" level=info msg="shim disconnected" id=08b558dfd3c529abdb7a3a778b6b91ac8a629fc96fcb7896bf850e83dad5df48 namespace=k8s.io Sep 4 17:29:50.371892 containerd[1450]: time="2024-09-04T17:29:50.371863577Z" level=warning msg="cleaning up after shim disconnected" id=08b558dfd3c529abdb7a3a778b6b91ac8a629fc96fcb7896bf850e83dad5df48 namespace=k8s.io Sep 4 17:29:50.371892 containerd[1450]: time="2024-09-04T17:29:50.371871763Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:29:50.586211 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-08b558dfd3c529abdb7a3a778b6b91ac8a629fc96fcb7896bf850e83dad5df48-rootfs.mount: Deactivated successfully. Sep 4 17:29:51.256860 kubelet[2570]: E0904 17:29:51.256822 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:51.257385 kubelet[2570]: E0904 17:29:51.256900 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:51.259330 containerd[1450]: time="2024-09-04T17:29:51.259283487Z" level=info msg="CreateContainer within sandbox \"2254547068ae61eaddcc89f7104525494c1a5beb9affcb97a3783a3c02863672\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 4 17:29:51.276611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount54773051.mount: Deactivated successfully. 
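
The "Observed pod startup duration" entry for cilium-operator-5cc964979-6k9cv at the top of this block reports podStartE2EDuration=13.274539764s but podStartSLOduration=1.486852441s. The SLO figure appears to be the end-to-end time minus the window spent pulling the operator image (firstStartedPulling through lastFinishedPulling). A quick check with the timestamps copied from the log reproduces both figures to within rounding of the printed times; the helper below is a sketch, not kubelet's latency tracker:

    package main

    import (
        "fmt"
        "time"
    )

    // layout matches the format kubelet prints for these timestamps.
    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func mustParse(s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2024-09-04 17:29:37 +0000 UTC")             // podCreationTimestamp
        firstPull := mustParse("2024-09-04 17:29:37.471662007 +0000 UTC") // firstStartedPulling
        lastPull := mustParse("2024-09-04 17:29:49.25934932 +0000 UTC")   // lastFinishedPulling
        running := mustParse("2024-09-04 17:29:50.274539764 +0000 UTC")   // watchObservedRunningTime

        e2e := running.Sub(created)          // ~13.274539764s (podStartE2EDuration)
        slo := e2e - lastPull.Sub(firstPull) // ~1.486852s (podStartSLOduration)
        fmt.Println(e2e, slo)
    }
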
Sep 4 17:29:51.277668 containerd[1450]: time="2024-09-04T17:29:51.277628398Z" level=info msg="CreateContainer within sandbox \"2254547068ae61eaddcc89f7104525494c1a5beb9affcb97a3783a3c02863672\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e06a22fefce2f0924d295cc47cbf2beb1e1c814e318d18552ca0da7d3913dd0a\"" Sep 4 17:29:51.278042 containerd[1450]: time="2024-09-04T17:29:51.278021156Z" level=info msg="StartContainer for \"e06a22fefce2f0924d295cc47cbf2beb1e1c814e318d18552ca0da7d3913dd0a\"" Sep 4 17:29:51.309946 systemd[1]: Started cri-containerd-e06a22fefce2f0924d295cc47cbf2beb1e1c814e318d18552ca0da7d3913dd0a.scope - libcontainer container e06a22fefce2f0924d295cc47cbf2beb1e1c814e318d18552ca0da7d3913dd0a. Sep 4 17:29:51.338896 containerd[1450]: time="2024-09-04T17:29:51.338850608Z" level=info msg="StartContainer for \"e06a22fefce2f0924d295cc47cbf2beb1e1c814e318d18552ca0da7d3913dd0a\" returns successfully" Sep 4 17:29:51.477651 kubelet[2570]: I0904 17:29:51.477602 2570 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Sep 4 17:29:51.491477 kubelet[2570]: I0904 17:29:51.491424 2570 topology_manager.go:215] "Topology Admit Handler" podUID="ff22952e-bd7e-4096-8636-f6635f08c4cf" podNamespace="kube-system" podName="coredns-76f75df574-x5tj6" Sep 4 17:29:51.493748 kubelet[2570]: I0904 17:29:51.493385 2570 topology_manager.go:215] "Topology Admit Handler" podUID="cdf4b992-bdbd-4adf-91d0-2e3aa9087500" podNamespace="kube-system" podName="coredns-76f75df574-srwg8" Sep 4 17:29:51.503063 systemd[1]: Created slice kubepods-burstable-podff22952e_bd7e_4096_8636_f6635f08c4cf.slice - libcontainer container kubepods-burstable-podff22952e_bd7e_4096_8636_f6635f08c4cf.slice. Sep 4 17:29:51.512688 systemd[1]: Created slice kubepods-burstable-podcdf4b992_bdbd_4adf_91d0_2e3aa9087500.slice - libcontainer container kubepods-burstable-podcdf4b992_bdbd_4adf_91d0_2e3aa9087500.slice. 
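
The "Created slice kubepods-burstable-pod....slice" lines show the systemd cgroup naming the kubelet uses for the two CoreDNS pods just admitted: the QoS class picks the parent slice and the pod UID is embedded with its dashes turned into underscores. A small sketch of that convention, mirroring what the names in the log imply rather than the cgroup driver's actual code:

    package main

    import (
        "fmt"
        "strings"
    )

    // podSliceName reproduces the unit names seen in the log: QoS class plus the
    // pod UID with "-" escaped to "_", suffixed with ".slice".
    func podSliceName(qosClass, podUID string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
        // UID taken from the coredns-76f75df574-srwg8 entries above.
        fmt.Println(podSliceName("burstable", "cdf4b992-bdbd-4adf-91d0-2e3aa9087500"))
        // kubepods-burstable-podcdf4b992_bdbd_4adf_91d0_2e3aa9087500.slice
    }
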
Sep 4 17:29:51.552217 kubelet[2570]: I0904 17:29:51.552152 2570 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw4jd\" (UniqueName: \"kubernetes.io/projected/cdf4b992-bdbd-4adf-91d0-2e3aa9087500-kube-api-access-mw4jd\") pod \"coredns-76f75df574-srwg8\" (UID: \"cdf4b992-bdbd-4adf-91d0-2e3aa9087500\") " pod="kube-system/coredns-76f75df574-srwg8" Sep 4 17:29:51.552217 kubelet[2570]: I0904 17:29:51.552228 2570 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff22952e-bd7e-4096-8636-f6635f08c4cf-config-volume\") pod \"coredns-76f75df574-x5tj6\" (UID: \"ff22952e-bd7e-4096-8636-f6635f08c4cf\") " pod="kube-system/coredns-76f75df574-x5tj6" Sep 4 17:29:51.552396 kubelet[2570]: I0904 17:29:51.552263 2570 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhtnj\" (UniqueName: \"kubernetes.io/projected/ff22952e-bd7e-4096-8636-f6635f08c4cf-kube-api-access-xhtnj\") pod \"coredns-76f75df574-x5tj6\" (UID: \"ff22952e-bd7e-4096-8636-f6635f08c4cf\") " pod="kube-system/coredns-76f75df574-x5tj6" Sep 4 17:29:51.552396 kubelet[2570]: I0904 17:29:51.552295 2570 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cdf4b992-bdbd-4adf-91d0-2e3aa9087500-config-volume\") pod \"coredns-76f75df574-srwg8\" (UID: \"cdf4b992-bdbd-4adf-91d0-2e3aa9087500\") " pod="kube-system/coredns-76f75df574-srwg8" Sep 4 17:29:51.809703 kubelet[2570]: E0904 17:29:51.809592 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:51.810581 containerd[1450]: time="2024-09-04T17:29:51.810497192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-x5tj6,Uid:ff22952e-bd7e-4096-8636-f6635f08c4cf,Namespace:kube-system,Attempt:0,}" Sep 4 17:29:51.817547 kubelet[2570]: E0904 17:29:51.817517 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:51.817935 containerd[1450]: time="2024-09-04T17:29:51.817908150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-srwg8,Uid:cdf4b992-bdbd-4adf-91d0-2e3aa9087500,Namespace:kube-system,Attempt:0,}" Sep 4 17:29:52.260573 kubelet[2570]: E0904 17:29:52.260538 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:52.271122 kubelet[2570]: I0904 17:29:52.271051 2570 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-fv6mk" podStartSLOduration=7.025405331 podStartE2EDuration="16.270953939s" podCreationTimestamp="2024-09-04 17:29:36 +0000 UTC" firstStartedPulling="2024-09-04 17:29:37.322658089 +0000 UTC m=+15.231420305" lastFinishedPulling="2024-09-04 17:29:46.568206707 +0000 UTC m=+24.476968913" observedRunningTime="2024-09-04 17:29:52.270390993 +0000 UTC m=+30.179153199" watchObservedRunningTime="2024-09-04 17:29:52.270953939 +0000 UTC m=+30.179716156" Sep 4 17:29:53.262578 kubelet[2570]: E0904 17:29:53.262547 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:53.561411 systemd-networkd[1383]: cilium_host: Link UP Sep 4 17:29:53.562295 systemd-networkd[1383]: cilium_net: Link UP Sep 4 17:29:53.562510 systemd-networkd[1383]: cilium_net: Gained carrier Sep 4 17:29:53.562880 systemd-networkd[1383]: cilium_host: Gained carrier Sep 4 17:29:53.563210 systemd-networkd[1383]: cilium_net: Gained IPv6LL Sep 4 17:29:53.671947 systemd-networkd[1383]: cilium_vxlan: Link UP Sep 4 17:29:53.671956 systemd-networkd[1383]: cilium_vxlan: Gained carrier Sep 4 17:29:53.802983 systemd-networkd[1383]: cilium_host: Gained IPv6LL Sep 4 17:29:53.889838 kernel: NET: Registered PF_ALG protocol family Sep 4 17:29:54.264150 kubelet[2570]: E0904 17:29:54.264023 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:54.615110 systemd-networkd[1383]: lxc_health: Link UP Sep 4 17:29:54.626900 systemd-networkd[1383]: lxc_health: Gained carrier Sep 4 17:29:54.881528 systemd-networkd[1383]: lxca7d231d39445: Link UP Sep 4 17:29:54.887817 kernel: eth0: renamed from tmp6e2f7 Sep 4 17:29:54.897917 systemd-networkd[1383]: lxca7d231d39445: Gained carrier Sep 4 17:29:54.898187 systemd-networkd[1383]: lxc1215ad7df1f5: Link UP Sep 4 17:29:54.908828 kernel: eth0: renamed from tmpdfa0b Sep 4 17:29:54.920974 systemd-networkd[1383]: lxc1215ad7df1f5: Gained carrier Sep 4 17:29:55.057377 systemd[1]: Started sshd@10-10.0.0.44:22-10.0.0.1:46616.service - OpenSSH per-connection server daemon (10.0.0.1:46616). Sep 4 17:29:55.082932 systemd-networkd[1383]: cilium_vxlan: Gained IPv6LL Sep 4 17:29:55.099870 sshd[3799]: Accepted publickey for core from 10.0.0.1 port 46616 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:29:55.101567 sshd[3799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:29:55.106256 systemd-logind[1425]: New session 11 of user core. Sep 4 17:29:55.112984 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 17:29:55.266817 kubelet[2570]: E0904 17:29:55.266334 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:55.319128 sshd[3799]: pam_unix(sshd:session): session closed for user core Sep 4 17:29:55.321769 systemd-logind[1425]: Session 11 logged out. Waiting for processes to exit. Sep 4 17:29:55.322284 systemd[1]: sshd@10-10.0.0.44:22-10.0.0.1:46616.service: Deactivated successfully. Sep 4 17:29:55.324463 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 17:29:55.326769 systemd-logind[1425]: Removed session 11. Sep 4 17:29:56.107071 systemd-networkd[1383]: lxc_health: Gained IPv6LL Sep 4 17:29:56.170984 systemd-networkd[1383]: lxca7d231d39445: Gained IPv6LL Sep 4 17:29:56.747064 systemd-networkd[1383]: lxc1215ad7df1f5: Gained IPv6LL Sep 4 17:29:58.412972 containerd[1450]: time="2024-09-04T17:29:58.412730065Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:29:58.412972 containerd[1450]: time="2024-09-04T17:29:58.412955839Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
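
Between 17:29:53 and 17:29:56 systemd-networkd reports the Cilium datapath devices coming up: cilium_host, cilium_net and cilium_vxlan first, then lxc_health and the per-pod lxca7d231d39445 / lxc1215ad7df1f5 veths gaining carrier and IPv6 link-local addresses. One low-tech way to confirm the same state on the node is to read operstate from sysfs; the interface list below is copied from the log and the rest is an illustrative sketch:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        links := []string{
            "cilium_host", "cilium_net", "cilium_vxlan",
            "lxc_health", "lxca7d231d39445", "lxc1215ad7df1f5",
        }
        for _, name := range links {
            data, err := os.ReadFile("/sys/class/net/" + name + "/operstate")
            if err != nil {
                fmt.Printf("%-18s missing (%v)\n", name, err)
                continue
            }
            // Virtual devices often report "unknown" or "up" here once carrier is gained.
            fmt.Printf("%-18s %s\n", name, strings.TrimSpace(string(data)))
        }
    }
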
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:29:58.413398 containerd[1450]: time="2024-09-04T17:29:58.412975927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:29:58.413398 containerd[1450]: time="2024-09-04T17:29:58.413074822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:29:58.419820 containerd[1450]: time="2024-09-04T17:29:58.417954291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:29:58.419820 containerd[1450]: time="2024-09-04T17:29:58.418007741Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:29:58.419820 containerd[1450]: time="2024-09-04T17:29:58.418018121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:29:58.420773 containerd[1450]: time="2024-09-04T17:29:58.420084319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:29:58.441960 systemd[1]: Started cri-containerd-dfa0b336b6fd6e98506b121cf7ae8d7f8d2645d0174fe321ba180b1c5172f7d2.scope - libcontainer container dfa0b336b6fd6e98506b121cf7ae8d7f8d2645d0174fe321ba180b1c5172f7d2. Sep 4 17:29:58.446878 systemd[1]: Started cri-containerd-6e2f7bb15f91b9e23ee6959bba2599d338497f270a5a4307ec3f7e89b4b0f8c6.scope - libcontainer container 6e2f7bb15f91b9e23ee6959bba2599d338497f270a5a4307ec3f7e89b4b0f8c6. Sep 4 17:29:58.454850 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:29:58.459659 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:29:58.482226 containerd[1450]: time="2024-09-04T17:29:58.482184881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-x5tj6,Uid:ff22952e-bd7e-4096-8636-f6635f08c4cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"dfa0b336b6fd6e98506b121cf7ae8d7f8d2645d0174fe321ba180b1c5172f7d2\"" Sep 4 17:29:58.484652 kubelet[2570]: E0904 17:29:58.484159 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:58.487277 containerd[1450]: time="2024-09-04T17:29:58.486562518Z" level=info msg="CreateContainer within sandbox \"dfa0b336b6fd6e98506b121cf7ae8d7f8d2645d0174fe321ba180b1c5172f7d2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:29:58.489373 containerd[1450]: time="2024-09-04T17:29:58.489321176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-srwg8,Uid:cdf4b992-bdbd-4adf-91d0-2e3aa9087500,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e2f7bb15f91b9e23ee6959bba2599d338497f270a5a4307ec3f7e89b4b0f8c6\"" Sep 4 17:29:58.490405 kubelet[2570]: E0904 17:29:58.490323 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:58.492303 containerd[1450]: time="2024-09-04T17:29:58.492269780Z" level=info msg="CreateContainer within sandbox 
\"6e2f7bb15f91b9e23ee6959bba2599d338497f270a5a4307ec3f7e89b4b0f8c6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:29:58.698451 containerd[1450]: time="2024-09-04T17:29:58.698321587Z" level=info msg="CreateContainer within sandbox \"dfa0b336b6fd6e98506b121cf7ae8d7f8d2645d0174fe321ba180b1c5172f7d2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cc353fc9b65bec9cf387c9564228c7c9351b717401c87f6310fbf3d497b97dd4\"" Sep 4 17:29:58.699011 containerd[1450]: time="2024-09-04T17:29:58.698974102Z" level=info msg="StartContainer for \"cc353fc9b65bec9cf387c9564228c7c9351b717401c87f6310fbf3d497b97dd4\"" Sep 4 17:29:58.723825 containerd[1450]: time="2024-09-04T17:29:58.723743764Z" level=info msg="CreateContainer within sandbox \"6e2f7bb15f91b9e23ee6959bba2599d338497f270a5a4307ec3f7e89b4b0f8c6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1d3e08911345ce577eb6c058c0416e90574edb7181411c7bf494b49b7ebbaf83\"" Sep 4 17:29:58.724182 containerd[1450]: time="2024-09-04T17:29:58.724153283Z" level=info msg="StartContainer for \"1d3e08911345ce577eb6c058c0416e90574edb7181411c7bf494b49b7ebbaf83\"" Sep 4 17:29:58.725943 systemd[1]: Started cri-containerd-cc353fc9b65bec9cf387c9564228c7c9351b717401c87f6310fbf3d497b97dd4.scope - libcontainer container cc353fc9b65bec9cf387c9564228c7c9351b717401c87f6310fbf3d497b97dd4. Sep 4 17:29:58.751996 systemd[1]: Started cri-containerd-1d3e08911345ce577eb6c058c0416e90574edb7181411c7bf494b49b7ebbaf83.scope - libcontainer container 1d3e08911345ce577eb6c058c0416e90574edb7181411c7bf494b49b7ebbaf83. Sep 4 17:29:58.760891 containerd[1450]: time="2024-09-04T17:29:58.760852817Z" level=info msg="StartContainer for \"cc353fc9b65bec9cf387c9564228c7c9351b717401c87f6310fbf3d497b97dd4\" returns successfully" Sep 4 17:29:58.785579 containerd[1450]: time="2024-09-04T17:29:58.785522622Z" level=info msg="StartContainer for \"1d3e08911345ce577eb6c058c0416e90574edb7181411c7bf494b49b7ebbaf83\" returns successfully" Sep 4 17:29:59.273158 kubelet[2570]: E0904 17:29:59.273130 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:59.274504 kubelet[2570]: E0904 17:29:59.274393 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:29:59.281099 kubelet[2570]: I0904 17:29:59.280580 2570 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-srwg8" podStartSLOduration=22.280544995 podStartE2EDuration="22.280544995s" podCreationTimestamp="2024-09-04 17:29:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:29:59.280350219 +0000 UTC m=+37.189112435" watchObservedRunningTime="2024-09-04 17:29:59.280544995 +0000 UTC m=+37.189307211" Sep 4 17:29:59.300322 kubelet[2570]: I0904 17:29:59.298652 2570 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-x5tj6" podStartSLOduration=22.298606684 podStartE2EDuration="22.298606684s" podCreationTimestamp="2024-09-04 17:29:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:29:59.298112167 +0000 UTC m=+37.206874403" 
watchObservedRunningTime="2024-09-04 17:29:59.298606684 +0000 UTC m=+37.207368900" Sep 4 17:29:59.673776 kubelet[2570]: I0904 17:29:59.673724 2570 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 17:29:59.674640 kubelet[2570]: E0904 17:29:59.674605 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:00.276761 kubelet[2570]: E0904 17:30:00.276630 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:00.276940 kubelet[2570]: E0904 17:30:00.276909 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:00.277386 kubelet[2570]: E0904 17:30:00.277361 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:00.332560 systemd[1]: Started sshd@11-10.0.0.44:22-10.0.0.1:41868.service - OpenSSH per-connection server daemon (10.0.0.1:41868). Sep 4 17:30:00.378482 sshd[3999]: Accepted publickey for core from 10.0.0.1 port 41868 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:30:00.380445 sshd[3999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:30:00.384691 systemd-logind[1425]: New session 12 of user core. Sep 4 17:30:00.393923 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 17:30:00.556739 sshd[3999]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:00.561872 systemd[1]: sshd@11-10.0.0.44:22-10.0.0.1:41868.service: Deactivated successfully. Sep 4 17:30:00.564170 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 17:30:00.564899 systemd-logind[1425]: Session 12 logged out. Waiting for processes to exit. Sep 4 17:30:00.565941 systemd-logind[1425]: Removed session 12. Sep 4 17:30:01.278831 kubelet[2570]: E0904 17:30:01.278717 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:01.279503 kubelet[2570]: E0904 17:30:01.278954 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:05.572584 systemd[1]: Started sshd@12-10.0.0.44:22-10.0.0.1:41882.service - OpenSSH per-connection server daemon (10.0.0.1:41882). Sep 4 17:30:05.610004 sshd[4016]: Accepted publickey for core from 10.0.0.1 port 41882 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:30:05.611832 sshd[4016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:30:05.617006 systemd-logind[1425]: New session 13 of user core. Sep 4 17:30:05.628154 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 17:30:05.755403 sshd[4016]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:05.766294 systemd[1]: sshd@12-10.0.0.44:22-10.0.0.1:41882.service: Deactivated successfully. Sep 4 17:30:05.768724 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 17:30:05.770594 systemd-logind[1425]: Session 13 logged out. 
Waiting for processes to exit. Sep 4 17:30:05.778451 systemd[1]: Started sshd@13-10.0.0.44:22-10.0.0.1:41892.service - OpenSSH per-connection server daemon (10.0.0.1:41892). Sep 4 17:30:05.779602 systemd-logind[1425]: Removed session 13. Sep 4 17:30:05.808123 sshd[4031]: Accepted publickey for core from 10.0.0.1 port 41892 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:30:05.810093 sshd[4031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:30:05.815298 systemd-logind[1425]: New session 14 of user core. Sep 4 17:30:05.828127 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 17:30:06.005702 sshd[4031]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:06.018119 systemd[1]: sshd@13-10.0.0.44:22-10.0.0.1:41892.service: Deactivated successfully. Sep 4 17:30:06.021515 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 17:30:06.023512 systemd-logind[1425]: Session 14 logged out. Waiting for processes to exit. Sep 4 17:30:06.032222 systemd[1]: Started sshd@14-10.0.0.44:22-10.0.0.1:41896.service - OpenSSH per-connection server daemon (10.0.0.1:41896). Sep 4 17:30:06.032847 systemd-logind[1425]: Removed session 14. Sep 4 17:30:06.068059 sshd[4043]: Accepted publickey for core from 10.0.0.1 port 41896 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:30:06.069906 sshd[4043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:30:06.074137 systemd-logind[1425]: New session 15 of user core. Sep 4 17:30:06.086052 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 17:30:06.217991 sshd[4043]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:06.222072 systemd[1]: sshd@14-10.0.0.44:22-10.0.0.1:41896.service: Deactivated successfully. Sep 4 17:30:06.224357 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 17:30:06.225101 systemd-logind[1425]: Session 15 logged out. Waiting for processes to exit. Sep 4 17:30:06.226204 systemd-logind[1425]: Removed session 15. Sep 4 17:30:11.233121 systemd[1]: Started sshd@15-10.0.0.44:22-10.0.0.1:39856.service - OpenSSH per-connection server daemon (10.0.0.1:39856). Sep 4 17:30:11.264678 sshd[4059]: Accepted publickey for core from 10.0.0.1 port 39856 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:30:11.266327 sshd[4059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:30:11.270737 systemd-logind[1425]: New session 16 of user core. Sep 4 17:30:11.278981 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 4 17:30:11.386762 sshd[4059]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:11.391299 systemd[1]: sshd@15-10.0.0.44:22-10.0.0.1:39856.service: Deactivated successfully. Sep 4 17:30:11.393581 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 17:30:11.394222 systemd-logind[1425]: Session 16 logged out. Waiting for processes to exit. Sep 4 17:30:11.395122 systemd-logind[1425]: Removed session 16. Sep 4 17:30:16.401850 systemd[1]: Started sshd@16-10.0.0.44:22-10.0.0.1:39866.service - OpenSSH per-connection server daemon (10.0.0.1:39866). Sep 4 17:30:16.433546 sshd[4073]: Accepted publickey for core from 10.0.0.1 port 39866 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:30:16.435264 sshd[4073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:30:16.439263 systemd-logind[1425]: New session 17 of user core. 
Sep 4 17:30:16.444932 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 17:30:16.549735 sshd[4073]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:16.559727 systemd[1]: sshd@16-10.0.0.44:22-10.0.0.1:39866.service: Deactivated successfully. Sep 4 17:30:16.561675 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 17:30:16.563353 systemd-logind[1425]: Session 17 logged out. Waiting for processes to exit. Sep 4 17:30:16.570095 systemd[1]: Started sshd@17-10.0.0.44:22-10.0.0.1:39872.service - OpenSSH per-connection server daemon (10.0.0.1:39872). Sep 4 17:30:16.571155 systemd-logind[1425]: Removed session 17. Sep 4 17:30:16.597967 sshd[4087]: Accepted publickey for core from 10.0.0.1 port 39872 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:30:16.599759 sshd[4087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:30:16.603560 systemd-logind[1425]: New session 18 of user core. Sep 4 17:30:16.612923 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 4 17:30:16.844490 sshd[4087]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:16.859821 systemd[1]: sshd@17-10.0.0.44:22-10.0.0.1:39872.service: Deactivated successfully. Sep 4 17:30:16.861778 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 17:30:16.863564 systemd-logind[1425]: Session 18 logged out. Waiting for processes to exit. Sep 4 17:30:16.872058 systemd[1]: Started sshd@18-10.0.0.44:22-10.0.0.1:39876.service - OpenSSH per-connection server daemon (10.0.0.1:39876). Sep 4 17:30:16.873067 systemd-logind[1425]: Removed session 18. Sep 4 17:30:16.903910 sshd[4099]: Accepted publickey for core from 10.0.0.1 port 39876 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:30:16.905666 sshd[4099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:30:16.910129 systemd-logind[1425]: New session 19 of user core. Sep 4 17:30:16.919912 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 4 17:30:18.148864 sshd[4099]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:18.157891 systemd[1]: sshd@18-10.0.0.44:22-10.0.0.1:39876.service: Deactivated successfully. Sep 4 17:30:18.160915 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 17:30:18.164329 systemd-logind[1425]: Session 19 logged out. Waiting for processes to exit. Sep 4 17:30:18.175760 systemd[1]: Started sshd@19-10.0.0.44:22-10.0.0.1:39638.service - OpenSSH per-connection server daemon (10.0.0.1:39638). Sep 4 17:30:18.176797 systemd-logind[1425]: Removed session 19. Sep 4 17:30:18.207205 sshd[4120]: Accepted publickey for core from 10.0.0.1 port 39638 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:30:18.209275 sshd[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:30:18.213508 systemd-logind[1425]: New session 20 of user core. Sep 4 17:30:18.221923 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 4 17:30:18.518950 sshd[4120]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:18.528307 systemd[1]: sshd@19-10.0.0.44:22-10.0.0.1:39638.service: Deactivated successfully. Sep 4 17:30:18.530171 systemd[1]: session-20.scope: Deactivated successfully. Sep 4 17:30:18.531802 systemd-logind[1425]: Session 20 logged out. Waiting for processes to exit. 
Sep 4 17:30:18.540165 systemd[1]: Started sshd@20-10.0.0.44:22-10.0.0.1:39652.service - OpenSSH per-connection server daemon (10.0.0.1:39652). Sep 4 17:30:18.541594 systemd-logind[1425]: Removed session 20. Sep 4 17:30:18.567581 sshd[4132]: Accepted publickey for core from 10.0.0.1 port 39652 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:30:18.569244 sshd[4132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:30:18.573269 systemd-logind[1425]: New session 21 of user core. Sep 4 17:30:18.582924 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 4 17:30:18.692965 sshd[4132]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:18.696772 systemd[1]: sshd@20-10.0.0.44:22-10.0.0.1:39652.service: Deactivated successfully. Sep 4 17:30:18.698943 systemd[1]: session-21.scope: Deactivated successfully. Sep 4 17:30:18.699655 systemd-logind[1425]: Session 21 logged out. Waiting for processes to exit. Sep 4 17:30:18.700614 systemd-logind[1425]: Removed session 21. Sep 4 17:30:23.705169 systemd[1]: Started sshd@21-10.0.0.44:22-10.0.0.1:39656.service - OpenSSH per-connection server daemon (10.0.0.1:39656). Sep 4 17:30:23.736240 sshd[4148]: Accepted publickey for core from 10.0.0.1 port 39656 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:30:23.737710 sshd[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:30:23.741907 systemd-logind[1425]: New session 22 of user core. Sep 4 17:30:23.751920 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 4 17:30:23.874303 sshd[4148]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:23.878777 systemd[1]: sshd@21-10.0.0.44:22-10.0.0.1:39656.service: Deactivated successfully. Sep 4 17:30:23.881212 systemd[1]: session-22.scope: Deactivated successfully. Sep 4 17:30:23.882010 systemd-logind[1425]: Session 22 logged out. Waiting for processes to exit. Sep 4 17:30:23.883072 systemd-logind[1425]: Removed session 22. Sep 4 17:30:28.885916 systemd[1]: Started sshd@22-10.0.0.44:22-10.0.0.1:39674.service - OpenSSH per-connection server daemon (10.0.0.1:39674). Sep 4 17:30:28.916816 sshd[4165]: Accepted publickey for core from 10.0.0.1 port 39674 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:30:28.918828 sshd[4165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:30:28.922887 systemd-logind[1425]: New session 23 of user core. Sep 4 17:30:28.932929 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 4 17:30:29.042489 sshd[4165]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:29.047125 systemd[1]: sshd@22-10.0.0.44:22-10.0.0.1:39674.service: Deactivated successfully. Sep 4 17:30:29.049612 systemd[1]: session-23.scope: Deactivated successfully. Sep 4 17:30:29.050347 systemd-logind[1425]: Session 23 logged out. Waiting for processes to exit. Sep 4 17:30:29.051230 systemd-logind[1425]: Removed session 23. Sep 4 17:30:34.054047 systemd[1]: Started sshd@23-10.0.0.44:22-10.0.0.1:39680.service - OpenSSH per-connection server daemon (10.0.0.1:39680). Sep 4 17:30:34.085253 sshd[4179]: Accepted publickey for core from 10.0.0.1 port 39680 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:30:34.086721 sshd[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:30:34.090897 systemd-logind[1425]: New session 24 of user core. 
Sep 4 17:30:34.105935 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 4 17:30:34.339050 sshd[4179]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:34.343729 systemd[1]: sshd@23-10.0.0.44:22-10.0.0.1:39680.service: Deactivated successfully. Sep 4 17:30:34.346042 systemd[1]: session-24.scope: Deactivated successfully. Sep 4 17:30:34.346931 systemd-logind[1425]: Session 24 logged out. Waiting for processes to exit. Sep 4 17:30:34.347969 systemd-logind[1425]: Removed session 24. Sep 4 17:30:39.351176 systemd[1]: Started sshd@24-10.0.0.44:22-10.0.0.1:41368.service - OpenSSH per-connection server daemon (10.0.0.1:41368). Sep 4 17:30:39.383612 sshd[4195]: Accepted publickey for core from 10.0.0.1 port 41368 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:30:39.385218 sshd[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:30:39.389136 systemd-logind[1425]: New session 25 of user core. Sep 4 17:30:39.397952 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 4 17:30:39.504053 sshd[4195]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:39.528958 systemd[1]: sshd@24-10.0.0.44:22-10.0.0.1:41368.service: Deactivated successfully. Sep 4 17:30:39.530996 systemd[1]: session-25.scope: Deactivated successfully. Sep 4 17:30:39.532704 systemd-logind[1425]: Session 25 logged out. Waiting for processes to exit. Sep 4 17:30:39.539030 systemd[1]: Started sshd@25-10.0.0.44:22-10.0.0.1:41380.service - OpenSSH per-connection server daemon (10.0.0.1:41380). Sep 4 17:30:39.540039 systemd-logind[1425]: Removed session 25. Sep 4 17:30:39.565756 sshd[4209]: Accepted publickey for core from 10.0.0.1 port 41380 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:30:39.567321 sshd[4209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:30:39.571380 systemd-logind[1425]: New session 26 of user core. Sep 4 17:30:39.580911 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 4 17:30:41.101532 containerd[1450]: time="2024-09-04T17:30:41.101484817Z" level=info msg="StopContainer for \"c1ab2cc1cce282ead9893c270548006000afdab456d1d125213e1c955b92aede\" with timeout 30 (s)" Sep 4 17:30:41.102308 containerd[1450]: time="2024-09-04T17:30:41.101940947Z" level=info msg="Stop container \"c1ab2cc1cce282ead9893c270548006000afdab456d1d125213e1c955b92aede\" with signal terminated" Sep 4 17:30:41.114280 systemd[1]: cri-containerd-c1ab2cc1cce282ead9893c270548006000afdab456d1d125213e1c955b92aede.scope: Deactivated successfully. Sep 4 17:30:41.132586 containerd[1450]: time="2024-09-04T17:30:41.132458529Z" level=info msg="StopContainer for \"e06a22fefce2f0924d295cc47cbf2beb1e1c814e318d18552ca0da7d3913dd0a\" with timeout 2 (s)" Sep 4 17:30:41.132979 containerd[1450]: time="2024-09-04T17:30:41.132955878Z" level=info msg="Stop container \"e06a22fefce2f0924d295cc47cbf2beb1e1c814e318d18552ca0da7d3913dd0a\" with signal terminated" Sep 4 17:30:41.133925 containerd[1450]: time="2024-09-04T17:30:41.133855523Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 17:30:41.136931 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1ab2cc1cce282ead9893c270548006000afdab456d1d125213e1c955b92aede-rootfs.mount: Deactivated successfully. 
Sep 4 17:30:41.141452 systemd-networkd[1383]: lxc_health: Link DOWN Sep 4 17:30:41.141460 systemd-networkd[1383]: lxc_health: Lost carrier Sep 4 17:30:41.154055 containerd[1450]: time="2024-09-04T17:30:41.153981888Z" level=info msg="shim disconnected" id=c1ab2cc1cce282ead9893c270548006000afdab456d1d125213e1c955b92aede namespace=k8s.io Sep 4 17:30:41.154235 containerd[1450]: time="2024-09-04T17:30:41.154053675Z" level=warning msg="cleaning up after shim disconnected" id=c1ab2cc1cce282ead9893c270548006000afdab456d1d125213e1c955b92aede namespace=k8s.io Sep 4 17:30:41.154235 containerd[1450]: time="2024-09-04T17:30:41.154066950Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:30:41.169104 systemd[1]: cri-containerd-e06a22fefce2f0924d295cc47cbf2beb1e1c814e318d18552ca0da7d3913dd0a.scope: Deactivated successfully. Sep 4 17:30:41.169389 systemd[1]: cri-containerd-e06a22fefce2f0924d295cc47cbf2beb1e1c814e318d18552ca0da7d3913dd0a.scope: Consumed 7.008s CPU time. Sep 4 17:30:41.171899 containerd[1450]: time="2024-09-04T17:30:41.171852783Z" level=info msg="StopContainer for \"c1ab2cc1cce282ead9893c270548006000afdab456d1d125213e1c955b92aede\" returns successfully" Sep 4 17:30:41.172396 containerd[1450]: time="2024-09-04T17:30:41.172371622Z" level=info msg="StopPodSandbox for \"d07f6e91d28fc758aaf7aaadd16c05c999b4899ab303dffedd76ba5f230a5ef5\"" Sep 4 17:30:41.172449 containerd[1450]: time="2024-09-04T17:30:41.172399796Z" level=info msg="Container to stop \"c1ab2cc1cce282ead9893c270548006000afdab456d1d125213e1c955b92aede\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:30:41.174350 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d07f6e91d28fc758aaf7aaadd16c05c999b4899ab303dffedd76ba5f230a5ef5-shm.mount: Deactivated successfully. Sep 4 17:30:41.180907 systemd[1]: cri-containerd-d07f6e91d28fc758aaf7aaadd16c05c999b4899ab303dffedd76ba5f230a5ef5.scope: Deactivated successfully. Sep 4 17:30:41.190835 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e06a22fefce2f0924d295cc47cbf2beb1e1c814e318d18552ca0da7d3913dd0a-rootfs.mount: Deactivated successfully. 
Sep 4 17:30:41.216136 containerd[1450]: time="2024-09-04T17:30:41.216067175Z" level=info msg="shim disconnected" id=d07f6e91d28fc758aaf7aaadd16c05c999b4899ab303dffedd76ba5f230a5ef5 namespace=k8s.io Sep 4 17:30:41.216455 containerd[1450]: time="2024-09-04T17:30:41.216398818Z" level=warning msg="cleaning up after shim disconnected" id=d07f6e91d28fc758aaf7aaadd16c05c999b4899ab303dffedd76ba5f230a5ef5 namespace=k8s.io Sep 4 17:30:41.216455 containerd[1450]: time="2024-09-04T17:30:41.216418044Z" level=info msg="shim disconnected" id=e06a22fefce2f0924d295cc47cbf2beb1e1c814e318d18552ca0da7d3913dd0a namespace=k8s.io Sep 4 17:30:41.216824 containerd[1450]: time="2024-09-04T17:30:41.216452559Z" level=warning msg="cleaning up after shim disconnected" id=e06a22fefce2f0924d295cc47cbf2beb1e1c814e318d18552ca0da7d3913dd0a namespace=k8s.io Sep 4 17:30:41.216824 containerd[1450]: time="2024-09-04T17:30:41.216464533Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:30:41.216824 containerd[1450]: time="2024-09-04T17:30:41.216419857Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:30:41.235942 containerd[1450]: time="2024-09-04T17:30:41.235891123Z" level=info msg="StopContainer for \"e06a22fefce2f0924d295cc47cbf2beb1e1c814e318d18552ca0da7d3913dd0a\" returns successfully" Sep 4 17:30:41.236480 containerd[1450]: time="2024-09-04T17:30:41.236455689Z" level=info msg="StopPodSandbox for \"2254547068ae61eaddcc89f7104525494c1a5beb9affcb97a3783a3c02863672\"" Sep 4 17:30:41.236543 containerd[1450]: time="2024-09-04T17:30:41.236490606Z" level=info msg="Container to stop \"08b558dfd3c529abdb7a3a778b6b91ac8a629fc96fcb7896bf850e83dad5df48\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:30:41.236543 containerd[1450]: time="2024-09-04T17:30:41.236506617Z" level=info msg="Container to stop \"d91501fc9b9e1ed68b024c16bc9f323a495adff711e3095aeb74947922f183c1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:30:41.236543 containerd[1450]: time="2024-09-04T17:30:41.236520112Z" level=info msg="Container to stop \"4c9011d1c2a90d7ebdf9376854f00729239fae77f34cbb84fe130fb84a939f62\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:30:41.236543 containerd[1450]: time="2024-09-04T17:30:41.236532636Z" level=info msg="Container to stop \"dbafc0fd16f0a99757b9060e0b5495fb296199b3cbc299b133c5aaa024015f33\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:30:41.236687 containerd[1450]: time="2024-09-04T17:30:41.236544879Z" level=info msg="Container to stop \"e06a22fefce2f0924d295cc47cbf2beb1e1c814e318d18552ca0da7d3913dd0a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:30:41.243677 systemd[1]: cri-containerd-2254547068ae61eaddcc89f7104525494c1a5beb9affcb97a3783a3c02863672.scope: Deactivated successfully. 
Sep 4 17:30:41.251433 containerd[1450]: time="2024-09-04T17:30:41.251381862Z" level=info msg="TearDown network for sandbox \"d07f6e91d28fc758aaf7aaadd16c05c999b4899ab303dffedd76ba5f230a5ef5\" successfully" Sep 4 17:30:41.251433 containerd[1450]: time="2024-09-04T17:30:41.251427168Z" level=info msg="StopPodSandbox for \"d07f6e91d28fc758aaf7aaadd16c05c999b4899ab303dffedd76ba5f230a5ef5\" returns successfully" Sep 4 17:30:41.318393 containerd[1450]: time="2024-09-04T17:30:41.318252541Z" level=info msg="shim disconnected" id=2254547068ae61eaddcc89f7104525494c1a5beb9affcb97a3783a3c02863672 namespace=k8s.io Sep 4 17:30:41.318393 containerd[1450]: time="2024-09-04T17:30:41.318322885Z" level=warning msg="cleaning up after shim disconnected" id=2254547068ae61eaddcc89f7104525494c1a5beb9affcb97a3783a3c02863672 namespace=k8s.io Sep 4 17:30:41.318393 containerd[1450]: time="2024-09-04T17:30:41.318334477Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:30:41.341124 containerd[1450]: time="2024-09-04T17:30:41.341064004Z" level=info msg="TearDown network for sandbox \"2254547068ae61eaddcc89f7104525494c1a5beb9affcb97a3783a3c02863672\" successfully" Sep 4 17:30:41.341124 containerd[1450]: time="2024-09-04T17:30:41.341113058Z" level=info msg="StopPodSandbox for \"2254547068ae61eaddcc89f7104525494c1a5beb9affcb97a3783a3c02863672\" returns successfully" Sep 4 17:30:41.347041 kubelet[2570]: I0904 17:30:41.347009 2570 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvgq9\" (UniqueName: \"kubernetes.io/projected/302600ec-ac4e-4a62-96ba-076e0959b549-kube-api-access-gvgq9\") pod \"302600ec-ac4e-4a62-96ba-076e0959b549\" (UID: \"302600ec-ac4e-4a62-96ba-076e0959b549\") " Sep 4 17:30:41.347041 kubelet[2570]: I0904 17:30:41.347041 2570 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/302600ec-ac4e-4a62-96ba-076e0959b549-cilium-config-path\") pod \"302600ec-ac4e-4a62-96ba-076e0959b549\" (UID: \"302600ec-ac4e-4a62-96ba-076e0959b549\") " Sep 4 17:30:41.350528 kubelet[2570]: I0904 17:30:41.350483 2570 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/302600ec-ac4e-4a62-96ba-076e0959b549-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "302600ec-ac4e-4a62-96ba-076e0959b549" (UID: "302600ec-ac4e-4a62-96ba-076e0959b549"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 4 17:30:41.350846 kubelet[2570]: I0904 17:30:41.350826 2570 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/302600ec-ac4e-4a62-96ba-076e0959b549-kube-api-access-gvgq9" (OuterVolumeSpecName: "kube-api-access-gvgq9") pod "302600ec-ac4e-4a62-96ba-076e0959b549" (UID: "302600ec-ac4e-4a62-96ba-076e0959b549"). InnerVolumeSpecName "kube-api-access-gvgq9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 4 17:30:41.353117 kubelet[2570]: I0904 17:30:41.353052 2570 scope.go:117] "RemoveContainer" containerID="e06a22fefce2f0924d295cc47cbf2beb1e1c814e318d18552ca0da7d3913dd0a" Sep 4 17:30:41.355325 containerd[1450]: time="2024-09-04T17:30:41.355060966Z" level=info msg="RemoveContainer for \"e06a22fefce2f0924d295cc47cbf2beb1e1c814e318d18552ca0da7d3913dd0a\"" Sep 4 17:30:41.357300 systemd[1]: Removed slice kubepods-besteffort-pod302600ec_ac4e_4a62_96ba_076e0959b549.slice - libcontainer container kubepods-besteffort-pod302600ec_ac4e_4a62_96ba_076e0959b549.slice. Sep 4 17:30:41.448006 kubelet[2570]: I0904 17:30:41.447962 2570 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/81a99668-d619-41c4-b917-6185f65d0a91-cilium-config-path\") pod \"81a99668-d619-41c4-b917-6185f65d0a91\" (UID: \"81a99668-d619-41c4-b917-6185f65d0a91\") " Sep 4 17:30:41.448006 kubelet[2570]: I0904 17:30:41.448002 2570 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/81a99668-d619-41c4-b917-6185f65d0a91-hostproc\") pod \"81a99668-d619-41c4-b917-6185f65d0a91\" (UID: \"81a99668-d619-41c4-b917-6185f65d0a91\") " Sep 4 17:30:41.448006 kubelet[2570]: I0904 17:30:41.448019 2570 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/81a99668-d619-41c4-b917-6185f65d0a91-host-proc-sys-kernel\") pod \"81a99668-d619-41c4-b917-6185f65d0a91\" (UID: \"81a99668-d619-41c4-b917-6185f65d0a91\") " Sep 4 17:30:41.448006 kubelet[2570]: I0904 17:30:41.448036 2570 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/81a99668-d619-41c4-b917-6185f65d0a91-lib-modules\") pod \"81a99668-d619-41c4-b917-6185f65d0a91\" (UID: \"81a99668-d619-41c4-b917-6185f65d0a91\") " Sep 4 17:30:41.448307 kubelet[2570]: I0904 17:30:41.448054 2570 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/81a99668-d619-41c4-b917-6185f65d0a91-cni-path\") pod \"81a99668-d619-41c4-b917-6185f65d0a91\" (UID: \"81a99668-d619-41c4-b917-6185f65d0a91\") " Sep 4 17:30:41.448307 kubelet[2570]: I0904 17:30:41.448069 2570 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/81a99668-d619-41c4-b917-6185f65d0a91-etc-cni-netd\") pod \"81a99668-d619-41c4-b917-6185f65d0a91\" (UID: \"81a99668-d619-41c4-b917-6185f65d0a91\") " Sep 4 17:30:41.448307 kubelet[2570]: I0904 17:30:41.448086 2570 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/81a99668-d619-41c4-b917-6185f65d0a91-cilium-run\") pod \"81a99668-d619-41c4-b917-6185f65d0a91\" (UID: \"81a99668-d619-41c4-b917-6185f65d0a91\") " Sep 4 17:30:41.448307 kubelet[2570]: I0904 17:30:41.448105 2570 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/81a99668-d619-41c4-b917-6185f65d0a91-hubble-tls\") pod \"81a99668-d619-41c4-b917-6185f65d0a91\" (UID: \"81a99668-d619-41c4-b917-6185f65d0a91\") " Sep 4 17:30:41.448307 kubelet[2570]: I0904 17:30:41.448126 2570 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-krjml\" (UniqueName: \"kubernetes.io/projected/81a99668-d619-41c4-b917-6185f65d0a91-kube-api-access-krjml\") pod \"81a99668-d619-41c4-b917-6185f65d0a91\" (UID: \"81a99668-d619-41c4-b917-6185f65d0a91\") " Sep 4 17:30:41.448307 kubelet[2570]: I0904 17:30:41.448141 2570 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/81a99668-d619-41c4-b917-6185f65d0a91-xtables-lock\") pod \"81a99668-d619-41c4-b917-6185f65d0a91\" (UID: \"81a99668-d619-41c4-b917-6185f65d0a91\") " Sep 4 17:30:41.448520 kubelet[2570]: I0904 17:30:41.448134 2570 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81a99668-d619-41c4-b917-6185f65d0a91-hostproc" (OuterVolumeSpecName: "hostproc") pod "81a99668-d619-41c4-b917-6185f65d0a91" (UID: "81a99668-d619-41c4-b917-6185f65d0a91"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:30:41.448520 kubelet[2570]: I0904 17:30:41.448156 2570 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81a99668-d619-41c4-b917-6185f65d0a91-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "81a99668-d619-41c4-b917-6185f65d0a91" (UID: "81a99668-d619-41c4-b917-6185f65d0a91"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:30:41.448520 kubelet[2570]: I0904 17:30:41.448197 2570 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81a99668-d619-41c4-b917-6185f65d0a91-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "81a99668-d619-41c4-b917-6185f65d0a91" (UID: "81a99668-d619-41c4-b917-6185f65d0a91"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:30:41.448520 kubelet[2570]: I0904 17:30:41.448161 2570 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/81a99668-d619-41c4-b917-6185f65d0a91-clustermesh-secrets\") pod \"81a99668-d619-41c4-b917-6185f65d0a91\" (UID: \"81a99668-d619-41c4-b917-6185f65d0a91\") " Sep 4 17:30:41.448520 kubelet[2570]: I0904 17:30:41.448271 2570 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/81a99668-d619-41c4-b917-6185f65d0a91-host-proc-sys-net\") pod \"81a99668-d619-41c4-b917-6185f65d0a91\" (UID: \"81a99668-d619-41c4-b917-6185f65d0a91\") " Sep 4 17:30:41.448698 kubelet[2570]: I0904 17:30:41.448321 2570 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/81a99668-d619-41c4-b917-6185f65d0a91-bpf-maps\") pod \"81a99668-d619-41c4-b917-6185f65d0a91\" (UID: \"81a99668-d619-41c4-b917-6185f65d0a91\") " Sep 4 17:30:41.448698 kubelet[2570]: I0904 17:30:41.448348 2570 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/81a99668-d619-41c4-b917-6185f65d0a91-cilium-cgroup\") pod \"81a99668-d619-41c4-b917-6185f65d0a91\" (UID: \"81a99668-d619-41c4-b917-6185f65d0a91\") " Sep 4 17:30:41.448698 kubelet[2570]: I0904 17:30:41.448407 2570 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-gvgq9\" (UniqueName: \"kubernetes.io/projected/302600ec-ac4e-4a62-96ba-076e0959b549-kube-api-access-gvgq9\") on node \"localhost\" DevicePath \"\"" Sep 4 17:30:41.448698 kubelet[2570]: I0904 17:30:41.448424 2570 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/81a99668-d619-41c4-b917-6185f65d0a91-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 4 17:30:41.448698 kubelet[2570]: I0904 17:30:41.448438 2570 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/81a99668-d619-41c4-b917-6185f65d0a91-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 4 17:30:41.448698 kubelet[2570]: I0904 17:30:41.448452 2570 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/81a99668-d619-41c4-b917-6185f65d0a91-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 4 17:30:41.448698 kubelet[2570]: I0904 17:30:41.448479 2570 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/302600ec-ac4e-4a62-96ba-076e0959b549-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 4 17:30:41.448962 kubelet[2570]: I0904 17:30:41.448501 2570 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81a99668-d619-41c4-b917-6185f65d0a91-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "81a99668-d619-41c4-b917-6185f65d0a91" (UID: "81a99668-d619-41c4-b917-6185f65d0a91"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:30:41.448962 kubelet[2570]: I0904 17:30:41.448527 2570 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81a99668-d619-41c4-b917-6185f65d0a91-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "81a99668-d619-41c4-b917-6185f65d0a91" (UID: "81a99668-d619-41c4-b917-6185f65d0a91"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:30:41.448962 kubelet[2570]: I0904 17:30:41.448549 2570 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81a99668-d619-41c4-b917-6185f65d0a91-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "81a99668-d619-41c4-b917-6185f65d0a91" (UID: "81a99668-d619-41c4-b917-6185f65d0a91"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:30:41.448962 kubelet[2570]: I0904 17:30:41.448573 2570 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81a99668-d619-41c4-b917-6185f65d0a91-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "81a99668-d619-41c4-b917-6185f65d0a91" (UID: "81a99668-d619-41c4-b917-6185f65d0a91"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:30:41.448962 kubelet[2570]: I0904 17:30:41.448606 2570 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81a99668-d619-41c4-b917-6185f65d0a91-cni-path" (OuterVolumeSpecName: "cni-path") pod "81a99668-d619-41c4-b917-6185f65d0a91" (UID: "81a99668-d619-41c4-b917-6185f65d0a91"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:30:41.449129 kubelet[2570]: I0904 17:30:41.448628 2570 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81a99668-d619-41c4-b917-6185f65d0a91-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "81a99668-d619-41c4-b917-6185f65d0a91" (UID: "81a99668-d619-41c4-b917-6185f65d0a91"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:30:41.451423 kubelet[2570]: I0904 17:30:41.451338 2570 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81a99668-d619-41c4-b917-6185f65d0a91-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "81a99668-d619-41c4-b917-6185f65d0a91" (UID: "81a99668-d619-41c4-b917-6185f65d0a91"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 4 17:30:41.451423 kubelet[2570]: I0904 17:30:41.451396 2570 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81a99668-d619-41c4-b917-6185f65d0a91-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "81a99668-d619-41c4-b917-6185f65d0a91" (UID: "81a99668-d619-41c4-b917-6185f65d0a91"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:30:41.452021 kubelet[2570]: I0904 17:30:41.451987 2570 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81a99668-d619-41c4-b917-6185f65d0a91-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "81a99668-d619-41c4-b917-6185f65d0a91" (UID: "81a99668-d619-41c4-b917-6185f65d0a91"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 4 17:30:41.452021 kubelet[2570]: I0904 17:30:41.452011 2570 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81a99668-d619-41c4-b917-6185f65d0a91-kube-api-access-krjml" (OuterVolumeSpecName: "kube-api-access-krjml") pod "81a99668-d619-41c4-b917-6185f65d0a91" (UID: "81a99668-d619-41c4-b917-6185f65d0a91"). InnerVolumeSpecName "kube-api-access-krjml". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 4 17:30:41.452410 kubelet[2570]: I0904 17:30:41.452379 2570 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81a99668-d619-41c4-b917-6185f65d0a91-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "81a99668-d619-41c4-b917-6185f65d0a91" (UID: "81a99668-d619-41c4-b917-6185f65d0a91"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 4 17:30:41.455317 containerd[1450]: time="2024-09-04T17:30:41.455273190Z" level=info msg="RemoveContainer for \"e06a22fefce2f0924d295cc47cbf2beb1e1c814e318d18552ca0da7d3913dd0a\" returns successfully" Sep 4 17:30:41.455557 kubelet[2570]: I0904 17:30:41.455536 2570 scope.go:117] "RemoveContainer" containerID="08b558dfd3c529abdb7a3a778b6b91ac8a629fc96fcb7896bf850e83dad5df48" Sep 4 17:30:41.456938 containerd[1450]: time="2024-09-04T17:30:41.456842752Z" level=info msg="RemoveContainer for \"08b558dfd3c529abdb7a3a778b6b91ac8a629fc96fcb7896bf850e83dad5df48\"" Sep 4 17:30:41.479050 containerd[1450]: time="2024-09-04T17:30:41.478992294Z" level=info msg="RemoveContainer for \"08b558dfd3c529abdb7a3a778b6b91ac8a629fc96fcb7896bf850e83dad5df48\" returns successfully" Sep 4 17:30:41.479222 kubelet[2570]: I0904 17:30:41.479184 2570 scope.go:117] "RemoveContainer" containerID="4c9011d1c2a90d7ebdf9376854f00729239fae77f34cbb84fe130fb84a939f62" Sep 4 17:30:41.480180 containerd[1450]: time="2024-09-04T17:30:41.480135252Z" level=info msg="RemoveContainer for \"4c9011d1c2a90d7ebdf9376854f00729239fae77f34cbb84fe130fb84a939f62\"" Sep 4 17:30:41.492833 containerd[1450]: time="2024-09-04T17:30:41.492781689Z" level=info msg="RemoveContainer for \"4c9011d1c2a90d7ebdf9376854f00729239fae77f34cbb84fe130fb84a939f62\" returns successfully" Sep 4 17:30:41.493055 kubelet[2570]: I0904 17:30:41.493012 2570 scope.go:117] "RemoveContainer" containerID="dbafc0fd16f0a99757b9060e0b5495fb296199b3cbc299b133c5aaa024015f33" Sep 4 17:30:41.493901 containerd[1450]: time="2024-09-04T17:30:41.493872048Z" level=info msg="RemoveContainer for \"dbafc0fd16f0a99757b9060e0b5495fb296199b3cbc299b133c5aaa024015f33\"" Sep 4 17:30:41.500128 containerd[1450]: time="2024-09-04T17:30:41.500089209Z" level=info msg="RemoveContainer for \"dbafc0fd16f0a99757b9060e0b5495fb296199b3cbc299b133c5aaa024015f33\" returns successfully" Sep 4 17:30:41.500353 kubelet[2570]: I0904 17:30:41.500324 2570 scope.go:117] "RemoveContainer" containerID="d91501fc9b9e1ed68b024c16bc9f323a495adff711e3095aeb74947922f183c1" Sep 4 17:30:41.501520 containerd[1450]: time="2024-09-04T17:30:41.501478858Z" level=info msg="RemoveContainer for \"d91501fc9b9e1ed68b024c16bc9f323a495adff711e3095aeb74947922f183c1\"" Sep 4 17:30:41.505944 containerd[1450]: time="2024-09-04T17:30:41.505884396Z" level=info msg="RemoveContainer for \"d91501fc9b9e1ed68b024c16bc9f323a495adff711e3095aeb74947922f183c1\" returns successfully" Sep 4 17:30:41.506152 kubelet[2570]: I0904 17:30:41.506113 2570 scope.go:117] "RemoveContainer" 
containerID="e06a22fefce2f0924d295cc47cbf2beb1e1c814e318d18552ca0da7d3913dd0a" Sep 4 17:30:41.509006 containerd[1450]: time="2024-09-04T17:30:41.508933017Z" level=error msg="ContainerStatus for \"e06a22fefce2f0924d295cc47cbf2beb1e1c814e318d18552ca0da7d3913dd0a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e06a22fefce2f0924d295cc47cbf2beb1e1c814e318d18552ca0da7d3913dd0a\": not found" Sep 4 17:30:41.519869 kubelet[2570]: E0904 17:30:41.519826 2570 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e06a22fefce2f0924d295cc47cbf2beb1e1c814e318d18552ca0da7d3913dd0a\": not found" containerID="e06a22fefce2f0924d295cc47cbf2beb1e1c814e318d18552ca0da7d3913dd0a" Sep 4 17:30:41.519977 kubelet[2570]: I0904 17:30:41.519942 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e06a22fefce2f0924d295cc47cbf2beb1e1c814e318d18552ca0da7d3913dd0a"} err="failed to get container status \"e06a22fefce2f0924d295cc47cbf2beb1e1c814e318d18552ca0da7d3913dd0a\": rpc error: code = NotFound desc = an error occurred when try to find container \"e06a22fefce2f0924d295cc47cbf2beb1e1c814e318d18552ca0da7d3913dd0a\": not found" Sep 4 17:30:41.519977 kubelet[2570]: I0904 17:30:41.519958 2570 scope.go:117] "RemoveContainer" containerID="08b558dfd3c529abdb7a3a778b6b91ac8a629fc96fcb7896bf850e83dad5df48" Sep 4 17:30:41.520291 containerd[1450]: time="2024-09-04T17:30:41.520245902Z" level=error msg="ContainerStatus for \"08b558dfd3c529abdb7a3a778b6b91ac8a629fc96fcb7896bf850e83dad5df48\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"08b558dfd3c529abdb7a3a778b6b91ac8a629fc96fcb7896bf850e83dad5df48\": not found" Sep 4 17:30:41.520508 kubelet[2570]: E0904 17:30:41.520468 2570 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"08b558dfd3c529abdb7a3a778b6b91ac8a629fc96fcb7896bf850e83dad5df48\": not found" containerID="08b558dfd3c529abdb7a3a778b6b91ac8a629fc96fcb7896bf850e83dad5df48" Sep 4 17:30:41.520508 kubelet[2570]: I0904 17:30:41.520503 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"08b558dfd3c529abdb7a3a778b6b91ac8a629fc96fcb7896bf850e83dad5df48"} err="failed to get container status \"08b558dfd3c529abdb7a3a778b6b91ac8a629fc96fcb7896bf850e83dad5df48\": rpc error: code = NotFound desc = an error occurred when try to find container \"08b558dfd3c529abdb7a3a778b6b91ac8a629fc96fcb7896bf850e83dad5df48\": not found" Sep 4 17:30:41.520601 kubelet[2570]: I0904 17:30:41.520517 2570 scope.go:117] "RemoveContainer" containerID="4c9011d1c2a90d7ebdf9376854f00729239fae77f34cbb84fe130fb84a939f62" Sep 4 17:30:41.520877 containerd[1450]: time="2024-09-04T17:30:41.520819965Z" level=error msg="ContainerStatus for \"4c9011d1c2a90d7ebdf9376854f00729239fae77f34cbb84fe130fb84a939f62\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4c9011d1c2a90d7ebdf9376854f00729239fae77f34cbb84fe130fb84a939f62\": not found" Sep 4 17:30:41.521028 kubelet[2570]: E0904 17:30:41.521000 2570 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4c9011d1c2a90d7ebdf9376854f00729239fae77f34cbb84fe130fb84a939f62\": not found" 
containerID="4c9011d1c2a90d7ebdf9376854f00729239fae77f34cbb84fe130fb84a939f62" Sep 4 17:30:41.521085 kubelet[2570]: I0904 17:30:41.521043 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4c9011d1c2a90d7ebdf9376854f00729239fae77f34cbb84fe130fb84a939f62"} err="failed to get container status \"4c9011d1c2a90d7ebdf9376854f00729239fae77f34cbb84fe130fb84a939f62\": rpc error: code = NotFound desc = an error occurred when try to find container \"4c9011d1c2a90d7ebdf9376854f00729239fae77f34cbb84fe130fb84a939f62\": not found" Sep 4 17:30:41.521085 kubelet[2570]: I0904 17:30:41.521060 2570 scope.go:117] "RemoveContainer" containerID="dbafc0fd16f0a99757b9060e0b5495fb296199b3cbc299b133c5aaa024015f33" Sep 4 17:30:41.521300 containerd[1450]: time="2024-09-04T17:30:41.521254053Z" level=error msg="ContainerStatus for \"dbafc0fd16f0a99757b9060e0b5495fb296199b3cbc299b133c5aaa024015f33\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dbafc0fd16f0a99757b9060e0b5495fb296199b3cbc299b133c5aaa024015f33\": not found" Sep 4 17:30:41.521455 kubelet[2570]: E0904 17:30:41.521433 2570 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dbafc0fd16f0a99757b9060e0b5495fb296199b3cbc299b133c5aaa024015f33\": not found" containerID="dbafc0fd16f0a99757b9060e0b5495fb296199b3cbc299b133c5aaa024015f33" Sep 4 17:30:41.521513 kubelet[2570]: I0904 17:30:41.521465 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dbafc0fd16f0a99757b9060e0b5495fb296199b3cbc299b133c5aaa024015f33"} err="failed to get container status \"dbafc0fd16f0a99757b9060e0b5495fb296199b3cbc299b133c5aaa024015f33\": rpc error: code = NotFound desc = an error occurred when try to find container \"dbafc0fd16f0a99757b9060e0b5495fb296199b3cbc299b133c5aaa024015f33\": not found" Sep 4 17:30:41.521513 kubelet[2570]: I0904 17:30:41.521478 2570 scope.go:117] "RemoveContainer" containerID="d91501fc9b9e1ed68b024c16bc9f323a495adff711e3095aeb74947922f183c1" Sep 4 17:30:41.521733 containerd[1450]: time="2024-09-04T17:30:41.521683071Z" level=error msg="ContainerStatus for \"d91501fc9b9e1ed68b024c16bc9f323a495adff711e3095aeb74947922f183c1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d91501fc9b9e1ed68b024c16bc9f323a495adff711e3095aeb74947922f183c1\": not found" Sep 4 17:30:41.521886 kubelet[2570]: E0904 17:30:41.521868 2570 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d91501fc9b9e1ed68b024c16bc9f323a495adff711e3095aeb74947922f183c1\": not found" containerID="d91501fc9b9e1ed68b024c16bc9f323a495adff711e3095aeb74947922f183c1" Sep 4 17:30:41.521929 kubelet[2570]: I0904 17:30:41.521897 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d91501fc9b9e1ed68b024c16bc9f323a495adff711e3095aeb74947922f183c1"} err="failed to get container status \"d91501fc9b9e1ed68b024c16bc9f323a495adff711e3095aeb74947922f183c1\": rpc error: code = NotFound desc = an error occurred when try to find container \"d91501fc9b9e1ed68b024c16bc9f323a495adff711e3095aeb74947922f183c1\": not found" Sep 4 17:30:41.521929 kubelet[2570]: I0904 17:30:41.521909 2570 scope.go:117] "RemoveContainer" 
containerID="c1ab2cc1cce282ead9893c270548006000afdab456d1d125213e1c955b92aede" Sep 4 17:30:41.523052 containerd[1450]: time="2024-09-04T17:30:41.523025159Z" level=info msg="RemoveContainer for \"c1ab2cc1cce282ead9893c270548006000afdab456d1d125213e1c955b92aede\"" Sep 4 17:30:41.526801 containerd[1450]: time="2024-09-04T17:30:41.526755741Z" level=info msg="RemoveContainer for \"c1ab2cc1cce282ead9893c270548006000afdab456d1d125213e1c955b92aede\" returns successfully" Sep 4 17:30:41.527031 kubelet[2570]: I0904 17:30:41.526959 2570 scope.go:117] "RemoveContainer" containerID="c1ab2cc1cce282ead9893c270548006000afdab456d1d125213e1c955b92aede" Sep 4 17:30:41.527139 containerd[1450]: time="2024-09-04T17:30:41.527109505Z" level=error msg="ContainerStatus for \"c1ab2cc1cce282ead9893c270548006000afdab456d1d125213e1c955b92aede\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c1ab2cc1cce282ead9893c270548006000afdab456d1d125213e1c955b92aede\": not found" Sep 4 17:30:41.527302 kubelet[2570]: E0904 17:30:41.527277 2570 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c1ab2cc1cce282ead9893c270548006000afdab456d1d125213e1c955b92aede\": not found" containerID="c1ab2cc1cce282ead9893c270548006000afdab456d1d125213e1c955b92aede" Sep 4 17:30:41.527351 kubelet[2570]: I0904 17:30:41.527327 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c1ab2cc1cce282ead9893c270548006000afdab456d1d125213e1c955b92aede"} err="failed to get container status \"c1ab2cc1cce282ead9893c270548006000afdab456d1d125213e1c955b92aede\": rpc error: code = NotFound desc = an error occurred when try to find container \"c1ab2cc1cce282ead9893c270548006000afdab456d1d125213e1c955b92aede\": not found" Sep 4 17:30:41.549615 kubelet[2570]: I0904 17:30:41.549548 2570 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/81a99668-d619-41c4-b917-6185f65d0a91-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 4 17:30:41.549615 kubelet[2570]: I0904 17:30:41.549601 2570 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/81a99668-d619-41c4-b917-6185f65d0a91-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 4 17:30:41.549615 kubelet[2570]: I0904 17:30:41.549616 2570 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/81a99668-d619-41c4-b917-6185f65d0a91-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 4 17:30:41.549615 kubelet[2570]: I0904 17:30:41.549632 2570 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-krjml\" (UniqueName: \"kubernetes.io/projected/81a99668-d619-41c4-b917-6185f65d0a91-kube-api-access-krjml\") on node \"localhost\" DevicePath \"\"" Sep 4 17:30:41.550050 kubelet[2570]: I0904 17:30:41.549647 2570 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/81a99668-d619-41c4-b917-6185f65d0a91-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 4 17:30:41.550050 kubelet[2570]: I0904 17:30:41.549659 2570 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/81a99668-d619-41c4-b917-6185f65d0a91-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 4 17:30:41.550050 kubelet[2570]: I0904 17:30:41.549672 
2570 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/81a99668-d619-41c4-b917-6185f65d0a91-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 4 17:30:41.550050 kubelet[2570]: I0904 17:30:41.549686 2570 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/81a99668-d619-41c4-b917-6185f65d0a91-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 4 17:30:41.550050 kubelet[2570]: I0904 17:30:41.549699 2570 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/81a99668-d619-41c4-b917-6185f65d0a91-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 4 17:30:41.550050 kubelet[2570]: I0904 17:30:41.549712 2570 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/81a99668-d619-41c4-b917-6185f65d0a91-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 4 17:30:41.550050 kubelet[2570]: I0904 17:30:41.549736 2570 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/81a99668-d619-41c4-b917-6185f65d0a91-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 4 17:30:41.659831 systemd[1]: Removed slice kubepods-burstable-pod81a99668_d619_41c4_b917_6185f65d0a91.slice - libcontainer container kubepods-burstable-pod81a99668_d619_41c4_b917_6185f65d0a91.slice. Sep 4 17:30:41.660098 systemd[1]: kubepods-burstable-pod81a99668_d619_41c4_b917_6185f65d0a91.slice: Consumed 7.109s CPU time. Sep 4 17:30:42.109399 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d07f6e91d28fc758aaf7aaadd16c05c999b4899ab303dffedd76ba5f230a5ef5-rootfs.mount: Deactivated successfully. Sep 4 17:30:42.109505 systemd[1]: var-lib-kubelet-pods-302600ec\x2dac4e\x2d4a62\x2d96ba\x2d076e0959b549-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgvgq9.mount: Deactivated successfully. Sep 4 17:30:42.109589 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2254547068ae61eaddcc89f7104525494c1a5beb9affcb97a3783a3c02863672-rootfs.mount: Deactivated successfully. Sep 4 17:30:42.109680 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2254547068ae61eaddcc89f7104525494c1a5beb9affcb97a3783a3c02863672-shm.mount: Deactivated successfully. Sep 4 17:30:42.109819 systemd[1]: var-lib-kubelet-pods-81a99668\x2dd619\x2d41c4\x2db917\x2d6185f65d0a91-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkrjml.mount: Deactivated successfully. Sep 4 17:30:42.109913 systemd[1]: var-lib-kubelet-pods-81a99668\x2dd619\x2d41c4\x2db917\x2d6185f65d0a91-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 4 17:30:42.109995 systemd[1]: var-lib-kubelet-pods-81a99668\x2dd619\x2d41c4\x2db917\x2d6185f65d0a91-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Sep 4 17:30:42.191466 kubelet[2570]: I0904 17:30:42.191424 2570 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="302600ec-ac4e-4a62-96ba-076e0959b549" path="/var/lib/kubelet/pods/302600ec-ac4e-4a62-96ba-076e0959b549/volumes" Sep 4 17:30:42.192107 kubelet[2570]: I0904 17:30:42.192086 2570 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="81a99668-d619-41c4-b917-6185f65d0a91" path="/var/lib/kubelet/pods/81a99668-d619-41c4-b917-6185f65d0a91/volumes" Sep 4 17:30:42.242926 kubelet[2570]: E0904 17:30:42.242896 2570 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 4 17:30:43.072529 sshd[4209]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:43.086512 systemd[1]: sshd@25-10.0.0.44:22-10.0.0.1:41380.service: Deactivated successfully. Sep 4 17:30:43.088847 systemd[1]: session-26.scope: Deactivated successfully. Sep 4 17:30:43.091004 systemd-logind[1425]: Session 26 logged out. Waiting for processes to exit. Sep 4 17:30:43.101553 systemd[1]: Started sshd@26-10.0.0.44:22-10.0.0.1:41392.service - OpenSSH per-connection server daemon (10.0.0.1:41392). Sep 4 17:30:43.102806 systemd-logind[1425]: Removed session 26. Sep 4 17:30:43.134916 sshd[4368]: Accepted publickey for core from 10.0.0.1 port 41392 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:30:43.136838 sshd[4368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:30:43.141496 systemd-logind[1425]: New session 27 of user core. Sep 4 17:30:43.152925 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 4 17:30:43.714620 kubelet[2570]: I0904 17:30:43.714571 2570 setters.go:568] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-09-04T17:30:43Z","lastTransitionTime":"2024-09-04T17:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 4 17:30:43.816703 sshd[4368]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:43.827421 systemd[1]: sshd@26-10.0.0.44:22-10.0.0.1:41392.service: Deactivated successfully. 
Sep 4 17:30:43.833292 kubelet[2570]: I0904 17:30:43.831345 2570 topology_manager.go:215] "Topology Admit Handler" podUID="430d2b04-6e83-4e28-b7f0-fd8a4ee05c56" podNamespace="kube-system" podName="cilium-r6spk" Sep 4 17:30:43.833292 kubelet[2570]: E0904 17:30:43.831419 2570 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="81a99668-d619-41c4-b917-6185f65d0a91" containerName="mount-cgroup" Sep 4 17:30:43.833292 kubelet[2570]: E0904 17:30:43.831429 2570 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="81a99668-d619-41c4-b917-6185f65d0a91" containerName="mount-bpf-fs" Sep 4 17:30:43.833292 kubelet[2570]: E0904 17:30:43.831436 2570 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="302600ec-ac4e-4a62-96ba-076e0959b549" containerName="cilium-operator" Sep 4 17:30:43.833292 kubelet[2570]: E0904 17:30:43.831443 2570 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="81a99668-d619-41c4-b917-6185f65d0a91" containerName="clean-cilium-state" Sep 4 17:30:43.833292 kubelet[2570]: E0904 17:30:43.831450 2570 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="81a99668-d619-41c4-b917-6185f65d0a91" containerName="cilium-agent" Sep 4 17:30:43.833292 kubelet[2570]: E0904 17:30:43.831457 2570 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="81a99668-d619-41c4-b917-6185f65d0a91" containerName="apply-sysctl-overwrites" Sep 4 17:30:43.833292 kubelet[2570]: I0904 17:30:43.831481 2570 memory_manager.go:354] "RemoveStaleState removing state" podUID="81a99668-d619-41c4-b917-6185f65d0a91" containerName="cilium-agent" Sep 4 17:30:43.833292 kubelet[2570]: I0904 17:30:43.831487 2570 memory_manager.go:354] "RemoveStaleState removing state" podUID="302600ec-ac4e-4a62-96ba-076e0959b549" containerName="cilium-operator" Sep 4 17:30:43.834955 systemd[1]: session-27.scope: Deactivated successfully. Sep 4 17:30:43.838840 systemd-logind[1425]: Session 27 logged out. Waiting for processes to exit. Sep 4 17:30:43.850101 systemd[1]: Started sshd@27-10.0.0.44:22-10.0.0.1:41404.service - OpenSSH per-connection server daemon (10.0.0.1:41404). Sep 4 17:30:43.855331 systemd-logind[1425]: Removed session 27. Sep 4 17:30:43.862918 systemd[1]: Created slice kubepods-burstable-pod430d2b04_6e83_4e28_b7f0_fd8a4ee05c56.slice - libcontainer container kubepods-burstable-pod430d2b04_6e83_4e28_b7f0_fd8a4ee05c56.slice. Sep 4 17:30:43.881878 sshd[4382]: Accepted publickey for core from 10.0.0.1 port 41404 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:30:43.882555 sshd[4382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:30:43.891643 systemd-logind[1425]: New session 28 of user core. Sep 4 17:30:43.897564 systemd[1]: Started session-28.scope - Session 28 of User core. 
Sep 4 17:30:43.961999 sshd[4382]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:43.965372 kubelet[2570]: I0904 17:30:43.965202 2570 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/430d2b04-6e83-4e28-b7f0-fd8a4ee05c56-host-proc-sys-net\") pod \"cilium-r6spk\" (UID: \"430d2b04-6e83-4e28-b7f0-fd8a4ee05c56\") " pod="kube-system/cilium-r6spk" Sep 4 17:30:43.965372 kubelet[2570]: I0904 17:30:43.965268 2570 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6hj4\" (UniqueName: \"kubernetes.io/projected/430d2b04-6e83-4e28-b7f0-fd8a4ee05c56-kube-api-access-w6hj4\") pod \"cilium-r6spk\" (UID: \"430d2b04-6e83-4e28-b7f0-fd8a4ee05c56\") " pod="kube-system/cilium-r6spk" Sep 4 17:30:43.965372 kubelet[2570]: I0904 17:30:43.965350 2570 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/430d2b04-6e83-4e28-b7f0-fd8a4ee05c56-cilium-run\") pod \"cilium-r6spk\" (UID: \"430d2b04-6e83-4e28-b7f0-fd8a4ee05c56\") " pod="kube-system/cilium-r6spk" Sep 4 17:30:43.965624 kubelet[2570]: I0904 17:30:43.965398 2570 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/430d2b04-6e83-4e28-b7f0-fd8a4ee05c56-etc-cni-netd\") pod \"cilium-r6spk\" (UID: \"430d2b04-6e83-4e28-b7f0-fd8a4ee05c56\") " pod="kube-system/cilium-r6spk" Sep 4 17:30:43.965624 kubelet[2570]: I0904 17:30:43.965433 2570 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/430d2b04-6e83-4e28-b7f0-fd8a4ee05c56-host-proc-sys-kernel\") pod \"cilium-r6spk\" (UID: \"430d2b04-6e83-4e28-b7f0-fd8a4ee05c56\") " pod="kube-system/cilium-r6spk" Sep 4 17:30:43.965624 kubelet[2570]: I0904 17:30:43.965461 2570 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/430d2b04-6e83-4e28-b7f0-fd8a4ee05c56-hubble-tls\") pod \"cilium-r6spk\" (UID: \"430d2b04-6e83-4e28-b7f0-fd8a4ee05c56\") " pod="kube-system/cilium-r6spk" Sep 4 17:30:43.965624 kubelet[2570]: I0904 17:30:43.965490 2570 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/430d2b04-6e83-4e28-b7f0-fd8a4ee05c56-cilium-config-path\") pod \"cilium-r6spk\" (UID: \"430d2b04-6e83-4e28-b7f0-fd8a4ee05c56\") " pod="kube-system/cilium-r6spk" Sep 4 17:30:43.965624 kubelet[2570]: I0904 17:30:43.965518 2570 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/430d2b04-6e83-4e28-b7f0-fd8a4ee05c56-cilium-cgroup\") pod \"cilium-r6spk\" (UID: \"430d2b04-6e83-4e28-b7f0-fd8a4ee05c56\") " pod="kube-system/cilium-r6spk" Sep 4 17:30:43.965624 kubelet[2570]: I0904 17:30:43.965544 2570 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/430d2b04-6e83-4e28-b7f0-fd8a4ee05c56-clustermesh-secrets\") pod \"cilium-r6spk\" (UID: \"430d2b04-6e83-4e28-b7f0-fd8a4ee05c56\") " pod="kube-system/cilium-r6spk" Sep 4 17:30:43.965887 kubelet[2570]: I0904 17:30:43.965575 2570 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/430d2b04-6e83-4e28-b7f0-fd8a4ee05c56-cilium-ipsec-secrets\") pod \"cilium-r6spk\" (UID: \"430d2b04-6e83-4e28-b7f0-fd8a4ee05c56\") " pod="kube-system/cilium-r6spk" Sep 4 17:30:43.965887 kubelet[2570]: I0904 17:30:43.965608 2570 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/430d2b04-6e83-4e28-b7f0-fd8a4ee05c56-bpf-maps\") pod \"cilium-r6spk\" (UID: \"430d2b04-6e83-4e28-b7f0-fd8a4ee05c56\") " pod="kube-system/cilium-r6spk" Sep 4 17:30:43.965887 kubelet[2570]: I0904 17:30:43.965638 2570 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/430d2b04-6e83-4e28-b7f0-fd8a4ee05c56-xtables-lock\") pod \"cilium-r6spk\" (UID: \"430d2b04-6e83-4e28-b7f0-fd8a4ee05c56\") " pod="kube-system/cilium-r6spk" Sep 4 17:30:43.965887 kubelet[2570]: I0904 17:30:43.965668 2570 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/430d2b04-6e83-4e28-b7f0-fd8a4ee05c56-hostproc\") pod \"cilium-r6spk\" (UID: \"430d2b04-6e83-4e28-b7f0-fd8a4ee05c56\") " pod="kube-system/cilium-r6spk" Sep 4 17:30:43.965887 kubelet[2570]: I0904 17:30:43.965697 2570 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/430d2b04-6e83-4e28-b7f0-fd8a4ee05c56-cni-path\") pod \"cilium-r6spk\" (UID: \"430d2b04-6e83-4e28-b7f0-fd8a4ee05c56\") " pod="kube-system/cilium-r6spk" Sep 4 17:30:43.965887 kubelet[2570]: I0904 17:30:43.965725 2570 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/430d2b04-6e83-4e28-b7f0-fd8a4ee05c56-lib-modules\") pod \"cilium-r6spk\" (UID: \"430d2b04-6e83-4e28-b7f0-fd8a4ee05c56\") " pod="kube-system/cilium-r6spk" Sep 4 17:30:43.980356 systemd[1]: sshd@27-10.0.0.44:22-10.0.0.1:41404.service: Deactivated successfully. Sep 4 17:30:43.983634 systemd[1]: session-28.scope: Deactivated successfully. Sep 4 17:30:43.986226 systemd-logind[1425]: Session 28 logged out. Waiting for processes to exit. Sep 4 17:30:43.995179 systemd[1]: Started sshd@28-10.0.0.44:22-10.0.0.1:41406.service - OpenSSH per-connection server daemon (10.0.0.1:41406). Sep 4 17:30:43.996325 systemd-logind[1425]: Removed session 28. Sep 4 17:30:44.029760 sshd[4392]: Accepted publickey for core from 10.0.0.1 port 41406 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:30:44.032088 sshd[4392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:30:44.038076 systemd-logind[1425]: New session 29 of user core. Sep 4 17:30:44.046959 systemd[1]: Started session-29.scope - Session 29 of User core. 
Sep 4 17:30:44.167930 kubelet[2570]: E0904 17:30:44.167841 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:44.169017 containerd[1450]: time="2024-09-04T17:30:44.168613514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r6spk,Uid:430d2b04-6e83-4e28-b7f0-fd8a4ee05c56,Namespace:kube-system,Attempt:0,}" Sep 4 17:30:44.196544 containerd[1450]: time="2024-09-04T17:30:44.196214385Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:30:44.196544 containerd[1450]: time="2024-09-04T17:30:44.196306930Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:30:44.196544 containerd[1450]: time="2024-09-04T17:30:44.196346546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:30:44.197634 containerd[1450]: time="2024-09-04T17:30:44.197528636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:30:44.222987 systemd[1]: Started cri-containerd-f2c95a2fc35623042b48034ac3916e21234c17fc98a4d9292726c99bd22f6843.scope - libcontainer container f2c95a2fc35623042b48034ac3916e21234c17fc98a4d9292726c99bd22f6843. Sep 4 17:30:44.252861 containerd[1450]: time="2024-09-04T17:30:44.252769083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r6spk,Uid:430d2b04-6e83-4e28-b7f0-fd8a4ee05c56,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2c95a2fc35623042b48034ac3916e21234c17fc98a4d9292726c99bd22f6843\"" Sep 4 17:30:44.253941 kubelet[2570]: E0904 17:30:44.253909 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:44.256414 containerd[1450]: time="2024-09-04T17:30:44.256354569Z" level=info msg="CreateContainer within sandbox \"f2c95a2fc35623042b48034ac3916e21234c17fc98a4d9292726c99bd22f6843\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 4 17:30:44.275067 containerd[1450]: time="2024-09-04T17:30:44.275001623Z" level=info msg="CreateContainer within sandbox \"f2c95a2fc35623042b48034ac3916e21234c17fc98a4d9292726c99bd22f6843\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"91c1f3f13d2b94af393da54a26cf89905ded7dbaf49a91198bddf14357bb7581\"" Sep 4 17:30:44.275725 containerd[1450]: time="2024-09-04T17:30:44.275695504Z" level=info msg="StartContainer for \"91c1f3f13d2b94af393da54a26cf89905ded7dbaf49a91198bddf14357bb7581\"" Sep 4 17:30:44.313101 systemd[1]: Started cri-containerd-91c1f3f13d2b94af393da54a26cf89905ded7dbaf49a91198bddf14357bb7581.scope - libcontainer container 91c1f3f13d2b94af393da54a26cf89905ded7dbaf49a91198bddf14357bb7581. Sep 4 17:30:44.376140 systemd[1]: cri-containerd-91c1f3f13d2b94af393da54a26cf89905ded7dbaf49a91198bddf14357bb7581.scope: Deactivated successfully. 
Sep 4 17:30:44.386458 containerd[1450]: time="2024-09-04T17:30:44.386390954Z" level=info msg="StartContainer for \"91c1f3f13d2b94af393da54a26cf89905ded7dbaf49a91198bddf14357bb7581\" returns successfully" Sep 4 17:30:44.587187 containerd[1450]: time="2024-09-04T17:30:44.586996048Z" level=info msg="shim disconnected" id=91c1f3f13d2b94af393da54a26cf89905ded7dbaf49a91198bddf14357bb7581 namespace=k8s.io Sep 4 17:30:44.587187 containerd[1450]: time="2024-09-04T17:30:44.587055181Z" level=warning msg="cleaning up after shim disconnected" id=91c1f3f13d2b94af393da54a26cf89905ded7dbaf49a91198bddf14357bb7581 namespace=k8s.io Sep 4 17:30:44.587187 containerd[1450]: time="2024-09-04T17:30:44.587064629Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:30:45.393096 kubelet[2570]: E0904 17:30:45.392854 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:45.396005 containerd[1450]: time="2024-09-04T17:30:45.395282700Z" level=info msg="CreateContainer within sandbox \"f2c95a2fc35623042b48034ac3916e21234c17fc98a4d9292726c99bd22f6843\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 4 17:30:45.433390 containerd[1450]: time="2024-09-04T17:30:45.433309505Z" level=info msg="CreateContainer within sandbox \"f2c95a2fc35623042b48034ac3916e21234c17fc98a4d9292726c99bd22f6843\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0f398ea35f7298f2b08eb7d5e795a95a1103a3831558df3e7e2f74f602276563\"" Sep 4 17:30:45.434123 containerd[1450]: time="2024-09-04T17:30:45.434077607Z" level=info msg="StartContainer for \"0f398ea35f7298f2b08eb7d5e795a95a1103a3831558df3e7e2f74f602276563\"" Sep 4 17:30:45.473091 systemd[1]: Started cri-containerd-0f398ea35f7298f2b08eb7d5e795a95a1103a3831558df3e7e2f74f602276563.scope - libcontainer container 0f398ea35f7298f2b08eb7d5e795a95a1103a3831558df3e7e2f74f602276563. Sep 4 17:30:45.505405 containerd[1450]: time="2024-09-04T17:30:45.505319219Z" level=info msg="StartContainer for \"0f398ea35f7298f2b08eb7d5e795a95a1103a3831558df3e7e2f74f602276563\" returns successfully" Sep 4 17:30:45.514031 systemd[1]: cri-containerd-0f398ea35f7298f2b08eb7d5e795a95a1103a3831558df3e7e2f74f602276563.scope: Deactivated successfully. Sep 4 17:30:45.535727 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f398ea35f7298f2b08eb7d5e795a95a1103a3831558df3e7e2f74f602276563-rootfs.mount: Deactivated successfully. 
Sep 4 17:30:45.540661 containerd[1450]: time="2024-09-04T17:30:45.540593174Z" level=info msg="shim disconnected" id=0f398ea35f7298f2b08eb7d5e795a95a1103a3831558df3e7e2f74f602276563 namespace=k8s.io Sep 4 17:30:45.540661 containerd[1450]: time="2024-09-04T17:30:45.540650483Z" level=warning msg="cleaning up after shim disconnected" id=0f398ea35f7298f2b08eb7d5e795a95a1103a3831558df3e7e2f74f602276563 namespace=k8s.io Sep 4 17:30:45.540661 containerd[1450]: time="2024-09-04T17:30:45.540659230Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:30:46.189587 kubelet[2570]: E0904 17:30:46.189550 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:46.396754 kubelet[2570]: E0904 17:30:46.396701 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:46.399204 containerd[1450]: time="2024-09-04T17:30:46.399168688Z" level=info msg="CreateContainer within sandbox \"f2c95a2fc35623042b48034ac3916e21234c17fc98a4d9292726c99bd22f6843\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 4 17:30:46.420261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount936139832.mount: Deactivated successfully. Sep 4 17:30:46.422684 containerd[1450]: time="2024-09-04T17:30:46.422643120Z" level=info msg="CreateContainer within sandbox \"f2c95a2fc35623042b48034ac3916e21234c17fc98a4d9292726c99bd22f6843\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"03cb04d55004a860712761eeafc4e992703a4a56d206d756e712b31c065235ed\"" Sep 4 17:30:46.423297 containerd[1450]: time="2024-09-04T17:30:46.423218665Z" level=info msg="StartContainer for \"03cb04d55004a860712761eeafc4e992703a4a56d206d756e712b31c065235ed\"" Sep 4 17:30:46.454999 systemd[1]: Started cri-containerd-03cb04d55004a860712761eeafc4e992703a4a56d206d756e712b31c065235ed.scope - libcontainer container 03cb04d55004a860712761eeafc4e992703a4a56d206d756e712b31c065235ed. Sep 4 17:30:46.492585 systemd[1]: cri-containerd-03cb04d55004a860712761eeafc4e992703a4a56d206d756e712b31c065235ed.scope: Deactivated successfully. Sep 4 17:30:46.562773 containerd[1450]: time="2024-09-04T17:30:46.562675373Z" level=info msg="StartContainer for \"03cb04d55004a860712761eeafc4e992703a4a56d206d756e712b31c065235ed\" returns successfully" Sep 4 17:30:46.588698 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03cb04d55004a860712761eeafc4e992703a4a56d206d756e712b31c065235ed-rootfs.mount: Deactivated successfully. 
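The repeated dns.go:153 "Nameserver limits exceeded" events here mean the resolv.conf the kubelet uses for pod DNS lists more nameservers than it will pass through, so it keeps only the first entries (the applied line shows three: 1.1.1.1, 1.0.0.1, 8.8.8.8) and drops the rest. A minimal Python sketch of that truncation behaviour is below; it is an illustration of the idea only, not kubelet's actual code, and the fourth nameserver in the example is hypothetical.

MAX_NAMESERVERS = 3  # assumption: the applied line above carries at most three entries

def applied_nameservers(resolv_conf_text, limit=MAX_NAMESERVERS):
    """Return (kept, omitted) nameserver lists, mimicking the truncation behind
    the "Nameserver limits exceeded" event. Illustrative sketch only."""
    servers = [
        parts[1]
        for parts in (line.split() for line in resolv_conf_text.splitlines())
        if len(parts) >= 2 and parts[0] == "nameserver"
    ]
    return servers[:limit], servers[limit:]

# Example: a node resolv.conf with four entries would reproduce the log line above.
kept, omitted = applied_nameservers(
    "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
)
print("the applied nameserver line is:", " ".join(kept))  # 1.1.1.1 1.0.0.1 8.8.8.8
print("omitted:", omitted)                                 # ['9.9.9.9']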
Sep 4 17:30:46.734974 containerd[1450]: time="2024-09-04T17:30:46.734816136Z" level=info msg="shim disconnected" id=03cb04d55004a860712761eeafc4e992703a4a56d206d756e712b31c065235ed namespace=k8s.io Sep 4 17:30:46.734974 containerd[1450]: time="2024-09-04T17:30:46.734874446Z" level=warning msg="cleaning up after shim disconnected" id=03cb04d55004a860712761eeafc4e992703a4a56d206d756e712b31c065235ed namespace=k8s.io Sep 4 17:30:46.734974 containerd[1450]: time="2024-09-04T17:30:46.734885368Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:30:47.244440 kubelet[2570]: E0904 17:30:47.244396 2570 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 4 17:30:47.401274 kubelet[2570]: E0904 17:30:47.401228 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:47.404563 containerd[1450]: time="2024-09-04T17:30:47.404518756Z" level=info msg="CreateContainer within sandbox \"f2c95a2fc35623042b48034ac3916e21234c17fc98a4d9292726c99bd22f6843\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 4 17:30:47.441753 containerd[1450]: time="2024-09-04T17:30:47.441554836Z" level=info msg="CreateContainer within sandbox \"f2c95a2fc35623042b48034ac3916e21234c17fc98a4d9292726c99bd22f6843\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"24cfb3abe432139e67bbcb1b508ec7e4bc08c21e2c82ef96446b6c445a6862c2\"" Sep 4 17:30:47.442627 containerd[1450]: time="2024-09-04T17:30:47.442573353Z" level=info msg="StartContainer for \"24cfb3abe432139e67bbcb1b508ec7e4bc08c21e2c82ef96446b6c445a6862c2\"" Sep 4 17:30:47.492158 systemd[1]: Started cri-containerd-24cfb3abe432139e67bbcb1b508ec7e4bc08c21e2c82ef96446b6c445a6862c2.scope - libcontainer container 24cfb3abe432139e67bbcb1b508ec7e4bc08c21e2c82ef96446b6c445a6862c2. Sep 4 17:30:47.529437 systemd[1]: cri-containerd-24cfb3abe432139e67bbcb1b508ec7e4bc08c21e2c82ef96446b6c445a6862c2.scope: Deactivated successfully. 
Sep 4 17:30:47.536542 containerd[1450]: time="2024-09-04T17:30:47.536475198Z" level=info msg="StartContainer for \"24cfb3abe432139e67bbcb1b508ec7e4bc08c21e2c82ef96446b6c445a6862c2\" returns successfully" Sep 4 17:30:47.568200 containerd[1450]: time="2024-09-04T17:30:47.568127878Z" level=info msg="shim disconnected" id=24cfb3abe432139e67bbcb1b508ec7e4bc08c21e2c82ef96446b6c445a6862c2 namespace=k8s.io Sep 4 17:30:47.568200 containerd[1450]: time="2024-09-04T17:30:47.568191760Z" level=warning msg="cleaning up after shim disconnected" id=24cfb3abe432139e67bbcb1b508ec7e4bc08c21e2c82ef96446b6c445a6862c2 namespace=k8s.io Sep 4 17:30:47.568200 containerd[1450]: time="2024-09-04T17:30:47.568202430Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:30:48.407095 kubelet[2570]: E0904 17:30:48.407057 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:48.411604 containerd[1450]: time="2024-09-04T17:30:48.411449093Z" level=info msg="CreateContainer within sandbox \"f2c95a2fc35623042b48034ac3916e21234c17fc98a4d9292726c99bd22f6843\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 4 17:30:48.428337 systemd[1]: run-containerd-runc-k8s.io-24cfb3abe432139e67bbcb1b508ec7e4bc08c21e2c82ef96446b6c445a6862c2-runc.J46LN2.mount: Deactivated successfully. Sep 4 17:30:48.428474 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24cfb3abe432139e67bbcb1b508ec7e4bc08c21e2c82ef96446b6c445a6862c2-rootfs.mount: Deactivated successfully. Sep 4 17:30:48.430617 containerd[1450]: time="2024-09-04T17:30:48.430572782Z" level=info msg="CreateContainer within sandbox \"f2c95a2fc35623042b48034ac3916e21234c17fc98a4d9292726c99bd22f6843\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bb5a7cb46bba3104538e11d0b0f40560d886871d96d62efe643f4a9a248db454\"" Sep 4 17:30:48.431313 containerd[1450]: time="2024-09-04T17:30:48.431266100Z" level=info msg="StartContainer for \"bb5a7cb46bba3104538e11d0b0f40560d886871d96d62efe643f4a9a248db454\"" Sep 4 17:30:48.458307 systemd[1]: run-containerd-runc-k8s.io-bb5a7cb46bba3104538e11d0b0f40560d886871d96d62efe643f4a9a248db454-runc.YQkrRQ.mount: Deactivated successfully. Sep 4 17:30:48.469175 systemd[1]: Started cri-containerd-bb5a7cb46bba3104538e11d0b0f40560d886871d96d62efe643f4a9a248db454.scope - libcontainer container bb5a7cb46bba3104538e11d0b0f40560d886871d96d62efe643f4a9a248db454. Sep 4 17:30:48.567052 containerd[1450]: time="2024-09-04T17:30:48.566979361Z" level=info msg="StartContainer for \"bb5a7cb46bba3104538e11d0b0f40560d886871d96d62efe643f4a9a248db454\" returns successfully" Sep 4 17:30:49.024874 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Sep 4 17:30:49.412577 kubelet[2570]: E0904 17:30:49.412525 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:50.414426 kubelet[2570]: E0904 17:30:50.414379 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:50.450196 systemd[1]: run-containerd-runc-k8s.io-bb5a7cb46bba3104538e11d0b0f40560d886871d96d62efe643f4a9a248db454-runc.qEqCSs.mount: Deactivated successfully. 
Sep 4 17:30:51.189472 kubelet[2570]: E0904 17:30:51.189395 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:51.416655 kubelet[2570]: E0904 17:30:51.416608 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:52.856361 systemd-networkd[1383]: lxc_health: Link UP Sep 4 17:30:52.878092 systemd-networkd[1383]: lxc_health: Gained carrier Sep 4 17:30:54.171365 kubelet[2570]: E0904 17:30:54.170523 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:54.212824 kubelet[2570]: I0904 17:30:54.211219 2570 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-r6spk" podStartSLOduration=11.211161364 podStartE2EDuration="11.211161364s" podCreationTimestamp="2024-09-04 17:30:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:30:49.613573562 +0000 UTC m=+87.522335778" watchObservedRunningTime="2024-09-04 17:30:54.211161364 +0000 UTC m=+92.119923610" Sep 4 17:30:54.435822 kubelet[2570]: E0904 17:30:54.435599 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:54.735108 systemd-networkd[1383]: lxc_health: Gained IPv6LL Sep 4 17:30:55.439247 kubelet[2570]: E0904 17:30:55.439199 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:58.190830 kubelet[2570]: E0904 17:30:58.189689 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:30:59.678561 sshd[4392]: pam_unix(sshd:session): session closed for user core Sep 4 17:30:59.702534 systemd[1]: sshd@28-10.0.0.44:22-10.0.0.1:41406.service: Deactivated successfully. Sep 4 17:30:59.708364 systemd[1]: session-29.scope: Deactivated successfully. Sep 4 17:30:59.709927 systemd-logind[1425]: Session 29 logged out. Waiting for processes to exit. Sep 4 17:30:59.711502 systemd-logind[1425]: Removed session 29. Sep 4 17:31:00.192333 kubelet[2570]: E0904 17:31:00.191695 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
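The pod_startup_latency_tracker entry above reports podStartSLOduration=11.211161364s for cilium-r6spk. With both pull timestamps at the zero value (no image pull to subtract), that figure is simply observedRunningTime minus podCreationTimestamp: 17:30:54.211161364 − 17:30:43 ≈ 11.211 s. A quick check in Python, using the timestamps copied from the log entry (truncated to microseconds, hence the "~"):

from datetime import datetime

fmt = "%Y-%m-%d %H:%M:%S.%f %z"
created = datetime.strptime("2024-09-04 17:30:43.000000 +0000", fmt)  # podCreationTimestamp
running = datetime.strptime("2024-09-04 17:30:54.211161 +0000", fmt)  # observedRunningTime

# No image-pull duration to subtract here, so the SLO duration is just the difference.
print((running - created).total_seconds())  # ~11.211161, matching podStartSLOduration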