Sep 4 17:36:28.896651 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 4 15:54:07 -00 2024 Sep 4 17:36:28.896671 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d Sep 4 17:36:28.896682 kernel: BIOS-provided physical RAM map: Sep 4 17:36:28.896689 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 4 17:36:28.896695 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Sep 4 17:36:28.896701 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Sep 4 17:36:28.896708 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Sep 4 17:36:28.896714 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Sep 4 17:36:28.896720 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Sep 4 17:36:28.896727 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Sep 4 17:36:28.896735 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Sep 4 17:36:28.896742 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Sep 4 17:36:28.896748 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Sep 4 17:36:28.896754 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Sep 4 17:36:28.896762 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Sep 4 17:36:28.896773 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Sep 4 17:36:28.896783 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Sep 4 17:36:28.896789 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Sep 4 17:36:28.896796 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Sep 4 17:36:28.896802 kernel: NX (Execute Disable) protection: active Sep 4 17:36:28.896811 kernel: APIC: Static calls initialized Sep 4 17:36:28.896818 kernel: efi: EFI v2.7 by EDK II Sep 4 17:36:28.896825 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b775198 Sep 4 17:36:28.896832 kernel: SMBIOS 2.8 present. 
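The BIOS-e820 table above is the firmware's physical-memory map; the regions marked usable are what the kernel can manage, and the "Memory: 2399076K/2567000K available" figure later in this log is derived from them after reservations. A minimal sketch in Python of summing those usable regions, assuming the log text has been saved to a file (boot.log is a hypothetical name):

    import re

    USABLE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] usable")

    def usable_kib(log_text):
        # e820 ranges are inclusive, so each region spans end - start + 1 bytes.
        total = sum(int(end, 16) - int(start, 16) + 1
                    for start, end in USABLE.findall(log_text))
        return total // 1024

    # usable_kib(open("boot.log").read()) lands close to the 2567000K total
    # that the kernel reports further down in this log.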
Sep 4 17:36:28.896838 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015 Sep 4 17:36:28.896845 kernel: Hypervisor detected: KVM Sep 4 17:36:28.896852 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 4 17:36:28.896861 kernel: kvm-clock: using sched offset of 4815092715 cycles Sep 4 17:36:28.896868 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 4 17:36:28.896875 kernel: tsc: Detected 2794.744 MHz processor Sep 4 17:36:28.896882 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 4 17:36:28.896889 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 4 17:36:28.896896 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Sep 4 17:36:28.896927 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Sep 4 17:36:28.896934 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 4 17:36:28.896941 kernel: Using GB pages for direct mapping Sep 4 17:36:28.896951 kernel: Secure boot disabled Sep 4 17:36:28.896958 kernel: ACPI: Early table checksum verification disabled Sep 4 17:36:28.896964 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Sep 4 17:36:28.896972 kernel: ACPI: XSDT 0x000000009CB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013) Sep 4 17:36:28.896982 kernel: ACPI: FACP 0x000000009CB7A000 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:36:28.896990 kernel: ACPI: DSDT 0x000000009CB7B000 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:36:28.896997 kernel: ACPI: FACS 0x000000009CBDD000 000040 Sep 4 17:36:28.897006 kernel: ACPI: APIC 0x000000009CB79000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:36:28.897014 kernel: ACPI: HPET 0x000000009CB78000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:36:28.897041 kernel: ACPI: WAET 0x000000009CB77000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:36:28.897048 kernel: ACPI: BGRT 0x000000009CB76000 000038 (v01 INTEL EDK2 00000002 01000013) Sep 4 17:36:28.897055 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb7a000-0x9cb7a073] Sep 4 17:36:28.897063 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7b000-0x9cb7ca38] Sep 4 17:36:28.897070 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Sep 4 17:36:28.897080 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb79000-0x9cb7908f] Sep 4 17:36:28.897087 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb78000-0x9cb78037] Sep 4 17:36:28.897094 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb77000-0x9cb77027] Sep 4 17:36:28.897101 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb76000-0x9cb76037] Sep 4 17:36:28.897108 kernel: No NUMA configuration found Sep 4 17:36:28.897115 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Sep 4 17:36:28.897122 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Sep 4 17:36:28.897129 kernel: Zone ranges: Sep 4 17:36:28.897136 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 4 17:36:28.897146 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Sep 4 17:36:28.897153 kernel: Normal empty Sep 4 17:36:28.897160 kernel: Movable zone start for each node Sep 4 17:36:28.897167 kernel: Early memory node ranges Sep 4 17:36:28.897174 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Sep 4 17:36:28.897182 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Sep 4 17:36:28.897189 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Sep 4 17:36:28.897196 
kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Sep 4 17:36:28.897205 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Sep 4 17:36:28.897213 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Sep 4 17:36:28.897222 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Sep 4 17:36:28.897229 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 4 17:36:28.897237 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 4 17:36:28.897244 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Sep 4 17:36:28.897251 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 4 17:36:28.897260 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Sep 4 17:36:28.897268 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Sep 4 17:36:28.897278 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Sep 4 17:36:28.897285 kernel: ACPI: PM-Timer IO Port: 0xb008 Sep 4 17:36:28.897295 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 4 17:36:28.897302 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 4 17:36:28.897309 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 4 17:36:28.897317 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 4 17:36:28.897324 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 4 17:36:28.897331 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 4 17:36:28.897338 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 4 17:36:28.897345 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 4 17:36:28.897352 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 4 17:36:28.897362 kernel: TSC deadline timer available Sep 4 17:36:28.897369 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Sep 4 17:36:28.897376 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 4 17:36:28.897383 kernel: kvm-guest: KVM setup pv remote TLB flush Sep 4 17:36:28.897390 kernel: kvm-guest: setup PV sched yield Sep 4 17:36:28.897397 kernel: [mem 0x9d000000-0xffffffff] available for PCI devices Sep 4 17:36:28.897404 kernel: Booting paravirtualized kernel on KVM Sep 4 17:36:28.897412 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 4 17:36:28.897419 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Sep 4 17:36:28.897429 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u524288 Sep 4 17:36:28.897436 kernel: pcpu-alloc: s196904 r8192 d32472 u524288 alloc=1*2097152 Sep 4 17:36:28.897443 kernel: pcpu-alloc: [0] 0 1 2 3 Sep 4 17:36:28.897450 kernel: kvm-guest: PV spinlocks enabled Sep 4 17:36:28.897457 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 4 17:36:28.897465 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d Sep 4 17:36:28.897473 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
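The kernel echoes the command line a second time here, with dracut's rootflags=rw mount.usrflags=ro prepended to the boot stub's version. A small sketch of splitting such a line into key/value pairs; the string below is an abbreviated copy of parameters already shown in this log, nothing new:

    cmdline = ("rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a "
               "mount.usr=/dev/mapper/usr "
               "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 "
               "root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected")

    params = {}
    for token in cmdline.split():
        key, _, value = token.partition("=")    # only the first '=' splits, so
        params[key] = value or True             # verity.usr keeps its PARTUUID=... value

    # params["root"] == "LABEL=ROOT", params["mount.usr"] == "/dev/mapper/usr"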
Sep 4 17:36:28.897480 kernel: random: crng init done Sep 4 17:36:28.897490 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 4 17:36:28.897497 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 4 17:36:28.897504 kernel: Fallback order for Node 0: 0 Sep 4 17:36:28.897512 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Sep 4 17:36:28.897519 kernel: Policy zone: DMA32 Sep 4 17:36:28.897526 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 4 17:36:28.897533 kernel: Memory: 2399076K/2567000K available (12288K kernel code, 2304K rwdata, 22708K rodata, 42704K init, 2488K bss, 167664K reserved, 0K cma-reserved) Sep 4 17:36:28.897541 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 4 17:36:28.897548 kernel: ftrace: allocating 37748 entries in 148 pages Sep 4 17:36:28.897559 kernel: ftrace: allocated 148 pages with 3 groups Sep 4 17:36:28.897567 kernel: Dynamic Preempt: voluntary Sep 4 17:36:28.897574 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 4 17:36:28.897582 kernel: rcu: RCU event tracing is enabled. Sep 4 17:36:28.897590 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 4 17:36:28.897606 kernel: Trampoline variant of Tasks RCU enabled. Sep 4 17:36:28.897614 kernel: Rude variant of Tasks RCU enabled. Sep 4 17:36:28.897621 kernel: Tracing variant of Tasks RCU enabled. Sep 4 17:36:28.897629 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 4 17:36:28.897636 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 4 17:36:28.897643 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Sep 4 17:36:28.897651 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 4 17:36:28.897660 kernel: Console: colour dummy device 80x25 Sep 4 17:36:28.897668 kernel: printk: console [ttyS0] enabled Sep 4 17:36:28.897675 kernel: ACPI: Core revision 20230628 Sep 4 17:36:28.897683 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 4 17:36:28.897691 kernel: APIC: Switch to symmetric I/O mode setup Sep 4 17:36:28.897703 kernel: x2apic enabled Sep 4 17:36:28.897711 kernel: APIC: Switched APIC routing to: physical x2apic Sep 4 17:36:28.897718 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Sep 4 17:36:28.897726 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Sep 4 17:36:28.897733 kernel: kvm-guest: setup PV IPIs Sep 4 17:36:28.897741 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 4 17:36:28.897748 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Sep 4 17:36:28.897756 kernel: Calibrating delay loop (skipped) preset value.. 
5589.48 BogoMIPS (lpj=2794744) Sep 4 17:36:28.897763 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 4 17:36:28.897773 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Sep 4 17:36:28.897780 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Sep 4 17:36:28.897788 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 4 17:36:28.897796 kernel: Spectre V2 : Mitigation: Retpolines Sep 4 17:36:28.897803 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Sep 4 17:36:28.897811 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Sep 4 17:36:28.897830 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Sep 4 17:36:28.897838 kernel: RETBleed: Mitigation: untrained return thunk Sep 4 17:36:28.897845 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 4 17:36:28.897865 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 4 17:36:28.897882 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Sep 4 17:36:28.897925 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Sep 4 17:36:28.897949 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Sep 4 17:36:28.897965 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 4 17:36:28.897982 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 4 17:36:28.897998 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 4 17:36:28.898027 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 4 17:36:28.898048 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Sep 4 17:36:28.898072 kernel: Freeing SMP alternatives memory: 32K Sep 4 17:36:28.898080 kernel: pid_max: default: 32768 minimum: 301 Sep 4 17:36:28.898088 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 4 17:36:28.898098 kernel: landlock: Up and running. Sep 4 17:36:28.898106 kernel: SELinux: Initializing. Sep 4 17:36:28.898113 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 4 17:36:28.898121 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 4 17:36:28.898128 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Sep 4 17:36:28.898139 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:36:28.898147 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:36:28.898155 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:36:28.898162 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Sep 4 17:36:28.898170 kernel: ... version: 0 Sep 4 17:36:28.898177 kernel: ... bit width: 48 Sep 4 17:36:28.898184 kernel: ... generic registers: 6 Sep 4 17:36:28.898192 kernel: ... value mask: 0000ffffffffffff Sep 4 17:36:28.898199 kernel: ... max period: 00007fffffffffff Sep 4 17:36:28.898209 kernel: ... fixed-purpose events: 0 Sep 4 17:36:28.898217 kernel: ... event mask: 000000000000003f Sep 4 17:36:28.898224 kernel: signal: max sigframe size: 1776 Sep 4 17:36:28.898232 kernel: rcu: Hierarchical SRCU implementation. Sep 4 17:36:28.898240 kernel: rcu: Max phase no-delay instances is 400. 
Sep 4 17:36:28.898247 kernel: smp: Bringing up secondary CPUs ... Sep 4 17:36:28.898255 kernel: smpboot: x86: Booting SMP configuration: Sep 4 17:36:28.898262 kernel: .... node #0, CPUs: #1 #2 #3 Sep 4 17:36:28.898270 kernel: smp: Brought up 1 node, 4 CPUs Sep 4 17:36:28.898277 kernel: smpboot: Max logical packages: 1 Sep 4 17:36:28.898287 kernel: smpboot: Total of 4 processors activated (22357.95 BogoMIPS) Sep 4 17:36:28.898294 kernel: devtmpfs: initialized Sep 4 17:36:28.898302 kernel: x86/mm: Memory block size: 128MB Sep 4 17:36:28.898310 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Sep 4 17:36:28.898317 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Sep 4 17:36:28.898327 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Sep 4 17:36:28.898335 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Sep 4 17:36:28.898343 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Sep 4 17:36:28.898353 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 4 17:36:28.898361 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 4 17:36:28.898368 kernel: pinctrl core: initialized pinctrl subsystem Sep 4 17:36:28.898376 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 4 17:36:28.898383 kernel: audit: initializing netlink subsys (disabled) Sep 4 17:36:28.898391 kernel: audit: type=2000 audit(1725471388.149:1): state=initialized audit_enabled=0 res=1 Sep 4 17:36:28.898398 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 4 17:36:28.898405 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 4 17:36:28.898413 kernel: cpuidle: using governor menu Sep 4 17:36:28.898423 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 4 17:36:28.898430 kernel: dca service started, version 1.12.1 Sep 4 17:36:28.898460 kernel: PCI: Using configuration type 1 for base access Sep 4 17:36:28.898468 kernel: PCI: Using configuration type 1 for extended access Sep 4 17:36:28.898476 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 4 17:36:28.898486 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 4 17:36:28.898494 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 4 17:36:28.898502 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 4 17:36:28.898509 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 4 17:36:28.898520 kernel: ACPI: Added _OSI(Module Device) Sep 4 17:36:28.898527 kernel: ACPI: Added _OSI(Processor Device) Sep 4 17:36:28.898534 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Sep 4 17:36:28.898542 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 4 17:36:28.898550 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 4 17:36:28.898557 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Sep 4 17:36:28.898564 kernel: ACPI: Interpreter enabled Sep 4 17:36:28.898572 kernel: ACPI: PM: (supports S0 S3 S5) Sep 4 17:36:28.898579 kernel: ACPI: Using IOAPIC for interrupt routing Sep 4 17:36:28.898587 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 4 17:36:28.898597 kernel: PCI: Using E820 reservations for host bridge windows Sep 4 17:36:28.898604 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Sep 4 17:36:28.898612 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 4 17:36:28.898801 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 4 17:36:28.898814 kernel: acpiphp: Slot [3] registered Sep 4 17:36:28.898822 kernel: acpiphp: Slot [4] registered Sep 4 17:36:28.898829 kernel: acpiphp: Slot [5] registered Sep 4 17:36:28.898840 kernel: acpiphp: Slot [6] registered Sep 4 17:36:28.898848 kernel: acpiphp: Slot [7] registered Sep 4 17:36:28.898855 kernel: acpiphp: Slot [8] registered Sep 4 17:36:28.898863 kernel: acpiphp: Slot [9] registered Sep 4 17:36:28.898870 kernel: acpiphp: Slot [10] registered Sep 4 17:36:28.898878 kernel: acpiphp: Slot [11] registered Sep 4 17:36:28.898885 kernel: acpiphp: Slot [12] registered Sep 4 17:36:28.898893 kernel: acpiphp: Slot [13] registered Sep 4 17:36:28.898913 kernel: acpiphp: Slot [14] registered Sep 4 17:36:28.898921 kernel: acpiphp: Slot [15] registered Sep 4 17:36:28.898931 kernel: acpiphp: Slot [16] registered Sep 4 17:36:28.898938 kernel: acpiphp: Slot [17] registered Sep 4 17:36:28.898946 kernel: acpiphp: Slot [18] registered Sep 4 17:36:28.898953 kernel: acpiphp: Slot [19] registered Sep 4 17:36:28.898961 kernel: acpiphp: Slot [20] registered Sep 4 17:36:28.898968 kernel: acpiphp: Slot [21] registered Sep 4 17:36:28.898975 kernel: acpiphp: Slot [22] registered Sep 4 17:36:28.898983 kernel: acpiphp: Slot [23] registered Sep 4 17:36:28.898990 kernel: acpiphp: Slot [24] registered Sep 4 17:36:28.899000 kernel: acpiphp: Slot [25] registered Sep 4 17:36:28.899007 kernel: acpiphp: Slot [26] registered Sep 4 17:36:28.899020 kernel: acpiphp: Slot [27] registered Sep 4 17:36:28.899028 kernel: acpiphp: Slot [28] registered Sep 4 17:36:28.899036 kernel: acpiphp: Slot [29] registered Sep 4 17:36:28.899043 kernel: acpiphp: Slot [30] registered Sep 4 17:36:28.899051 kernel: acpiphp: Slot [31] registered Sep 4 17:36:28.899058 kernel: PCI host bridge to bus 0000:00 Sep 4 17:36:28.899203 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 4 17:36:28.899326 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 4 17:36:28.899493 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 4 
17:36:28.899610 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window] Sep 4 17:36:28.899725 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0x87fffffff window] Sep 4 17:36:28.899841 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 4 17:36:28.900032 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Sep 4 17:36:28.900195 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Sep 4 17:36:28.900337 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Sep 4 17:36:28.900470 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf] Sep 4 17:36:28.900633 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Sep 4 17:36:28.900761 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Sep 4 17:36:28.900964 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Sep 4 17:36:28.901108 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Sep 4 17:36:28.901252 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Sep 4 17:36:28.901391 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Sep 4 17:36:28.901522 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Sep 4 17:36:28.901655 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000 Sep 4 17:36:28.901780 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Sep 4 17:36:28.901922 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc1043000-0xc1043fff] Sep 4 17:36:28.902074 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Sep 4 17:36:28.902207 kernel: pci 0000:00:02.0: BAR 0: assigned to efifb Sep 4 17:36:28.902384 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 4 17:36:28.902523 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00 Sep 4 17:36:28.902650 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0a0-0xc0bf] Sep 4 17:36:28.902775 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Sep 4 17:36:28.902914 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Sep 4 17:36:28.903060 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Sep 4 17:36:28.903194 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Sep 4 17:36:28.903319 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Sep 4 17:36:28.903450 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Sep 4 17:36:28.903584 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000 Sep 4 17:36:28.903713 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Sep 4 17:36:28.903838 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc1040000-0xc1040fff] Sep 4 17:36:28.904007 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Sep 4 17:36:28.904152 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Sep 4 17:36:28.904184 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 4 17:36:28.904201 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 4 17:36:28.904209 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 4 17:36:28.904217 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 4 17:36:28.904225 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Sep 4 17:36:28.904232 kernel: iommu: Default domain type: Translated Sep 4 17:36:28.904240 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 4 17:36:28.904252 kernel: efivars: Registered efivars operations Sep 4 
17:36:28.904264 kernel: PCI: Using ACPI for IRQ routing Sep 4 17:36:28.904271 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 4 17:36:28.904279 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Sep 4 17:36:28.904286 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Sep 4 17:36:28.904294 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Sep 4 17:36:28.904301 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Sep 4 17:36:28.904432 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Sep 4 17:36:28.904558 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Sep 4 17:36:28.904687 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 4 17:36:28.904698 kernel: vgaarb: loaded Sep 4 17:36:28.904706 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 4 17:36:28.904713 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Sep 4 17:36:28.904721 kernel: clocksource: Switched to clocksource kvm-clock Sep 4 17:36:28.904729 kernel: VFS: Disk quotas dquot_6.6.0 Sep 4 17:36:28.904736 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 4 17:36:28.904744 kernel: pnp: PnP ACPI init Sep 4 17:36:28.904983 kernel: pnp 00:02: [dma 2] Sep 4 17:36:28.905002 kernel: pnp: PnP ACPI: found 6 devices Sep 4 17:36:28.905010 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 4 17:36:28.905024 kernel: NET: Registered PF_INET protocol family Sep 4 17:36:28.905032 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 4 17:36:28.905040 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 4 17:36:28.905048 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 4 17:36:28.905055 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 4 17:36:28.905063 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 4 17:36:28.905073 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 4 17:36:28.905081 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 4 17:36:28.905088 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 4 17:36:28.905096 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 4 17:36:28.905103 kernel: NET: Registered PF_XDP protocol family Sep 4 17:36:28.905236 kernel: pci 0000:00:05.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Sep 4 17:36:28.905363 kernel: pci 0000:00:05.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Sep 4 17:36:28.905480 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 4 17:36:28.905599 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 4 17:36:28.905715 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 4 17:36:28.905829 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window] Sep 4 17:36:28.905960 kernel: pci_bus 0000:00: resource 8 [mem 0x800000000-0x87fffffff window] Sep 4 17:36:28.906095 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Sep 4 17:36:28.906221 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Sep 4 17:36:28.906232 kernel: PCI: CLS 0 bytes, default 64 Sep 4 17:36:28.906239 kernel: Initialise system trusted keyrings Sep 4 17:36:28.906251 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 4 17:36:28.906259 kernel: Key type asymmetric 
registered Sep 4 17:36:28.906266 kernel: Asymmetric key parser 'x509' registered Sep 4 17:36:28.906274 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 4 17:36:28.906281 kernel: io scheduler mq-deadline registered Sep 4 17:36:28.906289 kernel: io scheduler kyber registered Sep 4 17:36:28.906297 kernel: io scheduler bfq registered Sep 4 17:36:28.906304 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 4 17:36:28.906312 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Sep 4 17:36:28.906322 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Sep 4 17:36:28.906330 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Sep 4 17:36:28.906338 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 4 17:36:28.906346 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 4 17:36:28.906369 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 4 17:36:28.906379 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 4 17:36:28.906387 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 4 17:36:28.906517 kernel: rtc_cmos 00:05: RTC can wake from S4 Sep 4 17:36:28.906529 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 4 17:36:28.906651 kernel: rtc_cmos 00:05: registered as rtc0 Sep 4 17:36:28.906771 kernel: rtc_cmos 00:05: setting system clock to 2024-09-04T17:36:28 UTC (1725471388) Sep 4 17:36:28.906890 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Sep 4 17:36:28.906925 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 4 17:36:28.906937 kernel: efifb: probing for efifb Sep 4 17:36:28.906946 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Sep 4 17:36:28.906954 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Sep 4 17:36:28.906961 kernel: efifb: scrolling: redraw Sep 4 17:36:28.906972 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Sep 4 17:36:28.906979 kernel: Console: switching to colour frame buffer device 100x37 Sep 4 17:36:28.906987 kernel: fb0: EFI VGA frame buffer device Sep 4 17:36:28.906995 kernel: pstore: Using crash dump compression: deflate Sep 4 17:36:28.907003 kernel: pstore: Registered efi_pstore as persistent store backend Sep 4 17:36:28.907011 kernel: NET: Registered PF_INET6 protocol family Sep 4 17:36:28.907026 kernel: Segment Routing with IPv6 Sep 4 17:36:28.907034 kernel: In-situ OAM (IOAM) with IPv6 Sep 4 17:36:28.907042 kernel: NET: Registered PF_PACKET protocol family Sep 4 17:36:28.907052 kernel: Key type dns_resolver registered Sep 4 17:36:28.907062 kernel: IPI shorthand broadcast: enabled Sep 4 17:36:28.907070 kernel: sched_clock: Marking stable (962002492, 112593515)->(1135219193, -60623186) Sep 4 17:36:28.907078 kernel: registered taskstats version 1 Sep 4 17:36:28.907086 kernel: Loading compiled-in X.509 certificates Sep 4 17:36:28.907118 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: 8669771ab5e11f458b79e6634fe685dacc266b18' Sep 4 17:36:28.907129 kernel: Key type .fscrypt registered Sep 4 17:36:28.907137 kernel: Key type fscrypt-provisioning registered Sep 4 17:36:28.907145 kernel: ima: No TPM chip found, activating TPM-bypass! 
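The rtc_cmos line above reports both the wall-clock time and its Unix timestamp, and the earlier audit entry audit(1725471388.149:1) carries the same epoch value. A quick check of the conversion:

    from datetime import datetime, timezone

    print(datetime.fromtimestamp(1725471388, tz=timezone.utc).isoformat())
    # 2024-09-04T17:36:28+00:00, the same instant as
    # "setting system clock to 2024-09-04T17:36:28 UTC (1725471388)"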
Sep 4 17:36:28.907153 kernel: ima: Allocated hash algorithm: sha1 Sep 4 17:36:28.907160 kernel: ima: No architecture policies found Sep 4 17:36:28.907168 kernel: clk: Disabling unused clocks Sep 4 17:36:28.907176 kernel: Freeing unused kernel image (initmem) memory: 42704K Sep 4 17:36:28.907184 kernel: Write protecting the kernel read-only data: 36864k Sep 4 17:36:28.907194 kernel: Freeing unused kernel image (rodata/data gap) memory: 1868K Sep 4 17:36:28.907202 kernel: Run /init as init process Sep 4 17:36:28.907210 kernel: with arguments: Sep 4 17:36:28.907218 kernel: /init Sep 4 17:36:28.907225 kernel: with environment: Sep 4 17:36:28.907233 kernel: HOME=/ Sep 4 17:36:28.907241 kernel: TERM=linux Sep 4 17:36:28.907249 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 4 17:36:28.907259 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 4 17:36:28.907271 systemd[1]: Detected virtualization kvm. Sep 4 17:36:28.907280 systemd[1]: Detected architecture x86-64. Sep 4 17:36:28.907288 systemd[1]: Running in initrd. Sep 4 17:36:28.907296 systemd[1]: No hostname configured, using default hostname. Sep 4 17:36:28.907304 systemd[1]: Hostname set to . Sep 4 17:36:28.907312 systemd[1]: Initializing machine ID from VM UUID. Sep 4 17:36:28.907321 systemd[1]: Queued start job for default target initrd.target. Sep 4 17:36:28.907331 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:36:28.907340 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:36:28.907349 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 4 17:36:28.907357 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 17:36:28.907366 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 4 17:36:28.907375 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 4 17:36:28.907385 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 4 17:36:28.907396 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 4 17:36:28.907404 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:36:28.907413 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:36:28.907421 systemd[1]: Reached target paths.target - Path Units. Sep 4 17:36:28.907429 systemd[1]: Reached target slices.target - Slice Units. Sep 4 17:36:28.907438 systemd[1]: Reached target swap.target - Swaps. Sep 4 17:36:28.907446 systemd[1]: Reached target timers.target - Timer Units. Sep 4 17:36:28.907454 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 17:36:28.907465 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 17:36:28.907474 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 4 17:36:28.907482 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Sep 4 17:36:28.907490 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:36:28.907499 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 17:36:28.907507 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:36:28.907515 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 17:36:28.907524 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 4 17:36:28.907532 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 17:36:28.907545 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 4 17:36:28.907553 systemd[1]: Starting systemd-fsck-usr.service... Sep 4 17:36:28.907561 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 17:36:28.907570 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 17:36:28.907578 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:36:28.907586 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 4 17:36:28.907594 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:36:28.907603 systemd[1]: Finished systemd-fsck-usr.service. Sep 4 17:36:28.907633 systemd-journald[191]: Collecting audit messages is disabled. Sep 4 17:36:28.907654 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 17:36:28.907663 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:36:28.907672 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:36:28.907681 systemd-journald[191]: Journal started Sep 4 17:36:28.907709 systemd-journald[191]: Runtime Journal (/run/log/journal/8546b5678e9d4a7f9fc078ef3364b879) is 6.0M, max 48.3M, 42.2M free. Sep 4 17:36:28.907746 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 17:36:28.900926 systemd-modules-load[194]: Inserted module 'overlay' Sep 4 17:36:28.910294 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 17:36:28.913885 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 17:36:28.916066 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 17:36:28.926979 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:36:28.930114 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:36:28.932937 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 17:36:28.936862 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 4 17:36:28.938054 systemd-modules-load[194]: Inserted module 'br_netfilter' Sep 4 17:36:28.939000 kernel: Bridge firewalling registered Sep 4 17:36:28.952092 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 4 17:36:28.953380 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 17:36:28.956983 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Sep 4 17:36:28.963990 dracut-cmdline[223]: dracut-dracut-053 Sep 4 17:36:28.976890 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d Sep 4 17:36:28.989216 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:36:28.998103 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 17:36:29.032244 systemd-resolved[264]: Positive Trust Anchors: Sep 4 17:36:29.032258 systemd-resolved[264]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 17:36:29.032298 systemd-resolved[264]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 17:36:29.035169 systemd-resolved[264]: Defaulting to hostname 'linux'. Sep 4 17:36:29.036485 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 17:36:29.042338 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:36:29.072940 kernel: SCSI subsystem initialized Sep 4 17:36:29.082929 kernel: Loading iSCSI transport class v2.0-870. Sep 4 17:36:29.093936 kernel: iscsi: registered transport (tcp) Sep 4 17:36:29.115932 kernel: iscsi: registered transport (qla4xxx) Sep 4 17:36:29.115967 kernel: QLogic iSCSI HBA Driver Sep 4 17:36:29.170131 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 4 17:36:29.183061 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 4 17:36:29.208628 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 4 17:36:29.208672 kernel: device-mapper: uevent: version 1.0.3 Sep 4 17:36:29.208685 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 4 17:36:29.250928 kernel: raid6: avx2x4 gen() 30417 MB/s Sep 4 17:36:29.267921 kernel: raid6: avx2x2 gen() 30858 MB/s Sep 4 17:36:29.284993 kernel: raid6: avx2x1 gen() 26078 MB/s Sep 4 17:36:29.285015 kernel: raid6: using algorithm avx2x2 gen() 30858 MB/s Sep 4 17:36:29.303011 kernel: raid6: .... xor() 19995 MB/s, rmw enabled Sep 4 17:36:29.303026 kernel: raid6: using avx2x2 recovery algorithm Sep 4 17:36:29.322927 kernel: xor: automatically using best checksumming function avx Sep 4 17:36:29.472931 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 4 17:36:29.488335 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 4 17:36:29.498082 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:36:29.512785 systemd-udevd[412]: Using default interface naming scheme 'v255'. Sep 4 17:36:29.518570 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Sep 4 17:36:29.526107 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 4 17:36:29.540391 dracut-pre-trigger[415]: rd.md=0: removing MD RAID activation Sep 4 17:36:29.576488 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 17:36:29.584110 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 17:36:29.653533 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:36:29.665039 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 4 17:36:29.675518 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 4 17:36:29.678510 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 17:36:29.681043 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:36:29.683678 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 17:36:29.695924 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 4 17:36:29.696133 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 4 17:36:29.693132 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 4 17:36:29.703513 kernel: cryptd: max_cpu_qlen set to 1000 Sep 4 17:36:29.710526 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 4 17:36:29.716114 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 4 17:36:29.716146 kernel: GPT:9289727 != 19775487 Sep 4 17:36:29.716167 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 4 17:36:29.716204 kernel: GPT:9289727 != 19775487 Sep 4 17:36:29.716223 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 4 17:36:29.716245 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 17:36:29.726946 kernel: AVX2 version of gcm_enc/dec engaged. Sep 4 17:36:29.727014 kernel: AES CTR mode by8 optimization enabled Sep 4 17:36:29.729722 kernel: libata version 3.00 loaded. Sep 4 17:36:29.729760 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 17:36:29.730133 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:36:29.734444 kernel: ata_piix 0000:00:01.1: version 2.13 Sep 4 17:36:29.732026 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:36:29.736989 kernel: scsi host0: ata_piix Sep 4 17:36:29.738110 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:36:29.740943 kernel: scsi host1: ata_piix Sep 4 17:36:29.741140 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Sep 4 17:36:29.741154 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Sep 4 17:36:29.738660 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:36:29.744598 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:36:29.756103 kernel: BTRFS: device fsid 0dc40443-7f77-4fa7-b5e4-579d4bba0772 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (473) Sep 4 17:36:29.756126 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (461) Sep 4 17:36:29.760185 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:36:29.775527 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
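The GPT complaints above (GPT:9289727 != 19775487) are the usual sign of a disk image built for a smaller disk being attached to a larger virtual disk, so the backup GPT header still sits at the image's original last sector rather than at the end of vda. A sketch of the arithmetic using only the sector counts from this log:

    SECTOR = 512
    image_last_sector = 9289727     # where the backup GPT header was found (GPT:9289727)
    disk_sectors = 19775488         # vda: 19775488 512-byte logical blocks

    print(f"image ~{(image_last_sector + 1) * SECTOR / 2**30:.2f} GiB")   # ~4.43 GiB
    print(f"disk  ~{disk_sectors * SECTOR / 2**30:.2f} GiB")              # ~9.43 GiB
    # Relocating the backup header to the real end of the disk (for example sgdisk -e,
    # or a first-boot partition-resize service) is the usual remedy; the kernel here
    # only reports the mismatch and carries on.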
Sep 4 17:36:29.777100 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:36:29.791627 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 4 17:36:29.791730 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 4 17:36:29.799373 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 4 17:36:29.803921 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 4 17:36:29.818130 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 4 17:36:29.819092 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:36:29.830465 disk-uuid[541]: Primary Header is updated. Sep 4 17:36:29.830465 disk-uuid[541]: Secondary Entries is updated. Sep 4 17:36:29.830465 disk-uuid[541]: Secondary Header is updated. Sep 4 17:36:29.835934 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 17:36:29.839928 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 17:36:29.841505 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:36:29.905009 kernel: ata2: found unknown device (class 0) Sep 4 17:36:29.905946 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 4 17:36:29.907953 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 4 17:36:29.959966 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 4 17:36:29.960210 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 4 17:36:29.974040 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Sep 4 17:36:30.841944 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 17:36:30.842017 disk-uuid[544]: The operation has completed successfully. Sep 4 17:36:30.871303 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 4 17:36:30.871436 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 4 17:36:30.896200 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 4 17:36:30.899766 sh[579]: Success Sep 4 17:36:30.912932 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Sep 4 17:36:30.946116 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 4 17:36:30.963464 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 4 17:36:30.966768 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 4 17:36:30.978739 kernel: BTRFS info (device dm-0): first mount of filesystem 0dc40443-7f77-4fa7-b5e4-579d4bba0772 Sep 4 17:36:30.978768 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:36:30.978787 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 4 17:36:30.980524 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 4 17:36:30.980539 kernel: BTRFS info (device dm-0): using free space tree Sep 4 17:36:30.985317 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 4 17:36:30.986852 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 4 17:36:30.994045 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Sep 4 17:36:30.996614 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 4 17:36:31.005375 kernel: BTRFS info (device vda6): first mount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d Sep 4 17:36:31.005400 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:36:31.005411 kernel: BTRFS info (device vda6): using free space tree Sep 4 17:36:31.008930 kernel: BTRFS info (device vda6): auto enabling async discard Sep 4 17:36:31.018444 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 4 17:36:31.020211 kernel: BTRFS info (device vda6): last unmount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d Sep 4 17:36:31.028800 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 4 17:36:31.036066 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 4 17:36:31.098744 ignition[667]: Ignition 2.19.0 Sep 4 17:36:31.099195 ignition[667]: Stage: fetch-offline Sep 4 17:36:31.099243 ignition[667]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:36:31.099253 ignition[667]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 17:36:31.099350 ignition[667]: parsed url from cmdline: "" Sep 4 17:36:31.099355 ignition[667]: no config URL provided Sep 4 17:36:31.099360 ignition[667]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 17:36:31.099370 ignition[667]: no config at "/usr/lib/ignition/user.ign" Sep 4 17:36:31.099397 ignition[667]: op(1): [started] loading QEMU firmware config module Sep 4 17:36:31.099403 ignition[667]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 4 17:36:31.109671 ignition[667]: op(1): [finished] loading QEMU firmware config module Sep 4 17:36:31.128579 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 17:36:31.140100 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 17:36:31.152470 ignition[667]: parsing config with SHA512: e0bee5363af76e529fb2be8c6cbb10265fa49534f3a8a46cdb9e0e4532ea44ab11d6f8f8a14e1e99b49f752fe8a94b1fa24ccfcc1d26fdb6170671fe4706aa6b Sep 4 17:36:31.156493 unknown[667]: fetched base config from "system" Sep 4 17:36:31.156507 unknown[667]: fetched user config from "qemu" Sep 4 17:36:31.156927 ignition[667]: fetch-offline: fetch-offline passed Sep 4 17:36:31.157002 ignition[667]: Ignition finished successfully Sep 4 17:36:31.159488 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 17:36:31.161357 systemd-networkd[768]: lo: Link UP Sep 4 17:36:31.161369 systemd-networkd[768]: lo: Gained carrier Sep 4 17:36:31.162989 systemd-networkd[768]: Enumeration completed Sep 4 17:36:31.163385 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:36:31.163389 systemd-networkd[768]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 17:36:31.165069 systemd-networkd[768]: eth0: Link UP Sep 4 17:36:31.165073 systemd-networkd[768]: eth0: Gained carrier Sep 4 17:36:31.165080 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:36:31.167049 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 17:36:31.173690 systemd[1]: Reached target network.target - Network. 
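With no config URL on the command line, the fetch-offline stage loads qemu_fw_cfg and reads the user config that QEMU supplies, logging a SHA512 for the config it is about to parse. As a hedged illustration, the digest could be reproduced from a byte-identical local copy of that config (the file name below is hypothetical):

    import hashlib

    # "ignition-user.json" is a hypothetical local copy of the config QEMU serves.
    with open("ignition-user.json", "rb") as f:
        digest = hashlib.sha512(f.read()).hexdigest()

    print(digest)   # matches the logged "parsing config with SHA512: e0bee5..." value
                    # only if the bytes are identical to what was read over fw_cfg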
Sep 4 17:36:31.175475 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 4 17:36:31.188114 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 4 17:36:31.191946 systemd-networkd[768]: eth0: DHCPv4 address 10.0.0.18/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 4 17:36:31.202084 ignition[772]: Ignition 2.19.0 Sep 4 17:36:31.202095 ignition[772]: Stage: kargs Sep 4 17:36:31.202264 ignition[772]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:36:31.202277 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 17:36:31.203147 ignition[772]: kargs: kargs passed Sep 4 17:36:31.203191 ignition[772]: Ignition finished successfully Sep 4 17:36:31.209991 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 4 17:36:31.219083 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 4 17:36:31.232845 ignition[782]: Ignition 2.19.0 Sep 4 17:36:31.232858 ignition[782]: Stage: disks Sep 4 17:36:31.233059 ignition[782]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:36:31.233070 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 17:36:31.234049 ignition[782]: disks: disks passed Sep 4 17:36:31.236446 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 4 17:36:31.234096 ignition[782]: Ignition finished successfully Sep 4 17:36:31.238415 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 4 17:36:31.240239 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 4 17:36:31.241502 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 17:36:31.243235 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 17:36:31.245347 systemd[1]: Reached target basic.target - Basic System. Sep 4 17:36:31.265129 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 4 17:36:31.277386 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 4 17:36:31.284407 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 4 17:36:31.293029 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 4 17:36:31.380021 kernel: EXT4-fs (vda9): mounted filesystem bdbe0f61-2675-40b7-b9ae-5653402e9b23 r/w with ordered data mode. Quota mode: none. Sep 4 17:36:31.380409 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 4 17:36:31.382708 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 4 17:36:31.393017 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 17:36:31.395615 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 4 17:36:31.398042 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 4 17:36:31.403425 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (800) Sep 4 17:36:31.403450 kernel: BTRFS info (device vda6): first mount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d Sep 4 17:36:31.403462 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:36:31.403473 kernel: BTRFS info (device vda6): using free space tree Sep 4 17:36:31.398091 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Sep 4 17:36:31.402831 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 17:36:31.405927 kernel: BTRFS info (device vda6): auto enabling async discard Sep 4 17:36:31.410577 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 17:36:31.415329 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 4 17:36:31.417250 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 4 17:36:31.455418 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory Sep 4 17:36:31.460871 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory Sep 4 17:36:31.466009 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory Sep 4 17:36:31.470996 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory Sep 4 17:36:31.561031 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 4 17:36:31.573007 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 4 17:36:31.575597 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 4 17:36:31.583957 kernel: BTRFS info (device vda6): last unmount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d Sep 4 17:36:31.599895 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 4 17:36:31.608734 ignition[915]: INFO : Ignition 2.19.0 Sep 4 17:36:31.608734 ignition[915]: INFO : Stage: mount Sep 4 17:36:31.610427 ignition[915]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:36:31.610427 ignition[915]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 17:36:31.610427 ignition[915]: INFO : mount: mount passed Sep 4 17:36:31.610427 ignition[915]: INFO : Ignition finished successfully Sep 4 17:36:31.612175 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 4 17:36:31.625998 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 4 17:36:31.978151 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 4 17:36:31.987206 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 17:36:31.994397 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (927) Sep 4 17:36:31.994425 kernel: BTRFS info (device vda6): first mount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d Sep 4 17:36:31.994437 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:36:31.995925 kernel: BTRFS info (device vda6): using free space tree Sep 4 17:36:31.998927 kernel: BTRFS info (device vda6): auto enabling async discard Sep 4 17:36:31.999838 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 4 17:36:32.031452 ignition[944]: INFO : Ignition 2.19.0 Sep 4 17:36:32.031452 ignition[944]: INFO : Stage: files Sep 4 17:36:32.033210 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:36:32.033210 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 17:36:32.033210 ignition[944]: DEBUG : files: compiled without relabeling support, skipping Sep 4 17:36:32.036754 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 4 17:36:32.036754 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 4 17:36:32.036754 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 4 17:36:32.036754 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 4 17:36:32.036754 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 4 17:36:32.036033 unknown[944]: wrote ssh authorized keys file for user: core Sep 4 17:36:32.044996 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 4 17:36:32.044996 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 4 17:36:32.044996 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 4 17:36:32.044996 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 4 17:36:32.100994 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 4 17:36:32.239506 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 4 17:36:32.241509 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 4 17:36:32.241509 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 4 17:36:32.639111 systemd-networkd[768]: eth0: Gained IPv6LL Sep 4 17:36:32.719001 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Sep 4 17:36:32.867634 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 4 17:36:32.867634 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Sep 4 17:36:32.871450 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Sep 4 17:36:32.871450 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 4 17:36:32.871450 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 4 17:36:32.871450 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 17:36:32.871450 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 
4 17:36:32.871450 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 17:36:32.871450 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 17:36:32.871450 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 17:36:32.871450 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 17:36:32.871450 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Sep 4 17:36:32.871450 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Sep 4 17:36:32.871450 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Sep 4 17:36:32.871450 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1 Sep 4 17:36:33.144284 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Sep 4 17:36:33.801432 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Sep 4 17:36:33.801432 ignition[944]: INFO : files: op(d): [started] processing unit "containerd.service" Sep 4 17:36:33.805179 ignition[944]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 4 17:36:33.807786 ignition[944]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 4 17:36:33.807786 ignition[944]: INFO : files: op(d): [finished] processing unit "containerd.service" Sep 4 17:36:33.807786 ignition[944]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Sep 4 17:36:33.812382 ignition[944]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 17:36:33.814273 ignition[944]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 17:36:33.814273 ignition[944]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Sep 4 17:36:33.814273 ignition[944]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Sep 4 17:36:33.818556 ignition[944]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 4 17:36:33.820542 ignition[944]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 4 17:36:33.820542 ignition[944]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Sep 4 17:36:33.823688 ignition[944]: INFO : files: op(13): [started] setting preset to 
disabled for "coreos-metadata.service" Sep 4 17:36:33.850779 ignition[944]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 4 17:36:33.859288 ignition[944]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 4 17:36:33.860979 ignition[944]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Sep 4 17:36:33.860979 ignition[944]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service" Sep 4 17:36:33.860979 ignition[944]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service" Sep 4 17:36:33.860979 ignition[944]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 4 17:36:33.860979 ignition[944]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 4 17:36:33.860979 ignition[944]: INFO : files: files passed Sep 4 17:36:33.860979 ignition[944]: INFO : Ignition finished successfully Sep 4 17:36:33.872714 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 4 17:36:33.885105 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 4 17:36:33.887103 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 4 17:36:33.888916 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 4 17:36:33.889071 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 4 17:36:33.901103 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory Sep 4 17:36:33.904145 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:36:33.904145 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:36:33.907449 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:36:33.911196 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 17:36:33.913108 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 4 17:36:33.924100 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 4 17:36:33.951287 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 4 17:36:33.951416 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 4 17:36:33.953728 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 4 17:36:33.955812 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 4 17:36:33.957793 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 4 17:36:33.972134 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 4 17:36:33.988830 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 17:36:34.002050 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 4 17:36:34.011159 systemd[1]: Stopped target network.target - Network. Sep 4 17:36:34.012164 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
Sep 4 17:36:34.014109 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:36:34.016421 systemd[1]: Stopped target timers.target - Timer Units. Sep 4 17:36:34.018446 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 4 17:36:34.018562 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 17:36:34.020936 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 4 17:36:34.022501 systemd[1]: Stopped target basic.target - Basic System. Sep 4 17:36:34.024559 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 4 17:36:34.026616 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 17:36:34.028649 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 4 17:36:34.030853 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 4 17:36:34.033033 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 17:36:34.035334 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 4 17:36:34.037363 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 4 17:36:34.039585 systemd[1]: Stopped target swap.target - Swaps. Sep 4 17:36:34.041383 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 4 17:36:34.041499 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 4 17:36:34.043840 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:36:34.045589 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:36:34.047505 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 4 17:36:34.047657 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:36:34.049716 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 4 17:36:34.049962 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 4 17:36:34.052245 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 4 17:36:34.052419 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 17:36:34.054262 systemd[1]: Stopped target paths.target - Path Units. Sep 4 17:36:34.055991 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 4 17:36:34.060022 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:36:34.061717 systemd[1]: Stopped target slices.target - Slice Units. Sep 4 17:36:34.063653 systemd[1]: Stopped target sockets.target - Socket Units. Sep 4 17:36:34.065461 systemd[1]: iscsid.socket: Deactivated successfully. Sep 4 17:36:34.065573 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 17:36:34.067503 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 4 17:36:34.067603 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 17:36:34.070037 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 4 17:36:34.070164 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 17:36:34.072101 systemd[1]: ignition-files.service: Deactivated successfully. Sep 4 17:36:34.072211 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 4 17:36:34.082054 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Sep 4 17:36:34.083965 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 4 17:36:34.084091 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:36:34.087152 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 4 17:36:34.088652 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 4 17:36:34.090722 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 4 17:36:34.092402 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 4 17:36:34.092631 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:36:34.094837 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 4 17:36:34.095071 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 17:36:34.098965 systemd-networkd[768]: eth0: DHCPv6 lease lost Sep 4 17:36:34.101796 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 4 17:36:34.102980 ignition[999]: INFO : Ignition 2.19.0 Sep 4 17:36:34.102980 ignition[999]: INFO : Stage: umount Sep 4 17:36:34.102980 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:36:34.102980 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 17:36:34.101973 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 4 17:36:34.111672 ignition[999]: INFO : umount: umount passed Sep 4 17:36:34.111672 ignition[999]: INFO : Ignition finished successfully Sep 4 17:36:34.106318 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 4 17:36:34.106444 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 4 17:36:34.110296 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 4 17:36:34.110483 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 4 17:36:34.112148 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 4 17:36:34.112297 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 4 17:36:34.114607 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 4 17:36:34.118451 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 4 17:36:34.118531 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:36:34.120067 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 4 17:36:34.120121 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 4 17:36:34.122152 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 4 17:36:34.122229 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 4 17:36:34.124284 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 4 17:36:34.124335 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 4 17:36:34.126367 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 4 17:36:34.126431 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 4 17:36:34.138075 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 4 17:36:34.139292 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 4 17:36:34.139351 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 17:36:34.141809 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 17:36:34.141871 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Sep 4 17:36:34.144263 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 4 17:36:34.144338 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 4 17:36:34.146956 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 4 17:36:34.147027 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 17:36:34.149422 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:36:34.165209 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 4 17:36:34.165384 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 4 17:36:34.169911 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 4 17:36:34.170111 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:36:34.172747 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 4 17:36:34.172843 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 4 17:36:34.174842 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 4 17:36:34.174918 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:36:34.176977 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 4 17:36:34.177033 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 4 17:36:34.179315 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 4 17:36:34.179365 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 4 17:36:34.181357 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 17:36:34.181407 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:36:34.201096 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 4 17:36:34.201191 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 4 17:36:34.201256 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:36:34.201581 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:36:34.201630 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:36:34.212332 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 4 17:36:34.212503 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 4 17:36:34.321283 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 4 17:36:34.321474 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 4 17:36:34.322837 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 4 17:36:34.324244 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 4 17:36:34.324312 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 4 17:36:34.338105 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 4 17:36:34.346644 systemd[1]: Switching root. Sep 4 17:36:34.371929 systemd-journald[191]: Received SIGTERM from PID 1 (systemd). 
Sep 4 17:36:34.371966 systemd-journald[191]: Journal stopped Sep 4 17:36:35.639942 kernel: SELinux: policy capability network_peer_controls=1 Sep 4 17:36:35.640016 kernel: SELinux: policy capability open_perms=1 Sep 4 17:36:35.640035 kernel: SELinux: policy capability extended_socket_class=1 Sep 4 17:36:35.640049 kernel: SELinux: policy capability always_check_network=0 Sep 4 17:36:35.640060 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 4 17:36:35.640071 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 4 17:36:35.640083 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 4 17:36:35.640099 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 4 17:36:35.640111 kernel: audit: type=1403 audit(1725471394.865:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 4 17:36:35.640131 systemd[1]: Successfully loaded SELinux policy in 43.410ms. Sep 4 17:36:35.640157 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.172ms. Sep 4 17:36:35.640170 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 4 17:36:35.640183 systemd[1]: Detected virtualization kvm. Sep 4 17:36:35.640195 systemd[1]: Detected architecture x86-64. Sep 4 17:36:35.640207 systemd[1]: Detected first boot. Sep 4 17:36:35.640219 systemd[1]: Initializing machine ID from VM UUID. Sep 4 17:36:35.640234 zram_generator::config[1060]: No configuration found. Sep 4 17:36:35.640253 systemd[1]: Populated /etc with preset unit settings. Sep 4 17:36:35.640265 systemd[1]: Queued start job for default target multi-user.target. Sep 4 17:36:35.640277 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 4 17:36:35.640290 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 4 17:36:35.640302 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 4 17:36:35.640314 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 4 17:36:35.640327 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 4 17:36:35.640343 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 4 17:36:35.640356 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 4 17:36:35.640368 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 4 17:36:35.640380 systemd[1]: Created slice user.slice - User and Session Slice. Sep 4 17:36:35.640392 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:36:35.640404 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:36:35.640417 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 4 17:36:35.640429 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 4 17:36:35.640441 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 4 17:36:35.640457 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
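"Detected first boot" together with "Initializing machine ID from VM UUID" means systemd seeded the machine ID from the hypervisor-provided DMI product UUID rather than generating a random one; the journal directory name 8546b5678e9d4a7f9fc078ef3364b879 that appears a little further down is exactly such an ID. A rough approximation of the derivation (assumption: the ID is simply the dash-stripped, lower-cased product_uuid, which matches the observable result on KVM guests but is not necessarily systemd's exact code path):

```python
from pathlib import Path
import re

def machine_id_from_vm_uuid() -> str:
    # Rough sketch: read the DMI product UUID exposed by the hypervisor and
    # normalize it into the 32-hex-character machine-id format.
    uuid = Path("/sys/class/dmi/id/product_uuid").read_text().strip()
    mid = uuid.replace("-", "").lower()
    if not re.fullmatch(r"[0-9a-f]{32}", mid):
        raise ValueError(f"unexpected DMI UUID: {uuid!r}")
    return mid

if __name__ == "__main__":
    print(machine_id_from_vm_uuid())  # requires root to read product_uuid
```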
Sep 4 17:36:35.640469 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 4 17:36:35.640481 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:36:35.640494 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 4 17:36:35.640506 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:36:35.640518 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 17:36:35.640530 systemd[1]: Reached target slices.target - Slice Units. Sep 4 17:36:35.640542 systemd[1]: Reached target swap.target - Swaps. Sep 4 17:36:35.640557 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 4 17:36:35.640569 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 4 17:36:35.640582 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 4 17:36:35.640594 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 4 17:36:35.640607 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:36:35.640620 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 17:36:35.640632 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:36:35.640644 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 4 17:36:35.640657 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 4 17:36:35.640672 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 4 17:36:35.640684 systemd[1]: Mounting media.mount - External Media Directory... Sep 4 17:36:35.640696 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:36:35.640709 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 4 17:36:35.640721 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 4 17:36:35.640733 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 4 17:36:35.640745 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 4 17:36:35.640757 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:36:35.640769 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 17:36:35.640784 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 4 17:36:35.640796 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:36:35.640809 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 17:36:35.640821 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:36:35.640838 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 4 17:36:35.640858 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:36:35.640872 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 4 17:36:35.640886 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Sep 4 17:36:35.641963 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) 
Sep 4 17:36:35.642006 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 17:36:35.642021 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 17:36:35.642034 kernel: loop: module loaded Sep 4 17:36:35.642048 kernel: fuse: init (API version 7.39) Sep 4 17:36:35.642061 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 4 17:36:35.642073 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 4 17:36:35.642086 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 17:36:35.642137 systemd-journald[1144]: Collecting audit messages is disabled. Sep 4 17:36:35.642172 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:36:35.642185 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 4 17:36:35.642197 kernel: ACPI: bus type drm_connector registered Sep 4 17:36:35.642209 systemd-journald[1144]: Journal started Sep 4 17:36:35.642231 systemd-journald[1144]: Runtime Journal (/run/log/journal/8546b5678e9d4a7f9fc078ef3364b879) is 6.0M, max 48.3M, 42.2M free. Sep 4 17:36:35.648019 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 17:36:35.646876 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 4 17:36:35.648449 systemd[1]: Mounted media.mount - External Media Directory. Sep 4 17:36:35.649676 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 4 17:36:35.651068 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 4 17:36:35.652327 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 4 17:36:35.653796 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 4 17:36:35.655777 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:36:35.657383 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 4 17:36:35.657604 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 4 17:36:35.659170 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:36:35.659396 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:36:35.660894 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 17:36:35.661120 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 17:36:35.662548 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:36:35.662763 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:36:35.664341 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 4 17:36:35.664557 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 4 17:36:35.666065 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:36:35.666277 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:36:35.667780 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 17:36:35.669386 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 17:36:35.671373 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 4 17:36:35.686801 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Sep 4 17:36:35.695023 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 4 17:36:35.697413 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 4 17:36:35.698544 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 4 17:36:35.703196 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 4 17:36:35.707626 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 4 17:36:35.708866 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 17:36:35.712251 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 4 17:36:35.713567 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 17:36:35.716691 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:36:35.717838 systemd-journald[1144]: Time spent on flushing to /var/log/journal/8546b5678e9d4a7f9fc078ef3364b879 is 18.342ms for 979 entries. Sep 4 17:36:35.717838 systemd-journald[1144]: System Journal (/var/log/journal/8546b5678e9d4a7f9fc078ef3364b879) is 8.0M, max 195.6M, 187.6M free. Sep 4 17:36:35.779023 systemd-journald[1144]: Received client request to flush runtime journal. Sep 4 17:36:35.728200 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 17:36:35.734256 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:36:35.735798 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 4 17:36:35.737253 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 4 17:36:35.738977 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 4 17:36:35.743169 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 4 17:36:35.762472 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. Sep 4 17:36:35.762488 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. Sep 4 17:36:35.774154 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 4 17:36:35.775765 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:36:35.777387 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 17:36:35.781806 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 4 17:36:35.792117 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 4 17:36:35.795195 udevadm[1204]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 4 17:36:35.819966 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 4 17:36:35.832055 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 17:36:35.848853 systemd-tmpfiles[1220]: ACLs are not supported, ignoring. Sep 4 17:36:35.848875 systemd-tmpfiles[1220]: ACLs are not supported, ignoring. Sep 4 17:36:35.855188 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Sep 4 17:36:36.535796 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 4 17:36:36.549046 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:36:36.576836 systemd-udevd[1228]: Using default interface naming scheme 'v255'. Sep 4 17:36:36.594369 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:36:36.605119 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 17:36:36.612057 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 4 17:36:36.676234 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Sep 4 17:36:36.687236 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1230) Sep 4 17:36:36.689924 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1230) Sep 4 17:36:36.717073 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 4 17:36:36.753591 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 4 17:36:36.753645 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1238) Sep 4 17:36:36.764063 kernel: ACPI: button: Power Button [PWRF] Sep 4 17:36:36.809922 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 4 17:36:36.834649 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 0 Sep 4 17:36:36.842801 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 4 17:36:36.876763 systemd-networkd[1233]: lo: Link UP Sep 4 17:36:36.876780 systemd-networkd[1233]: lo: Gained carrier Sep 4 17:36:36.878244 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:36:36.879523 kernel: mousedev: PS/2 mouse device common for all mice Sep 4 17:36:36.881630 systemd-networkd[1233]: Enumeration completed Sep 4 17:36:36.882804 systemd-networkd[1233]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:36:36.882827 systemd-networkd[1233]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 17:36:36.883568 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 17:36:36.911355 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 4 17:36:36.914646 systemd-networkd[1233]: eth0: Link UP Sep 4 17:36:36.915121 systemd-networkd[1233]: eth0: Gained carrier Sep 4 17:36:36.915168 systemd-networkd[1233]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:36:36.943748 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:36:36.944129 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:36:36.947374 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 4 17:36:36.990338 systemd-networkd[1233]: eth0: DHCPv4 address 10.0.0.18/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 4 17:36:37.001242 kernel: kvm_amd: TSC scaling supported Sep 4 17:36:37.001318 kernel: kvm_amd: Nested Virtualization enabled Sep 4 17:36:37.001337 kernel: kvm_amd: Nested Paging enabled Sep 4 17:36:37.002478 kernel: kvm_amd: LBR virtualization supported Sep 4 17:36:37.002501 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 4 17:36:37.003165 kernel: kvm_amd: Virtual GIF supported Sep 4 17:36:37.024978 kernel: EDAC MC: Ver: 3.0.0 Sep 4 17:36:37.033686 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:36:37.053534 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 4 17:36:37.068182 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 4 17:36:37.078611 lvm[1278]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 17:36:37.109225 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 4 17:36:37.110815 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:36:37.121046 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 4 17:36:37.126237 lvm[1281]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 17:36:37.165542 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 4 17:36:37.167137 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 4 17:36:37.168435 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 4 17:36:37.168469 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 17:36:37.169543 systemd[1]: Reached target machines.target - Containers. Sep 4 17:36:37.172092 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 4 17:36:37.184087 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 4 17:36:37.186988 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 4 17:36:37.188199 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:36:37.189415 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 4 17:36:37.192133 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 4 17:36:37.196996 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 4 17:36:37.199340 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 4 17:36:37.211458 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 4 17:36:37.212922 kernel: loop0: detected capacity change from 0 to 209816 Sep 4 17:36:37.222118 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 4 17:36:37.223152 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
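The lease is the same 10.0.0.18/16 from 10.0.0.1 that the initrd-stage networkd obtained earlier, now re-acquired by the real system's networkd. Purely as an illustration of what that /16 covers, the standard ipaddress module is enough:

```python
import ipaddress

iface = ipaddress.ip_interface("10.0.0.18/16")   # address/prefix as logged
gateway = ipaddress.ip_address("10.0.0.1")       # gateway and DHCP server as logged

print(iface.network)                  # 10.0.0.0/16
print(iface.network.num_addresses)    # 65536
print(gateway in iface.network)       # True -> gateway is on-link, as expected
```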
Sep 4 17:36:37.234926 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 4 17:36:37.260930 kernel: loop1: detected capacity change from 0 to 89336 Sep 4 17:36:37.300070 kernel: loop2: detected capacity change from 0 to 140728 Sep 4 17:36:37.372936 kernel: loop3: detected capacity change from 0 to 209816 Sep 4 17:36:37.386939 kernel: loop4: detected capacity change from 0 to 89336 Sep 4 17:36:37.394941 kernel: loop5: detected capacity change from 0 to 140728 Sep 4 17:36:37.405736 (sd-merge)[1307]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 4 17:36:37.406357 (sd-merge)[1307]: Merged extensions into '/usr'. Sep 4 17:36:37.412405 systemd[1]: Reloading requested from client PID 1293 ('systemd-sysext') (unit systemd-sysext.service)... Sep 4 17:36:37.412437 systemd[1]: Reloading... Sep 4 17:36:37.511937 zram_generator::config[1333]: No configuration found. Sep 4 17:36:37.589836 ldconfig[1290]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 4 17:36:37.660082 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:36:37.732956 systemd[1]: Reloading finished in 319 ms. Sep 4 17:36:37.753362 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 4 17:36:37.755058 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 4 17:36:37.769234 systemd[1]: Starting ensure-sysext.service... Sep 4 17:36:37.771615 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 17:36:37.777851 systemd[1]: Reloading requested from client PID 1377 ('systemctl') (unit ensure-sysext.service)... Sep 4 17:36:37.777872 systemd[1]: Reloading... Sep 4 17:36:37.839691 zram_generator::config[1413]: No configuration found. Sep 4 17:36:37.841408 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 4 17:36:37.841926 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 4 17:36:37.843267 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 4 17:36:37.843717 systemd-tmpfiles[1384]: ACLs are not supported, ignoring. Sep 4 17:36:37.843854 systemd-tmpfiles[1384]: ACLs are not supported, ignoring. Sep 4 17:36:37.848142 systemd-tmpfiles[1384]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 17:36:37.848160 systemd-tmpfiles[1384]: Skipping /boot Sep 4 17:36:37.865397 systemd-tmpfiles[1384]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 17:36:37.865421 systemd-tmpfiles[1384]: Skipping /boot Sep 4 17:36:37.964801 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:36:38.029687 systemd[1]: Reloading finished in 251 ms. Sep 4 17:36:38.049877 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 17:36:38.067989 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 4 17:36:38.071325 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
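The "(sd-merge)" lines show systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr, each backed by one of the loop devices detected just above. An image is only merged when it ships a matching extension-release file; a simplified sketch of that compatibility check follows (the real logic also compares VERSION_ID and SYSEXT_LEVEL, which is omitted here):

```python
from pathlib import Path

def parse_release(path: Path) -> dict[str, str]:
    # Parse simple KEY=value lines as found in os-release / extension-release files.
    out = {}
    for line in path.read_text().splitlines():
        if "=" in line and not line.startswith("#"):
            key, _, value = line.partition("=")
            out[key] = value.strip().strip('"')
    return out

def sysext_matches(image_root: Path, name: str,
                   host_release: Path = Path("/etc/os-release")) -> bool:
    # Simplified systemd-sysext check: the image must contain
    # usr/lib/extension-release.d/extension-release.<name> whose ID matches
    # the host's ID (or is the wildcard "_any").
    rel = image_root / "usr/lib/extension-release.d" / f"extension-release.{name}"
    if not rel.is_file():
        return False
    ext, host = parse_release(rel), parse_release(host_release)
    return ext.get("ID") in ("_any", host.get("ID"))
```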
Sep 4 17:36:38.074483 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 4 17:36:38.080000 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 17:36:38.084517 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 4 17:36:38.092024 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:36:38.092305 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:36:38.095999 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:36:38.100850 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:36:38.107165 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:36:38.108594 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:36:38.108735 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:36:38.110051 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:36:38.110363 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:36:38.112375 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:36:38.112805 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:36:38.115814 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 4 17:36:38.121702 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:36:38.122113 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:36:38.130161 augenrules[1486]: No rules Sep 4 17:36:38.132483 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 4 17:36:38.136173 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:36:38.136731 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:36:38.138723 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:36:38.142217 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:36:38.151023 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:36:38.152293 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:36:38.156208 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 4 17:36:38.157667 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:36:38.159476 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:36:38.159705 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:36:38.162415 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:36:38.162668 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:36:38.164846 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Sep 4 17:36:38.165524 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:36:38.172484 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 4 17:36:38.175180 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 4 17:36:38.177527 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 4 17:36:38.185273 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:36:38.185558 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:36:38.196139 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:36:38.198388 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 17:36:38.201130 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:36:38.206322 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:36:38.207647 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:36:38.207856 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 4 17:36:38.208015 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:36:38.210215 systemd-resolved[1459]: Positive Trust Anchors: Sep 4 17:36:38.210508 systemd-resolved[1459]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 17:36:38.210582 systemd-resolved[1459]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 17:36:38.211061 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:36:38.211342 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:36:38.213308 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 17:36:38.213536 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 17:36:38.215142 systemd-resolved[1459]: Defaulting to hostname 'linux'. Sep 4 17:36:38.215276 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:36:38.215508 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:36:38.217544 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:36:38.217841 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:36:38.219308 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 17:36:38.222865 systemd[1]: Finished ensure-sysext.service. Sep 4 17:36:38.228029 systemd[1]: Reached target network.target - Network. 
Sep 4 17:36:38.228979 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:36:38.230197 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 17:36:38.230264 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 17:36:38.239079 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 4 17:36:38.301595 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 4 17:36:38.302994 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 17:36:38.304177 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 4 17:36:38.926547 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 4 17:36:38.926586 systemd-resolved[1459]: Clock change detected. Flushing caches. Sep 4 17:36:38.927822 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 4 17:36:38.927828 systemd-timesyncd[1529]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 4 17:36:38.929085 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 4 17:36:38.929093 systemd-timesyncd[1529]: Initial clock synchronization to Wed 2024-09-04 17:36:38.926511 UTC. Sep 4 17:36:38.929111 systemd[1]: Reached target paths.target - Path Units. Sep 4 17:36:38.930042 systemd[1]: Reached target time-set.target - System Time Set. Sep 4 17:36:38.931218 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 4 17:36:38.932463 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 4 17:36:38.933744 systemd[1]: Reached target timers.target - Timer Units. Sep 4 17:36:38.935390 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 4 17:36:38.938463 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 4 17:36:38.940978 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 4 17:36:38.948128 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 4 17:36:38.949249 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 17:36:38.950232 systemd[1]: Reached target basic.target - Basic System. Sep 4 17:36:38.951329 systemd[1]: System is tainted: cgroupsv1 Sep 4 17:36:38.951369 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 4 17:36:38.951394 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 4 17:36:38.952801 systemd[1]: Starting containerd.service - containerd container runtime... Sep 4 17:36:38.955342 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 4 17:36:38.957619 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 4 17:36:38.960923 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 4 17:36:38.963058 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 4 17:36:38.966576 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Sep 4 17:36:38.968983 jq[1535]: false Sep 4 17:36:38.972273 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 4 17:36:38.977397 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 4 17:36:38.983546 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 4 17:36:38.984823 extend-filesystems[1536]: Found loop3 Sep 4 17:36:38.984823 extend-filesystems[1536]: Found loop4 Sep 4 17:36:38.984823 extend-filesystems[1536]: Found loop5 Sep 4 17:36:38.984823 extend-filesystems[1536]: Found sr0 Sep 4 17:36:38.984823 extend-filesystems[1536]: Found vda Sep 4 17:36:38.984823 extend-filesystems[1536]: Found vda1 Sep 4 17:36:38.984823 extend-filesystems[1536]: Found vda2 Sep 4 17:36:38.984823 extend-filesystems[1536]: Found vda3 Sep 4 17:36:38.984823 extend-filesystems[1536]: Found usr Sep 4 17:36:38.984823 extend-filesystems[1536]: Found vda4 Sep 4 17:36:38.984823 extend-filesystems[1536]: Found vda6 Sep 4 17:36:38.984823 extend-filesystems[1536]: Found vda7 Sep 4 17:36:38.984823 extend-filesystems[1536]: Found vda9 Sep 4 17:36:38.984823 extend-filesystems[1536]: Checking size of /dev/vda9 Sep 4 17:36:39.008949 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 4 17:36:38.984487 dbus-daemon[1534]: [system] SELinux support is enabled Sep 4 17:36:38.991739 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 4 17:36:39.009445 extend-filesystems[1536]: Resized partition /dev/vda9 Sep 4 17:36:38.993261 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 4 17:36:39.011997 extend-filesystems[1555]: resize2fs 1.47.1 (20-May-2024) Sep 4 17:36:38.999756 systemd[1]: Starting update-engine.service - Update Engine... Sep 4 17:36:39.008294 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 4 17:36:39.012345 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 4 17:36:39.024508 update_engine[1557]: I0904 17:36:39.024130 1557 main.cc:92] Flatcar Update Engine starting Sep 4 17:36:39.022826 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 4 17:36:39.023166 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 4 17:36:39.023528 systemd[1]: motdgen.service: Deactivated successfully. Sep 4 17:36:39.023851 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 4 17:36:39.025463 jq[1560]: true Sep 4 17:36:39.032993 update_engine[1557]: I0904 17:36:39.032714 1557 update_check_scheduler.cc:74] Next update check in 4m34s Sep 4 17:36:39.042420 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1236) Sep 4 17:36:39.039637 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 4 17:36:39.039988 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 4 17:36:39.048808 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 4 17:36:39.071853 extend-filesystems[1555]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 4 17:36:39.071853 extend-filesystems[1555]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 4 17:36:39.071853 extend-filesystems[1555]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
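The resize2fs output above grows the root filesystem on /dev/vda9 from 553472 to 1864699 blocks of 4 KiB, i.e. from roughly 2.1 GiB to 7.1 GiB; the conversion is just block count times block size:

```python
BLOCK = 4096                                 # 4 KiB blocks, as reported by resize2fs
old_blocks, new_blocks = 553_472, 1_864_699  # values from the log above

gib = lambda blocks: blocks * BLOCK / 2**30
print(f"before: {gib(old_blocks):.2f} GiB")  # ~2.11 GiB
print(f"after:  {gib(new_blocks):.2f} GiB")  # ~7.11 GiB
```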
Sep 4 17:36:39.085539 extend-filesystems[1536]: Resized filesystem in /dev/vda9 Sep 4 17:36:39.075354 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 4 17:36:39.089924 jq[1567]: true Sep 4 17:36:39.075717 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 4 17:36:39.079397 (ntainerd)[1568]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 4 17:36:39.084988 systemd-networkd[1233]: eth0: Gained IPv6LL Sep 4 17:36:39.106100 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 4 17:36:39.116777 tar[1565]: linux-amd64/helm Sep 4 17:36:39.138486 systemd[1]: Started update-engine.service - Update Engine. Sep 4 17:36:39.140190 systemd[1]: Reached target network-online.target - Network is Online. Sep 4 17:36:39.146964 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 4 17:36:39.150485 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:36:39.155726 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 4 17:36:39.157863 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 4 17:36:39.157894 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 4 17:36:39.159659 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 4 17:36:39.159678 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 4 17:36:39.161640 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 4 17:36:39.164312 systemd-logind[1549]: Watching system buttons on /dev/input/event1 (Power Button) Sep 4 17:36:39.164344 systemd-logind[1549]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 4 17:36:39.168158 systemd-logind[1549]: New seat seat0. Sep 4 17:36:39.169941 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 4 17:36:39.179931 systemd[1]: Started systemd-logind.service - User Login Management. Sep 4 17:36:39.195535 bash[1604]: Updated "/home/core/.ssh/authorized_keys" Sep 4 17:36:39.198501 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 4 17:36:39.221878 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 4 17:36:39.227771 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 4 17:36:39.228113 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 4 17:36:39.234640 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 4 17:36:39.257267 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 4 17:36:39.358854 locksmithd[1603]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 4 17:36:39.372062 sshd_keygen[1563]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 17:36:39.413168 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
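sshd-keygen.service above creates any host keys that are missing before sshd starts accepting connections. ssh-keygen's -A mode does the same thing from a shell:

# Generate all missing default host key types (rsa, ecdsa, ed25519) under /etc/ssh.
sudo ssh-keygen -A
ls -l /etc/ssh/ssh_host_*_key.pub
# Or create one key type explicitly, with an empty passphrase:
sudo ssh-keygen -t ed25519 -N '' -f /etc/ssh/ssh_host_ed25519_key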
Sep 4 17:36:39.430048 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 4 17:36:39.442066 systemd[1]: issuegen.service: Deactivated successfully. Sep 4 17:36:39.442407 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 4 17:36:39.458368 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 4 17:36:39.479542 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 4 17:36:39.494444 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 4 17:36:39.499218 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 4 17:36:39.500802 systemd[1]: Reached target getty.target - Login Prompts. Sep 4 17:36:39.551032 containerd[1568]: time="2024-09-04T17:36:39.550927879Z" level=info msg="starting containerd" revision=8ccfc03e4e2b73c22899202ae09d0caf906d3863 version=v1.7.20 Sep 4 17:36:39.577162 containerd[1568]: time="2024-09-04T17:36:39.576880507Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:36:39.579305 containerd[1568]: time="2024-09-04T17:36:39.579059055Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.48-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:36:39.579305 containerd[1568]: time="2024-09-04T17:36:39.579085896Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 4 17:36:39.579305 containerd[1568]: time="2024-09-04T17:36:39.579100954Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 4 17:36:39.579510 containerd[1568]: time="2024-09-04T17:36:39.579491587Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 4 17:36:39.579568 containerd[1568]: time="2024-09-04T17:36:39.579555747Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 4 17:36:39.579701 containerd[1568]: time="2024-09-04T17:36:39.579683187Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:36:39.579766 containerd[1568]: time="2024-09-04T17:36:39.579740965Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:36:39.580108 containerd[1568]: time="2024-09-04T17:36:39.580089860Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:36:39.580624 containerd[1568]: time="2024-09-04T17:36:39.580164069Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 4 17:36:39.580624 containerd[1568]: time="2024-09-04T17:36:39.580184087Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:36:39.580624 containerd[1568]: time="2024-09-04T17:36:39.580193575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Sep 4 17:36:39.580624 containerd[1568]: time="2024-09-04T17:36:39.580317657Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:36:39.580624 containerd[1568]: time="2024-09-04T17:36:39.580584398Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:36:39.580945 containerd[1568]: time="2024-09-04T17:36:39.580905681Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:36:39.581020 containerd[1568]: time="2024-09-04T17:36:39.580988356Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 4 17:36:39.581193 containerd[1568]: time="2024-09-04T17:36:39.581175688Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 4 17:36:39.581326 containerd[1568]: time="2024-09-04T17:36:39.581298007Z" level=info msg="metadata content store policy set" policy=shared Sep 4 17:36:39.591853 containerd[1568]: time="2024-09-04T17:36:39.591827419Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 4 17:36:39.591900 containerd[1568]: time="2024-09-04T17:36:39.591869047Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 4 17:36:39.591900 containerd[1568]: time="2024-09-04T17:36:39.591883845Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 4 17:36:39.591900 containerd[1568]: time="2024-09-04T17:36:39.591897130Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 4 17:36:39.591965 containerd[1568]: time="2024-09-04T17:36:39.591909373Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 4 17:36:39.592243 containerd[1568]: time="2024-09-04T17:36:39.592083259Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 4 17:36:39.596305 containerd[1568]: time="2024-09-04T17:36:39.595189859Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 4 17:36:39.596305 containerd[1568]: time="2024-09-04T17:36:39.595338398Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 4 17:36:39.596305 containerd[1568]: time="2024-09-04T17:36:39.595354288Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 4 17:36:39.596305 containerd[1568]: time="2024-09-04T17:36:39.595370108Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 4 17:36:39.596305 containerd[1568]: time="2024-09-04T17:36:39.595386819Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 4 17:36:39.596305 containerd[1568]: time="2024-09-04T17:36:39.595403039Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Sep 4 17:36:39.596305 containerd[1568]: time="2024-09-04T17:36:39.595417637Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 4 17:36:39.596305 containerd[1568]: time="2024-09-04T17:36:39.595433296Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 4 17:36:39.596305 containerd[1568]: time="2024-09-04T17:36:39.595450779Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 4 17:36:39.596305 containerd[1568]: time="2024-09-04T17:36:39.595474614Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 4 17:36:39.596305 containerd[1568]: time="2024-09-04T17:36:39.595490794Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 4 17:36:39.596305 containerd[1568]: time="2024-09-04T17:36:39.595504450Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 4 17:36:39.596305 containerd[1568]: time="2024-09-04T17:36:39.595526762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 4 17:36:39.596305 containerd[1568]: time="2024-09-04T17:36:39.595544295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 4 17:36:39.596590 containerd[1568]: time="2024-09-04T17:36:39.595560164Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 4 17:36:39.596590 containerd[1568]: time="2024-09-04T17:36:39.595577156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 4 17:36:39.596590 containerd[1568]: time="2024-09-04T17:36:39.595592415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 4 17:36:39.596590 containerd[1568]: time="2024-09-04T17:36:39.595612953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 4 17:36:39.596590 containerd[1568]: time="2024-09-04T17:36:39.595628162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 4 17:36:39.596590 containerd[1568]: time="2024-09-04T17:36:39.595643551Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 4 17:36:39.596590 containerd[1568]: time="2024-09-04T17:36:39.595658359Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 4 17:36:39.596590 containerd[1568]: time="2024-09-04T17:36:39.595679338Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 4 17:36:39.596590 containerd[1568]: time="2024-09-04T17:36:39.595693505Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 4 17:36:39.596590 containerd[1568]: time="2024-09-04T17:36:39.595707321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 4 17:36:39.596590 containerd[1568]: time="2024-09-04T17:36:39.595719233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Sep 4 17:36:39.596590 containerd[1568]: time="2024-09-04T17:36:39.595736996Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 4 17:36:39.596590 containerd[1568]: time="2024-09-04T17:36:39.595775789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 4 17:36:39.596590 containerd[1568]: time="2024-09-04T17:36:39.595790176Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 4 17:36:39.596590 containerd[1568]: time="2024-09-04T17:36:39.595804773Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 4 17:36:39.596886 containerd[1568]: time="2024-09-04T17:36:39.596119294Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 4 17:36:39.596886 containerd[1568]: time="2024-09-04T17:36:39.596139792Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 4 17:36:39.596886 containerd[1568]: time="2024-09-04T17:36:39.596151073Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 4 17:36:39.596886 containerd[1568]: time="2024-09-04T17:36:39.596166061Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 4 17:36:39.596886 containerd[1568]: time="2024-09-04T17:36:39.596178545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 4 17:36:39.596886 containerd[1568]: time="2024-09-04T17:36:39.596192942Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 4 17:36:39.596886 containerd[1568]: time="2024-09-04T17:36:39.596203532Z" level=info msg="NRI interface is disabled by configuration." Sep 4 17:36:39.596886 containerd[1568]: time="2024-09-04T17:36:39.596216125Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 4 17:36:39.597040 containerd[1568]: time="2024-09-04T17:36:39.596627397Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 4 17:36:39.597040 containerd[1568]: time="2024-09-04T17:36:39.596688692Z" level=info msg="Connect containerd service" Sep 4 17:36:39.597040 containerd[1568]: time="2024-09-04T17:36:39.596737103Z" level=info msg="using legacy CRI server" Sep 4 17:36:39.597040 containerd[1568]: time="2024-09-04T17:36:39.596744066Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 4 17:36:39.597040 containerd[1568]: time="2024-09-04T17:36:39.596857930Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 4 17:36:39.597659 containerd[1568]: time="2024-09-04T17:36:39.597636601Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 17:36:39.597774 
containerd[1568]: time="2024-09-04T17:36:39.597742119Z" level=info msg="Start subscribing containerd event" Sep 4 17:36:39.597803 containerd[1568]: time="2024-09-04T17:36:39.597788577Z" level=info msg="Start recovering state" Sep 4 17:36:39.597872 containerd[1568]: time="2024-09-04T17:36:39.597848780Z" level=info msg="Start event monitor" Sep 4 17:36:39.597908 containerd[1568]: time="2024-09-04T17:36:39.597890197Z" level=info msg="Start snapshots syncer" Sep 4 17:36:39.597908 containerd[1568]: time="2024-09-04T17:36:39.597903562Z" level=info msg="Start cni network conf syncer for default" Sep 4 17:36:39.597908 containerd[1568]: time="2024-09-04T17:36:39.597915314Z" level=info msg="Start streaming server" Sep 4 17:36:39.598454 containerd[1568]: time="2024-09-04T17:36:39.598422617Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 4 17:36:39.598499 containerd[1568]: time="2024-09-04T17:36:39.598483020Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 4 17:36:39.598665 systemd[1]: Started containerd.service - containerd container runtime. Sep 4 17:36:39.600575 containerd[1568]: time="2024-09-04T17:36:39.600547034Z" level=info msg="containerd successfully booted in 0.049884s" Sep 4 17:36:39.792272 tar[1565]: linux-amd64/LICENSE Sep 4 17:36:39.792272 tar[1565]: linux-amd64/README.md Sep 4 17:36:39.821004 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 4 17:36:40.340714 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:36:40.342685 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 4 17:36:40.345923 systemd[1]: Startup finished in 7.251s (kernel) + 4.900s (userspace) = 12.152s. Sep 4 17:36:40.348317 (kubelet)[1675]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:36:41.143309 kubelet[1675]: E0904 17:36:41.143179 1675 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:36:41.148340 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:36:41.148711 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:36:43.175329 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 4 17:36:43.189116 systemd[1]: Started sshd@0-10.0.0.18:22-10.0.0.1:47926.service - OpenSSH per-connection server daemon (10.0.0.1:47926). Sep 4 17:36:43.238570 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 47926 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:36:43.240732 sshd[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:36:43.249441 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 4 17:36:43.265981 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 4 17:36:43.267650 systemd-logind[1549]: New session 1 of user core. Sep 4 17:36:43.279578 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 4 17:36:43.295105 systemd[1]: Starting user@500.service - User Manager for UID 500... 
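The containerd error above about /etc/cni/net.d is expected at this stage: the CRI plugin looks for a pod network configuration in that directory, and nothing has installed one yet (in a kubeadm-style setup a CNI add-on normally provides it later). Purely as an illustration of the kind of file the plugin is looking for, assuming the standard bridge/host-local/portmap plugins exist under /opt/cni/bin and using a made-up 10.88.0.0/16 subnet:

# Illustrative only; a real cluster would get its CNI config from a network add-on.
sudo mkdir -p /etc/cni/net.d
cat <<'EOF' | sudo tee /etc/cni/net.d/10-bridge.conflist
{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.88.0.0/16" }]],
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF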
Sep 4 17:36:43.298687 (systemd)[1696]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:36:43.438974 systemd[1696]: Queued start job for default target default.target. Sep 4 17:36:43.439401 systemd[1696]: Created slice app.slice - User Application Slice. Sep 4 17:36:43.439419 systemd[1696]: Reached target paths.target - Paths. Sep 4 17:36:43.439432 systemd[1696]: Reached target timers.target - Timers. Sep 4 17:36:43.450860 systemd[1696]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 4 17:36:43.458564 systemd[1696]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 4 17:36:43.458699 systemd[1696]: Reached target sockets.target - Sockets. Sep 4 17:36:43.458722 systemd[1696]: Reached target basic.target - Basic System. Sep 4 17:36:43.458810 systemd[1696]: Reached target default.target - Main User Target. Sep 4 17:36:43.458850 systemd[1696]: Startup finished in 152ms. Sep 4 17:36:43.459410 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 4 17:36:43.461089 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 4 17:36:43.521165 systemd[1]: Started sshd@1-10.0.0.18:22-10.0.0.1:47938.service - OpenSSH per-connection server daemon (10.0.0.1:47938). Sep 4 17:36:43.559228 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 47938 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:36:43.561228 sshd[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:36:43.565944 systemd-logind[1549]: New session 2 of user core. Sep 4 17:36:43.577077 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 4 17:36:43.635158 sshd[1708]: pam_unix(sshd:session): session closed for user core Sep 4 17:36:43.647052 systemd[1]: Started sshd@2-10.0.0.18:22-10.0.0.1:47948.service - OpenSSH per-connection server daemon (10.0.0.1:47948). Sep 4 17:36:43.647575 systemd[1]: sshd@1-10.0.0.18:22-10.0.0.1:47938.service: Deactivated successfully. Sep 4 17:36:43.650334 systemd-logind[1549]: Session 2 logged out. Waiting for processes to exit. Sep 4 17:36:43.651465 systemd[1]: session-2.scope: Deactivated successfully. Sep 4 17:36:43.652707 systemd-logind[1549]: Removed session 2. Sep 4 17:36:43.678168 sshd[1713]: Accepted publickey for core from 10.0.0.1 port 47948 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:36:43.679991 sshd[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:36:43.684021 systemd-logind[1549]: New session 3 of user core. Sep 4 17:36:43.694038 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 4 17:36:43.743726 sshd[1713]: pam_unix(sshd:session): session closed for user core Sep 4 17:36:43.751983 systemd[1]: Started sshd@3-10.0.0.18:22-10.0.0.1:47950.service - OpenSSH per-connection server daemon (10.0.0.1:47950). Sep 4 17:36:43.752455 systemd[1]: sshd@2-10.0.0.18:22-10.0.0.1:47948.service: Deactivated successfully. Sep 4 17:36:43.754991 systemd-logind[1549]: Session 3 logged out. Waiting for processes to exit. Sep 4 17:36:43.755800 systemd[1]: session-3.scope: Deactivated successfully. Sep 4 17:36:43.756824 systemd-logind[1549]: Removed session 3. 
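The per-connection sshd@0-10.0.0.18:22-... units above come from socket activation: sshd.socket appears to listen on port 22 with Accept=yes, so systemd spawns one templated sshd instance per accepted connection, and logind tracks each login as its own session scope. A few commands that make that visible (output will vary):

# Show the socket unit driving the per-connection daemons (expect Accept=yes).
systemctl cat sshd.socket
# One sshd@... instance per active connection, matching the log entries above.
systemctl list-units 'sshd@*'
# The corresponding logind sessions (session-1.scope, session-2.scope, ...).
loginctl list-sessions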
Sep 4 17:36:43.782322 sshd[1721]: Accepted publickey for core from 10.0.0.1 port 47950 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:36:43.783844 sshd[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:36:43.788078 systemd-logind[1549]: New session 4 of user core. Sep 4 17:36:43.798019 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 4 17:36:43.851606 sshd[1721]: pam_unix(sshd:session): session closed for user core Sep 4 17:36:43.871982 systemd[1]: Started sshd@4-10.0.0.18:22-10.0.0.1:47966.service - OpenSSH per-connection server daemon (10.0.0.1:47966). Sep 4 17:36:43.872447 systemd[1]: sshd@3-10.0.0.18:22-10.0.0.1:47950.service: Deactivated successfully. Sep 4 17:36:43.874975 systemd-logind[1549]: Session 4 logged out. Waiting for processes to exit. Sep 4 17:36:43.875626 systemd[1]: session-4.scope: Deactivated successfully. Sep 4 17:36:43.877188 systemd-logind[1549]: Removed session 4. Sep 4 17:36:43.903616 sshd[1729]: Accepted publickey for core from 10.0.0.1 port 47966 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:36:43.905169 sshd[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:36:43.909126 systemd-logind[1549]: New session 5 of user core. Sep 4 17:36:43.919019 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 4 17:36:43.977812 sudo[1736]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 4 17:36:43.978272 sudo[1736]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:36:43.992203 sudo[1736]: pam_unix(sudo:session): session closed for user root Sep 4 17:36:43.994330 sshd[1729]: pam_unix(sshd:session): session closed for user core Sep 4 17:36:44.002112 systemd[1]: Started sshd@5-10.0.0.18:22-10.0.0.1:47968.service - OpenSSH per-connection server daemon (10.0.0.1:47968). Sep 4 17:36:44.002890 systemd[1]: sshd@4-10.0.0.18:22-10.0.0.1:47966.service: Deactivated successfully. Sep 4 17:36:44.005396 systemd[1]: session-5.scope: Deactivated successfully. Sep 4 17:36:44.006194 systemd-logind[1549]: Session 5 logged out. Waiting for processes to exit. Sep 4 17:36:44.007523 systemd-logind[1549]: Removed session 5. Sep 4 17:36:44.034429 sshd[1738]: Accepted publickey for core from 10.0.0.1 port 47968 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:36:44.035963 sshd[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:36:44.039911 systemd-logind[1549]: New session 6 of user core. Sep 4 17:36:44.055999 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 4 17:36:44.111463 sudo[1746]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 4 17:36:44.111951 sudo[1746]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:36:44.116515 sudo[1746]: pam_unix(sudo:session): session closed for user root Sep 4 17:36:44.123126 sudo[1745]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 4 17:36:44.123462 sudo[1745]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:36:44.137944 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 4 17:36:44.141499 auditctl[1749]: No rules Sep 4 17:36:44.142920 systemd[1]: audit-rules.service: Deactivated successfully. 
Sep 4 17:36:44.143260 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 4 17:36:44.145199 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 4 17:36:44.176907 augenrules[1768]: No rules Sep 4 17:36:44.177837 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 4 17:36:44.179369 sudo[1745]: pam_unix(sudo:session): session closed for user root Sep 4 17:36:44.181316 sshd[1738]: pam_unix(sshd:session): session closed for user core Sep 4 17:36:44.195005 systemd[1]: Started sshd@6-10.0.0.18:22-10.0.0.1:47976.service - OpenSSH per-connection server daemon (10.0.0.1:47976). Sep 4 17:36:44.195524 systemd[1]: sshd@5-10.0.0.18:22-10.0.0.1:47968.service: Deactivated successfully. Sep 4 17:36:44.197535 systemd[1]: session-6.scope: Deactivated successfully. Sep 4 17:36:44.198169 systemd-logind[1549]: Session 6 logged out. Waiting for processes to exit. Sep 4 17:36:44.199390 systemd-logind[1549]: Removed session 6. Sep 4 17:36:44.226128 sshd[1775]: Accepted publickey for core from 10.0.0.1 port 47976 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:36:44.227682 sshd[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:36:44.231582 systemd-logind[1549]: New session 7 of user core. Sep 4 17:36:44.241001 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 4 17:36:44.293649 sudo[1781]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 4 17:36:44.294018 sudo[1781]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:36:44.403969 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 4 17:36:44.404227 (dockerd)[1791]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 4 17:36:44.690921 dockerd[1791]: time="2024-09-04T17:36:44.690730336Z" level=info msg="Starting up" Sep 4 17:36:45.336086 dockerd[1791]: time="2024-09-04T17:36:45.336029434Z" level=info msg="Loading containers: start." Sep 4 17:36:45.465784 kernel: Initializing XFRM netlink socket Sep 4 17:36:45.567203 systemd-networkd[1233]: docker0: Link UP Sep 4 17:36:45.587710 dockerd[1791]: time="2024-09-04T17:36:45.587573207Z" level=info msg="Loading containers: done." Sep 4 17:36:45.604868 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3840538475-merged.mount: Deactivated successfully. Sep 4 17:36:45.606209 dockerd[1791]: time="2024-09-04T17:36:45.606164449Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 4 17:36:45.606290 dockerd[1791]: time="2024-09-04T17:36:45.606275227Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 4 17:36:45.606419 dockerd[1791]: time="2024-09-04T17:36:45.606396465Z" level=info msg="Daemon has completed initialization" Sep 4 17:36:45.710534 dockerd[1791]: time="2024-09-04T17:36:45.710444841Z" level=info msg="API listen on /run/docker.sock" Sep 4 17:36:45.711410 systemd[1]: Started docker.service - Docker Application Container Engine. 
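With dockerd reporting "API listen on /run/docker.sock", the Engine API is reachable over that unix socket. Two quick checks, assuming the curl and docker CLIs are available:

# Query the Engine API directly over the unix socket.
curl --unix-socket /run/docker.sock http://localhost/version
# Ask the daemon for its storage driver and version (overlay2 per the log above).
docker info --format '{{.Driver}} {{.ServerVersion}}'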
Sep 4 17:36:46.537982 containerd[1568]: time="2024-09-04T17:36:46.537852808Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.13\"" Sep 4 17:36:47.338824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount764983564.mount: Deactivated successfully. Sep 4 17:36:48.461572 containerd[1568]: time="2024-09-04T17:36:48.461513036Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:36:48.462518 containerd[1568]: time="2024-09-04T17:36:48.462456126Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.13: active requests=0, bytes read=34530735" Sep 4 17:36:48.463757 containerd[1568]: time="2024-09-04T17:36:48.463708146Z" level=info msg="ImageCreate event name:\"sha256:5447bb21fa283749e558782cbef636f1991732f1b8f345296a5204ccf0b5f7b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:36:48.466808 containerd[1568]: time="2024-09-04T17:36:48.466739505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7d2c9256ad576a0b3745b749efe7f4fa8b276ec7ef448fc0f45794ca78eb8625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:36:48.467916 containerd[1568]: time="2024-09-04T17:36:48.467884033Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.13\" with image id \"sha256:5447bb21fa283749e558782cbef636f1991732f1b8f345296a5204ccf0b5f7b7\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7d2c9256ad576a0b3745b749efe7f4fa8b276ec7ef448fc0f45794ca78eb8625\", size \"34527535\" in 1.929984266s" Sep 4 17:36:48.467916 containerd[1568]: time="2024-09-04T17:36:48.467923818Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.13\" returns image reference \"sha256:5447bb21fa283749e558782cbef636f1991732f1b8f345296a5204ccf0b5f7b7\"" Sep 4 17:36:48.491265 containerd[1568]: time="2024-09-04T17:36:48.491223256Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.13\"" Sep 4 17:36:49.999964 containerd[1568]: time="2024-09-04T17:36:49.999900880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:36:50.001083 containerd[1568]: time="2024-09-04T17:36:50.001042653Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.13: active requests=0, bytes read=31849709" Sep 4 17:36:50.002628 containerd[1568]: time="2024-09-04T17:36:50.002592141Z" level=info msg="ImageCreate event name:\"sha256:f1a0a396058d414b391ade9dba6e95d7a71ee665b09fc0fc420126ac21c155a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:36:50.005451 containerd[1568]: time="2024-09-04T17:36:50.005418425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e7b44c1741fe1802d159ffdbd0d1f78d48a4185d7fb1cdf8a112fbb50696f7e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:36:50.006687 containerd[1568]: time="2024-09-04T17:36:50.006636682Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.13\" with image id \"sha256:f1a0a396058d414b391ade9dba6e95d7a71ee665b09fc0fc420126ac21c155a5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e7b44c1741fe1802d159ffdbd0d1f78d48a4185d7fb1cdf8a112fbb50696f7e1\", size \"33399655\" in 1.515373871s" Sep 4 
17:36:50.006728 containerd[1568]: time="2024-09-04T17:36:50.006691394Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.13\" returns image reference \"sha256:f1a0a396058d414b391ade9dba6e95d7a71ee665b09fc0fc420126ac21c155a5\"" Sep 4 17:36:50.031819 containerd[1568]: time="2024-09-04T17:36:50.031773478Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.13\"" Sep 4 17:36:51.396851 containerd[1568]: time="2024-09-04T17:36:51.396797085Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:36:51.397645 containerd[1568]: time="2024-09-04T17:36:51.397607666Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.13: active requests=0, bytes read=17097777" Sep 4 17:36:51.398689 containerd[1568]: time="2024-09-04T17:36:51.398654911Z" level=info msg="ImageCreate event name:\"sha256:a60f64c0f37d085a5fcafef1b2a7adc9be95184dae7d8a5d1dbf6ca4681d328a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:36:51.398790 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 4 17:36:51.401496 containerd[1568]: time="2024-09-04T17:36:51.401459555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:efeb791718f4b9c62bd683f5b403da520f3651cb36ad9f800e0f98b595beafa4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:36:51.403095 containerd[1568]: time="2024-09-04T17:36:51.402283581Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.13\" with image id \"sha256:a60f64c0f37d085a5fcafef1b2a7adc9be95184dae7d8a5d1dbf6ca4681d328a\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:efeb791718f4b9c62bd683f5b403da520f3651cb36ad9f800e0f98b595beafa4\", size \"18647741\" in 1.370463836s" Sep 4 17:36:51.403095 containerd[1568]: time="2024-09-04T17:36:51.402332283Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.13\" returns image reference \"sha256:a60f64c0f37d085a5fcafef1b2a7adc9be95184dae7d8a5d1dbf6ca4681d328a\"" Sep 4 17:36:51.405111 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:36:51.431107 containerd[1568]: time="2024-09-04T17:36:51.431058796Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.13\"" Sep 4 17:36:51.672893 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:36:51.681333 (kubelet)[2039]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:36:52.280427 kubelet[2039]: E0904 17:36:52.280269 2039 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:36:52.288418 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:36:52.288733 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:36:53.965144 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2559530013.mount: Deactivated successfully. 
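The kubelet keeps exiting because /var/lib/kubelet/config.yaml does not exist yet; that file is normally written by kubeadm init or kubeadm join rather than by hand, so the restart loop resolves itself once the node is bootstrapped. Only as a sketch of what a minimal KubeletConfiguration looks like (the field values are illustrative assumptions, not taken from this host):

# Illustrative only; kubeadm generates the real file during init/join.
sudo mkdir -p /var/lib/kubelet
cat <<'EOF' | sudo tee /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs            # assumption; must match the container runtime
staticPodPath: /etc/kubernetes/manifests
authentication:
  anonymous:
    enabled: false
EOF
sudo systemctl restart kubelet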
Sep 4 17:36:54.889143 containerd[1568]: time="2024-09-04T17:36:54.889059462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:36:54.889743 containerd[1568]: time="2024-09-04T17:36:54.889693031Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.13: active requests=0, bytes read=28303449" Sep 4 17:36:54.890985 containerd[1568]: time="2024-09-04T17:36:54.890951373Z" level=info msg="ImageCreate event name:\"sha256:31fde28e72a31599555ab5aba850caa90b9254b760b1007bfb662d086bb672fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:36:54.893294 containerd[1568]: time="2024-09-04T17:36:54.893262380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:537633f399f87ce85d44fc8471ece97a83632198f99b3f7e08770beca95e9fa1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:36:54.893954 containerd[1568]: time="2024-09-04T17:36:54.893906829Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.13\" with image id \"sha256:31fde28e72a31599555ab5aba850caa90b9254b760b1007bfb662d086bb672fc\", repo tag \"registry.k8s.io/kube-proxy:v1.28.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:537633f399f87ce85d44fc8471ece97a83632198f99b3f7e08770beca95e9fa1\", size \"28302468\" in 3.462799913s" Sep 4 17:36:54.893954 containerd[1568]: time="2024-09-04T17:36:54.893945853Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.13\" returns image reference \"sha256:31fde28e72a31599555ab5aba850caa90b9254b760b1007bfb662d086bb672fc\"" Sep 4 17:36:54.916202 containerd[1568]: time="2024-09-04T17:36:54.916155115Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Sep 4 17:36:55.391419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1992562958.mount: Deactivated successfully. 
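The PullImage requests above go through containerd's CRI plugin. The same pulls can be reproduced from a shell, which is useful for checking registry access; this assumes crictl is installed (ctr ships with containerd):

# Pull through the CRI endpoint, as the PullImage requests in the log do.
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock pull registry.k8s.io/pause:3.9
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock images | grep pause
# Equivalent pull with containerd's own client, in the Kubernetes namespace.
sudo ctr --namespace k8s.io images pull registry.k8s.io/pause:3.9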
Sep 4 17:36:55.396599 containerd[1568]: time="2024-09-04T17:36:55.396535546Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:36:55.397271 containerd[1568]: time="2024-09-04T17:36:55.397222565Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Sep 4 17:36:55.398242 containerd[1568]: time="2024-09-04T17:36:55.398186194Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:36:55.400401 containerd[1568]: time="2024-09-04T17:36:55.400361356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:36:55.401017 containerd[1568]: time="2024-09-04T17:36:55.400982221Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 484.784877ms" Sep 4 17:36:55.401017 containerd[1568]: time="2024-09-04T17:36:55.401012869Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Sep 4 17:36:55.440077 containerd[1568]: time="2024-09-04T17:36:55.439725344Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Sep 4 17:36:56.175994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4236035970.mount: Deactivated successfully. 
Sep 4 17:36:58.422276 containerd[1568]: time="2024-09-04T17:36:58.422201251Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:36:58.422875 containerd[1568]: time="2024-09-04T17:36:58.422843947Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Sep 4 17:36:58.424195 containerd[1568]: time="2024-09-04T17:36:58.424148325Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:36:58.427517 containerd[1568]: time="2024-09-04T17:36:58.427466733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:36:58.428776 containerd[1568]: time="2024-09-04T17:36:58.428721598Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.988935561s" Sep 4 17:36:58.428831 containerd[1568]: time="2024-09-04T17:36:58.428777483Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Sep 4 17:36:58.469378 containerd[1568]: time="2024-09-04T17:36:58.469330893Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Sep 4 17:36:59.330683 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1552101159.mount: Deactivated successfully. 
Sep 4 17:37:00.093868 containerd[1568]: time="2024-09-04T17:37:00.093805914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:37:00.094826 containerd[1568]: time="2024-09-04T17:37:00.094794580Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=16191749" Sep 4 17:37:00.096061 containerd[1568]: time="2024-09-04T17:37:00.096037693Z" level=info msg="ImageCreate event name:\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:37:00.098536 containerd[1568]: time="2024-09-04T17:37:00.098472092Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:37:00.099217 containerd[1568]: time="2024-09-04T17:37:00.099166114Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"16190758\" in 1.629611612s" Sep 4 17:37:00.099217 containerd[1568]: time="2024-09-04T17:37:00.099211620Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Sep 4 17:37:02.360700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 4 17:37:02.369956 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:37:02.381074 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 4 17:37:02.381218 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 4 17:37:02.381705 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:37:02.386202 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:37:02.410861 systemd[1]: Reloading requested from client PID 2223 ('systemctl') (unit session-7.scope)... Sep 4 17:37:02.410882 systemd[1]: Reloading... Sep 4 17:37:02.504783 zram_generator::config[2262]: No configuration found. Sep 4 17:37:03.454837 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:37:03.533775 systemd[1]: Reloading finished in 1122 ms. Sep 4 17:37:03.586563 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 4 17:37:03.586674 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 4 17:37:03.587075 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:37:03.588987 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:37:03.739531 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
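During the reload above, systemd also warned that docker.socket still lists ListenStream=/var/run/docker.sock; it rewrites the path to /run/docker.sock at load time, and the warning can be silenced with a drop-in instead of editing the vendor unit. A sketch (the drop-in file name is arbitrary):

# Clear the old ListenStream= value, then set the /run path; the vendor unit stays untouched.
sudo mkdir -p /etc/systemd/system/docker.socket.d
cat <<'EOF' | sudo tee /etc/systemd/system/docker.socket.d/10-run-path.conf
[Socket]
ListenStream=
ListenStream=/run/docker.sock
EOF
sudo systemctl daemon-reload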
Sep 4 17:37:03.745457 (kubelet)[2319]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:37:03.796646 kubelet[2319]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:37:03.796646 kubelet[2319]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:37:03.796646 kubelet[2319]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:37:03.797175 kubelet[2319]: I0904 17:37:03.796685 2319 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:37:03.954186 kubelet[2319]: I0904 17:37:03.954138 2319 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Sep 4 17:37:03.954186 kubelet[2319]: I0904 17:37:03.954168 2319 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:37:03.954366 kubelet[2319]: I0904 17:37:03.954354 2319 server.go:895] "Client rotation is on, will bootstrap in background" Sep 4 17:37:03.970338 kubelet[2319]: I0904 17:37:03.970282 2319 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:37:03.971009 kubelet[2319]: E0904 17:37:03.970946 2319 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.18:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.18:6443: connect: connection refused Sep 4 17:37:03.981989 kubelet[2319]: I0904 17:37:03.981945 2319 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 17:37:03.983248 kubelet[2319]: I0904 17:37:03.983224 2319 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:37:03.983439 kubelet[2319]: I0904 17:37:03.983407 2319 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:37:03.983533 kubelet[2319]: I0904 17:37:03.983446 2319 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:37:03.983533 kubelet[2319]: I0904 17:37:03.983457 2319 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:37:03.984068 kubelet[2319]: I0904 17:37:03.984040 2319 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:37:03.985182 kubelet[2319]: I0904 17:37:03.985160 2319 kubelet.go:393] "Attempting to sync node with API server" Sep 4 17:37:03.985182 kubelet[2319]: I0904 17:37:03.985182 2319 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:37:03.985234 kubelet[2319]: I0904 17:37:03.985211 2319 kubelet.go:309] "Adding apiserver pod source" Sep 4 17:37:03.985234 kubelet[2319]: I0904 17:37:03.985227 2319 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:37:03.986605 kubelet[2319]: I0904 17:37:03.986569 2319 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.20" apiVersion="v1" Sep 4 17:37:03.987332 kubelet[2319]: W0904 17:37:03.987279 2319 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Sep 4 17:37:03.987383 kubelet[2319]: E0904 17:37:03.987344 2319 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Sep 4 17:37:03.987665 kubelet[2319]: W0904 17:37:03.987626 2319 reflector.go:535] 
vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.18:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Sep 4 17:37:03.987665 kubelet[2319]: E0904 17:37:03.987667 2319 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.18:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Sep 4 17:37:03.991511 kubelet[2319]: W0904 17:37:03.989389 2319 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 4 17:37:03.991511 kubelet[2319]: I0904 17:37:03.990451 2319 server.go:1232] "Started kubelet" Sep 4 17:37:03.993491 kubelet[2319]: I0904 17:37:03.992926 2319 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:37:03.993573 kubelet[2319]: I0904 17:37:03.992979 2319 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Sep 4 17:37:03.993996 kubelet[2319]: I0904 17:37:03.993955 2319 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:37:03.994876 kubelet[2319]: I0904 17:37:03.994849 2319 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:37:03.995563 kubelet[2319]: I0904 17:37:03.995114 2319 server.go:462] "Adding debug handlers to kubelet server" Sep 4 17:37:03.998008 kubelet[2319]: I0904 17:37:03.997971 2319 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:37:03.998886 kubelet[2319]: I0904 17:37:03.998213 2319 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Sep 4 17:37:03.998886 kubelet[2319]: E0904 17:37:03.997853 2319 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17f21b1e4108d7b8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.September, 4, 17, 37, 3, 990388664, time.Local), LastTimestamp:time.Date(2024, time.September, 4, 17, 37, 3, 990388664, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.18:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.18:6443: connect: connection refused'(may retry after sleeping) Sep 4 17:37:03.998886 kubelet[2319]: I0904 17:37:03.998268 2319 reconciler_new.go:29] "Reconciler: start to sync state" Sep 4 17:37:03.998886 kubelet[2319]: E0904 17:37:03.998345 2319 controller.go:146] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.18:6443: connect: connection refused" interval="200ms" Sep 4 17:37:03.999137 kubelet[2319]: W0904 17:37:03.998582 2319 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Sep 4 17:37:03.999137 kubelet[2319]: E0904 17:37:03.998630 2319 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Sep 4 17:37:03.999657 kubelet[2319]: E0904 17:37:03.999392 2319 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Sep 4 17:37:03.999657 kubelet[2319]: E0904 17:37:03.999422 2319 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:37:04.018352 kubelet[2319]: I0904 17:37:04.018313 2319 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:37:04.020308 kubelet[2319]: I0904 17:37:04.020256 2319 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 17:37:04.020308 kubelet[2319]: I0904 17:37:04.020295 2319 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:37:04.020504 kubelet[2319]: I0904 17:37:04.020326 2319 kubelet.go:2303] "Starting kubelet main sync loop" Sep 4 17:37:04.020504 kubelet[2319]: E0904 17:37:04.020401 2319 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:37:04.022931 kubelet[2319]: W0904 17:37:04.021740 2319 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Sep 4 17:37:04.022931 kubelet[2319]: E0904 17:37:04.021945 2319 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Sep 4 17:37:04.058076 kubelet[2319]: I0904 17:37:04.058042 2319 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:37:04.058076 kubelet[2319]: I0904 17:37:04.058069 2319 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:37:04.058248 kubelet[2319]: I0904 17:37:04.058094 2319 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:37:04.099872 kubelet[2319]: I0904 17:37:04.099806 2319 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Sep 4 17:37:04.100220 kubelet[2319]: E0904 17:37:04.100190 2319 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.18:6443/api/v1/nodes\": dial tcp 10.0.0.18:6443: connect: connection refused" node="localhost" Sep 4 17:37:04.121370 kubelet[2319]: E0904 17:37:04.121320 2319 kubelet.go:2327] 
"Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 17:37:04.199114 kubelet[2319]: E0904 17:37:04.199070 2319 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.18:6443: connect: connection refused" interval="400ms" Sep 4 17:37:04.301454 kubelet[2319]: I0904 17:37:04.301339 2319 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Sep 4 17:37:04.301628 kubelet[2319]: E0904 17:37:04.301612 2319 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.18:6443/api/v1/nodes\": dial tcp 10.0.0.18:6443: connect: connection refused" node="localhost" Sep 4 17:37:04.321806 kubelet[2319]: E0904 17:37:04.321777 2319 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 17:37:04.600247 kubelet[2319]: E0904 17:37:04.600101 2319 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.18:6443: connect: connection refused" interval="800ms" Sep 4 17:37:04.629346 kubelet[2319]: I0904 17:37:04.629291 2319 policy_none.go:49] "None policy: Start" Sep 4 17:37:04.630183 kubelet[2319]: I0904 17:37:04.630145 2319 memory_manager.go:169] "Starting memorymanager" policy="None" Sep 4 17:37:04.630183 kubelet[2319]: I0904 17:37:04.630195 2319 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:37:04.638127 kubelet[2319]: I0904 17:37:04.637238 2319 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:37:04.638127 kubelet[2319]: I0904 17:37:04.637566 2319 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:37:04.638598 kubelet[2319]: E0904 17:37:04.638549 2319 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 4 17:37:04.704018 kubelet[2319]: I0904 17:37:04.703959 2319 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Sep 4 17:37:04.704457 kubelet[2319]: E0904 17:37:04.704421 2319 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.18:6443/api/v1/nodes\": dial tcp 10.0.0.18:6443: connect: connection refused" node="localhost" Sep 4 17:37:04.722629 kubelet[2319]: I0904 17:37:04.722583 2319 topology_manager.go:215] "Topology Admit Handler" podUID="79ac0df5fb8e93d80a6b2f0c5e542d5d" podNamespace="kube-system" podName="kube-apiserver-localhost" Sep 4 17:37:04.723822 kubelet[2319]: I0904 17:37:04.723791 2319 topology_manager.go:215] "Topology Admit Handler" podUID="f5bf8d52acd7337c82951a97b42c345d" podNamespace="kube-system" podName="kube-controller-manager-localhost" Sep 4 17:37:04.724887 kubelet[2319]: I0904 17:37:04.724864 2319 topology_manager.go:215] "Topology Admit Handler" podUID="cacd2a680dbc59f99275412e0ba6e38b" podNamespace="kube-system" podName="kube-scheduler-localhost" Sep 4 17:37:04.803475 kubelet[2319]: I0904 17:37:04.803411 2319 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/79ac0df5fb8e93d80a6b2f0c5e542d5d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: 
\"79ac0df5fb8e93d80a6b2f0c5e542d5d\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:37:04.803475 kubelet[2319]: I0904 17:37:04.803465 2319 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:37:04.803475 kubelet[2319]: I0904 17:37:04.803492 2319 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:37:04.804066 kubelet[2319]: I0904 17:37:04.803517 2319 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cacd2a680dbc59f99275412e0ba6e38b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"cacd2a680dbc59f99275412e0ba6e38b\") " pod="kube-system/kube-scheduler-localhost" Sep 4 17:37:04.804066 kubelet[2319]: I0904 17:37:04.803540 2319 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/79ac0df5fb8e93d80a6b2f0c5e542d5d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"79ac0df5fb8e93d80a6b2f0c5e542d5d\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:37:04.804066 kubelet[2319]: I0904 17:37:04.803561 2319 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/79ac0df5fb8e93d80a6b2f0c5e542d5d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"79ac0df5fb8e93d80a6b2f0c5e542d5d\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:37:04.804066 kubelet[2319]: I0904 17:37:04.803588 2319 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:37:04.804066 kubelet[2319]: I0904 17:37:04.803612 2319 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:37:04.804169 kubelet[2319]: I0904 17:37:04.803639 2319 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:37:04.857082 kubelet[2319]: W0904 17:37:04.856949 2319 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.18:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Sep 4 
17:37:04.857082 kubelet[2319]: E0904 17:37:04.857029 2319 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.18:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Sep 4 17:37:04.974133 kubelet[2319]: W0904 17:37:04.974053 2319 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Sep 4 17:37:04.974133 kubelet[2319]: E0904 17:37:04.974134 2319 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Sep 4 17:37:05.029989 kubelet[2319]: E0904 17:37:05.029937 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:05.030386 kubelet[2319]: E0904 17:37:05.030358 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:05.030757 containerd[1568]: time="2024-09-04T17:37:05.030718090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f5bf8d52acd7337c82951a97b42c345d,Namespace:kube-system,Attempt:0,}" Sep 4 17:37:05.031180 containerd[1568]: time="2024-09-04T17:37:05.030724071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:79ac0df5fb8e93d80a6b2f0c5e542d5d,Namespace:kube-system,Attempt:0,}" Sep 4 17:37:05.032840 kubelet[2319]: E0904 17:37:05.032823 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:05.033187 containerd[1568]: time="2024-09-04T17:37:05.033138723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:cacd2a680dbc59f99275412e0ba6e38b,Namespace:kube-system,Attempt:0,}" Sep 4 17:37:05.219595 kubelet[2319]: W0904 17:37:05.219527 2319 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Sep 4 17:37:05.219595 kubelet[2319]: E0904 17:37:05.219603 2319 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Sep 4 17:37:05.219887 kubelet[2319]: W0904 17:37:05.219845 2319 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Sep 4 17:37:05.219914 kubelet[2319]: E0904 17:37:05.219889 2319 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: 
Get "https://10.0.0.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Sep 4 17:37:05.401531 kubelet[2319]: E0904 17:37:05.401479 2319 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.18:6443: connect: connection refused" interval="1.6s" Sep 4 17:37:05.506351 kubelet[2319]: I0904 17:37:05.506252 2319 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Sep 4 17:37:05.506585 kubelet[2319]: E0904 17:37:05.506560 2319 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.18:6443/api/v1/nodes\": dial tcp 10.0.0.18:6443: connect: connection refused" node="localhost" Sep 4 17:37:06.082413 kubelet[2319]: E0904 17:37:06.082373 2319 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.18:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.18:6443: connect: connection refused Sep 4 17:37:06.659805 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1017760631.mount: Deactivated successfully. Sep 4 17:37:06.668065 containerd[1568]: time="2024-09-04T17:37:06.667981512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:37:06.669827 containerd[1568]: time="2024-09-04T17:37:06.669776471Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:37:06.671090 containerd[1568]: time="2024-09-04T17:37:06.671045422Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:37:06.672013 containerd[1568]: time="2024-09-04T17:37:06.671967793Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:37:06.672792 containerd[1568]: time="2024-09-04T17:37:06.672733510Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 4 17:37:06.674132 containerd[1568]: time="2024-09-04T17:37:06.674072413Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:37:06.674594 containerd[1568]: time="2024-09-04T17:37:06.674556101Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:37:06.676313 containerd[1568]: time="2024-09-04T17:37:06.676271921Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:37:06.678842 containerd[1568]: time="2024-09-04T17:37:06.678812820Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.64787644s" Sep 4 17:37:06.680152 containerd[1568]: time="2024-09-04T17:37:06.680103101Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.646857899s" Sep 4 17:37:06.682764 containerd[1568]: time="2024-09-04T17:37:06.682712188Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.651872019s" Sep 4 17:37:06.838216 containerd[1568]: time="2024-09-04T17:37:06.838051246Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:37:06.838471 containerd[1568]: time="2024-09-04T17:37:06.838220814Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:37:06.838471 containerd[1568]: time="2024-09-04T17:37:06.838236073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:37:06.838471 containerd[1568]: time="2024-09-04T17:37:06.838350598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:37:06.839057 containerd[1568]: time="2024-09-04T17:37:06.838927501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:37:06.839057 containerd[1568]: time="2024-09-04T17:37:06.839000668Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:37:06.839057 containerd[1568]: time="2024-09-04T17:37:06.839011619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:37:06.843765 containerd[1568]: time="2024-09-04T17:37:06.839286755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:37:06.843765 containerd[1568]: time="2024-09-04T17:37:06.840619146Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:37:06.843765 containerd[1568]: time="2024-09-04T17:37:06.840678798Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:37:06.843765 containerd[1568]: time="2024-09-04T17:37:06.840693976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:37:06.843765 containerd[1568]: time="2024-09-04T17:37:06.843245244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:37:06.898042 containerd[1568]: time="2024-09-04T17:37:06.897998785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:cacd2a680dbc59f99275412e0ba6e38b,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f7aa21d42331981ee702f509858b8dad503cc58982a17f26cb8c5775bc693d5\"" Sep 4 17:37:06.899384 kubelet[2319]: E0904 17:37:06.899361 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:06.904616 containerd[1568]: time="2024-09-04T17:37:06.904584565Z" level=info msg="CreateContainer within sandbox \"3f7aa21d42331981ee702f509858b8dad503cc58982a17f26cb8c5775bc693d5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 17:37:06.905094 containerd[1568]: time="2024-09-04T17:37:06.905076098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:79ac0df5fb8e93d80a6b2f0c5e542d5d,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b23d59a826bfb2f01c2b1ad24ae2dae7222b67a5d1273a2a89674238aae6fb3\"" Sep 4 17:37:06.906628 containerd[1568]: time="2024-09-04T17:37:06.906602382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f5bf8d52acd7337c82951a97b42c345d,Namespace:kube-system,Attempt:0,} returns sandbox id \"64aca983fe0ba2345f5b0638366301c2c83bd0bb6fc1371c08d914980e02535e\"" Sep 4 17:37:06.907361 kubelet[2319]: E0904 17:37:06.907336 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:06.908100 kubelet[2319]: E0904 17:37:06.908074 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:06.909601 containerd[1568]: time="2024-09-04T17:37:06.909574610Z" level=info msg="CreateContainer within sandbox \"1b23d59a826bfb2f01c2b1ad24ae2dae7222b67a5d1273a2a89674238aae6fb3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 17:37:06.911201 containerd[1568]: time="2024-09-04T17:37:06.911088552Z" level=info msg="CreateContainer within sandbox \"64aca983fe0ba2345f5b0638366301c2c83bd0bb6fc1371c08d914980e02535e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 17:37:07.002735 kubelet[2319]: E0904 17:37:07.002709 2319 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.18:6443: connect: connection refused" interval="3.2s" Sep 4 17:37:07.046762 containerd[1568]: time="2024-09-04T17:37:07.046658748Z" level=info msg="CreateContainer within sandbox \"3f7aa21d42331981ee702f509858b8dad503cc58982a17f26cb8c5775bc693d5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ccc031566b696a7ab511ab6ff102ae6d499acf9653175c9f53a89b7891cf9c34\"" Sep 4 17:37:07.047351 containerd[1568]: time="2024-09-04T17:37:07.047315440Z" level=info msg="StartContainer for \"ccc031566b696a7ab511ab6ff102ae6d499acf9653175c9f53a89b7891cf9c34\"" Sep 4 17:37:07.051697 containerd[1568]: time="2024-09-04T17:37:07.051658942Z" level=info msg="CreateContainer within sandbox 
\"1b23d59a826bfb2f01c2b1ad24ae2dae7222b67a5d1273a2a89674238aae6fb3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"11d6e106d3d94aec285727c2c5361fcfcbb6d003b147ea8b79f0c749a3cb1980\"" Sep 4 17:37:07.052848 containerd[1568]: time="2024-09-04T17:37:07.052808860Z" level=info msg="StartContainer for \"11d6e106d3d94aec285727c2c5361fcfcbb6d003b147ea8b79f0c749a3cb1980\"" Sep 4 17:37:07.060595 containerd[1568]: time="2024-09-04T17:37:07.060536413Z" level=info msg="CreateContainer within sandbox \"64aca983fe0ba2345f5b0638366301c2c83bd0bb6fc1371c08d914980e02535e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2cf40d3aa480c96e9e63ba11786e2c456e1ea808388ab1b3c102a884a5e19dca\"" Sep 4 17:37:07.061572 containerd[1568]: time="2024-09-04T17:37:07.061510541Z" level=info msg="StartContainer for \"2cf40d3aa480c96e9e63ba11786e2c456e1ea808388ab1b3c102a884a5e19dca\"" Sep 4 17:37:07.107957 kubelet[2319]: I0904 17:37:07.107914 2319 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Sep 4 17:37:07.108455 kubelet[2319]: E0904 17:37:07.108344 2319 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.18:6443/api/v1/nodes\": dial tcp 10.0.0.18:6443: connect: connection refused" node="localhost" Sep 4 17:37:07.416841 containerd[1568]: time="2024-09-04T17:37:07.415631921Z" level=info msg="StartContainer for \"2cf40d3aa480c96e9e63ba11786e2c456e1ea808388ab1b3c102a884a5e19dca\" returns successfully" Sep 4 17:37:07.416841 containerd[1568]: time="2024-09-04T17:37:07.415645597Z" level=info msg="StartContainer for \"ccc031566b696a7ab511ab6ff102ae6d499acf9653175c9f53a89b7891cf9c34\" returns successfully" Sep 4 17:37:07.416841 containerd[1568]: time="2024-09-04T17:37:07.415655946Z" level=info msg="StartContainer for \"11d6e106d3d94aec285727c2c5361fcfcbb6d003b147ea8b79f0c749a3cb1980\" returns successfully" Sep 4 17:37:08.037682 kubelet[2319]: E0904 17:37:08.033373 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:08.037682 kubelet[2319]: E0904 17:37:08.034953 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:08.037682 kubelet[2319]: E0904 17:37:08.037531 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:09.047816 kubelet[2319]: E0904 17:37:09.045176 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:09.047816 kubelet[2319]: E0904 17:37:09.045704 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:09.048615 kubelet[2319]: E0904 17:37:09.048465 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:09.693937 kubelet[2319]: E0904 17:37:09.693845 2319 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost.17f21b1e4108d7b8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.September, 4, 17, 37, 3, 990388664, time.Local), LastTimestamp:time.Date(2024, time.September, 4, 17, 37, 3, 990388664, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'namespaces "default" not found' (will not retry!) Sep 4 17:37:10.000838 kubelet[2319]: I0904 17:37:10.000631 2319 apiserver.go:52] "Watching apiserver" Sep 4 17:37:10.045414 kubelet[2319]: E0904 17:37:10.045372 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:10.045526 kubelet[2319]: E0904 17:37:10.045507 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:10.099433 kubelet[2319]: I0904 17:37:10.099394 2319 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 4 17:37:10.208034 kubelet[2319]: E0904 17:37:10.207708 2319 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 4 17:37:10.310781 kubelet[2319]: I0904 17:37:10.310617 2319 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Sep 4 17:37:10.314930 kubelet[2319]: I0904 17:37:10.314899 2319 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Sep 4 17:37:11.281027 kubelet[2319]: E0904 17:37:11.280953 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:11.709604 kubelet[2319]: E0904 17:37:11.709556 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:12.047586 kubelet[2319]: E0904 17:37:12.047215 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:12.047586 kubelet[2319]: E0904 17:37:12.047287 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:13.048688 systemd[1]: Reloading requested from client PID 2597 ('systemctl') (unit session-7.scope)... Sep 4 17:37:13.048713 systemd[1]: Reloading... 
Sep 4 17:37:13.131803 zram_generator::config[2637]: No configuration found. Sep 4 17:37:13.253081 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:37:13.335379 systemd[1]: Reloading finished in 286 ms. Sep 4 17:37:13.372644 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:37:13.395480 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 17:37:13.396055 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:37:13.410399 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:37:13.588991 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:37:13.594914 (kubelet)[2689]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:37:13.655933 kubelet[2689]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:37:13.655933 kubelet[2689]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:37:13.655933 kubelet[2689]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:37:13.656364 kubelet[2689]: I0904 17:37:13.656006 2689 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:37:13.658645 sudo[2703]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 4 17:37:13.659052 sudo[2703]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 4 17:37:13.662030 kubelet[2689]: I0904 17:37:13.661994 2689 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Sep 4 17:37:13.662030 kubelet[2689]: I0904 17:37:13.662027 2689 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:37:13.662281 kubelet[2689]: I0904 17:37:13.662261 2689 server.go:895] "Client rotation is on, will bootstrap in background" Sep 4 17:37:13.663964 kubelet[2689]: I0904 17:37:13.663942 2689 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 17:37:13.665819 kubelet[2689]: I0904 17:37:13.665168 2689 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:37:13.676918 kubelet[2689]: I0904 17:37:13.676866 2689 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 17:37:13.677665 kubelet[2689]: I0904 17:37:13.677647 2689 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:37:13.677967 kubelet[2689]: I0904 17:37:13.677931 2689 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:37:13.678061 kubelet[2689]: I0904 17:37:13.677977 2689 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:37:13.678061 kubelet[2689]: I0904 17:37:13.677991 2689 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:37:13.678101 kubelet[2689]: I0904 17:37:13.678061 2689 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:37:13.678457 kubelet[2689]: I0904 17:37:13.678201 2689 kubelet.go:393] "Attempting to sync node with API server" Sep 4 17:37:13.678779 kubelet[2689]: I0904 17:37:13.678662 2689 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:37:13.678876 kubelet[2689]: I0904 17:37:13.678856 2689 kubelet.go:309] "Adding apiserver pod source" Sep 4 17:37:13.678913 kubelet[2689]: I0904 17:37:13.678893 2689 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:37:13.680912 kubelet[2689]: I0904 17:37:13.680880 2689 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.20" apiVersion="v1" Sep 4 17:37:13.681741 kubelet[2689]: I0904 17:37:13.681715 2689 server.go:1232] "Started kubelet" Sep 4 17:37:13.686837 kubelet[2689]: I0904 17:37:13.686802 2689 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:37:13.691885 kubelet[2689]: I0904 17:37:13.691802 2689 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:37:13.692087 kubelet[2689]: I0904 17:37:13.686997 2689 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:37:13.692474 kubelet[2689]: I0904 17:37:13.692455 2689 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Sep 4 17:37:13.693138 kubelet[2689]: I0904 17:37:13.693109 2689 reconciler_new.go:29] "Reconciler: start to sync state" Sep 4 17:37:13.693302 kubelet[2689]: E0904 
17:37:13.693285 2689 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 17:37:13.693494 kubelet[2689]: I0904 17:37:13.687041 2689 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Sep 4 17:37:13.693849 kubelet[2689]: I0904 17:37:13.693687 2689 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:37:13.693849 kubelet[2689]: E0904 17:37:13.687894 2689 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Sep 4 17:37:13.693849 kubelet[2689]: E0904 17:37:13.693729 2689 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:37:13.693849 kubelet[2689]: I0904 17:37:13.693736 2689 server.go:462] "Adding debug handlers to kubelet server" Sep 4 17:37:13.711773 kubelet[2689]: I0904 17:37:13.709968 2689 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:37:13.711773 kubelet[2689]: I0904 17:37:13.711232 2689 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 17:37:13.711773 kubelet[2689]: I0904 17:37:13.711256 2689 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:37:13.711773 kubelet[2689]: I0904 17:37:13.711275 2689 kubelet.go:2303] "Starting kubelet main sync loop" Sep 4 17:37:13.711773 kubelet[2689]: E0904 17:37:13.711330 2689 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:37:13.798382 kubelet[2689]: I0904 17:37:13.798338 2689 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:37:13.798382 kubelet[2689]: I0904 17:37:13.798370 2689 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:37:13.798382 kubelet[2689]: I0904 17:37:13.798391 2689 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:37:13.798648 kubelet[2689]: I0904 17:37:13.798559 2689 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 17:37:13.798648 kubelet[2689]: I0904 17:37:13.798583 2689 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 17:37:13.798648 kubelet[2689]: I0904 17:37:13.798591 2689 policy_none.go:49] "None policy: Start" Sep 4 17:37:13.798793 kubelet[2689]: I0904 17:37:13.798771 2689 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Sep 4 17:37:13.801417 kubelet[2689]: I0904 17:37:13.801397 2689 memory_manager.go:169] "Starting memorymanager" policy="None" Sep 4 17:37:13.801516 kubelet[2689]: I0904 17:37:13.801501 2689 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:37:13.802099 kubelet[2689]: I0904 17:37:13.802082 2689 state_mem.go:75] "Updated machine memory state" Sep 4 17:37:13.805567 kubelet[2689]: I0904 17:37:13.805529 2689 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Sep 4 17:37:13.805999 kubelet[2689]: I0904 17:37:13.805893 2689 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Sep 4 17:37:13.809135 kubelet[2689]: I0904 17:37:13.808733 2689 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:37:13.810189 kubelet[2689]: I0904 17:37:13.810157 
2689 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:37:13.814819 kubelet[2689]: I0904 17:37:13.812460 2689 topology_manager.go:215] "Topology Admit Handler" podUID="79ac0df5fb8e93d80a6b2f0c5e542d5d" podNamespace="kube-system" podName="kube-apiserver-localhost" Sep 4 17:37:13.814819 kubelet[2689]: I0904 17:37:13.812566 2689 topology_manager.go:215] "Topology Admit Handler" podUID="f5bf8d52acd7337c82951a97b42c345d" podNamespace="kube-system" podName="kube-controller-manager-localhost" Sep 4 17:37:13.814819 kubelet[2689]: I0904 17:37:13.812609 2689 topology_manager.go:215] "Topology Admit Handler" podUID="cacd2a680dbc59f99275412e0ba6e38b" podNamespace="kube-system" podName="kube-scheduler-localhost" Sep 4 17:37:13.824668 kubelet[2689]: E0904 17:37:13.824290 2689 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 4 17:37:13.824841 kubelet[2689]: E0904 17:37:13.824790 2689 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 4 17:37:13.994128 kubelet[2689]: I0904 17:37:13.994066 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:37:13.994128 kubelet[2689]: I0904 17:37:13.994123 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:37:13.994128 kubelet[2689]: I0904 17:37:13.994149 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:37:13.994389 kubelet[2689]: I0904 17:37:13.994171 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/79ac0df5fb8e93d80a6b2f0c5e542d5d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"79ac0df5fb8e93d80a6b2f0c5e542d5d\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:37:13.994389 kubelet[2689]: I0904 17:37:13.994305 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/79ac0df5fb8e93d80a6b2f0c5e542d5d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"79ac0df5fb8e93d80a6b2f0c5e542d5d\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:37:13.994389 kubelet[2689]: I0904 17:37:13.994371 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 4 17:37:13.994483 kubelet[2689]: I0904 17:37:13.994397 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:37:13.994483 kubelet[2689]: I0904 17:37:13.994437 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cacd2a680dbc59f99275412e0ba6e38b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"cacd2a680dbc59f99275412e0ba6e38b\") " pod="kube-system/kube-scheduler-localhost" Sep 4 17:37:13.994483 kubelet[2689]: I0904 17:37:13.994463 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/79ac0df5fb8e93d80a6b2f0c5e542d5d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"79ac0df5fb8e93d80a6b2f0c5e542d5d\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:37:14.122529 kubelet[2689]: E0904 17:37:14.122471 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:14.125429 kubelet[2689]: E0904 17:37:14.125398 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:14.126167 kubelet[2689]: E0904 17:37:14.126013 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:14.176121 sudo[2703]: pam_unix(sudo:session): session closed for user root Sep 4 17:37:14.679677 kubelet[2689]: I0904 17:37:14.679607 2689 apiserver.go:52] "Watching apiserver" Sep 4 17:37:14.693094 kubelet[2689]: I0904 17:37:14.693029 2689 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 4 17:37:14.732048 kubelet[2689]: E0904 17:37:14.731800 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:14.732048 kubelet[2689]: E0904 17:37:14.731947 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:14.736371 kubelet[2689]: E0904 17:37:14.736324 2689 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 4 17:37:14.736371 kubelet[2689]: E0904 17:37:14.736868 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:14.754857 kubelet[2689]: I0904 17:37:14.754792 2689 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.754723584 podCreationTimestamp="2024-09-04 17:37:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2024-09-04 17:37:14.748357042 +0000 UTC m=+1.148573725" watchObservedRunningTime="2024-09-04 17:37:14.754723584 +0000 UTC m=+1.154940247" Sep 4 17:37:14.755029 kubelet[2689]: I0904 17:37:14.754889 2689 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.754875224 podCreationTimestamp="2024-09-04 17:37:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:37:14.754545243 +0000 UTC m=+1.154761916" watchObservedRunningTime="2024-09-04 17:37:14.754875224 +0000 UTC m=+1.155091897" Sep 4 17:37:14.759595 kubelet[2689]: I0904 17:37:14.759561 2689 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.75951537 podCreationTimestamp="2024-09-04 17:37:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:37:14.759260712 +0000 UTC m=+1.159477385" watchObservedRunningTime="2024-09-04 17:37:14.75951537 +0000 UTC m=+1.159732043" Sep 4 17:37:15.537274 sudo[1781]: pam_unix(sudo:session): session closed for user root Sep 4 17:37:15.539596 sshd[1775]: pam_unix(sshd:session): session closed for user core Sep 4 17:37:15.545022 systemd[1]: sshd@6-10.0.0.18:22-10.0.0.1:47976.service: Deactivated successfully. Sep 4 17:37:15.548284 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 17:37:15.549252 systemd-logind[1549]: Session 7 logged out. Waiting for processes to exit. Sep 4 17:37:15.550208 systemd-logind[1549]: Removed session 7. Sep 4 17:37:15.732871 kubelet[2689]: E0904 17:37:15.732818 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:15.732871 kubelet[2689]: E0904 17:37:15.732885 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:21.612890 kubelet[2689]: E0904 17:37:21.612837 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:21.742287 kubelet[2689]: E0904 17:37:21.742258 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:22.245848 kubelet[2689]: E0904 17:37:22.245809 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:22.743961 kubelet[2689]: E0904 17:37:22.743920 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:24.196907 update_engine[1557]: I0904 17:37:24.196818 1557 update_attempter.cc:509] Updating boot flags... 
Sep 4 17:37:24.223813 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2775) Sep 4 17:37:24.260939 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2775) Sep 4 17:37:25.285278 kubelet[2689]: E0904 17:37:25.285237 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:26.318462 kubelet[2689]: I0904 17:37:26.318414 2689 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 17:37:26.319056 containerd[1568]: time="2024-09-04T17:37:26.318845216Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 4 17:37:26.319318 kubelet[2689]: I0904 17:37:26.319048 2689 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 17:37:27.026429 kubelet[2689]: I0904 17:37:27.026356 2689 topology_manager.go:215] "Topology Admit Handler" podUID="26c0f59f-4184-4fb6-8a4c-cb4f9f354979" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-bfvk4" Sep 4 17:37:27.070070 kubelet[2689]: I0904 17:37:27.069451 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/26c0f59f-4184-4fb6-8a4c-cb4f9f354979-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-bfvk4\" (UID: \"26c0f59f-4184-4fb6-8a4c-cb4f9f354979\") " pod="kube-system/cilium-operator-6bc8ccdb58-bfvk4" Sep 4 17:37:27.070070 kubelet[2689]: I0904 17:37:27.069532 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fzlh\" (UniqueName: \"kubernetes.io/projected/26c0f59f-4184-4fb6-8a4c-cb4f9f354979-kube-api-access-9fzlh\") pod \"cilium-operator-6bc8ccdb58-bfvk4\" (UID: \"26c0f59f-4184-4fb6-8a4c-cb4f9f354979\") " pod="kube-system/cilium-operator-6bc8ccdb58-bfvk4" Sep 4 17:37:27.103084 kubelet[2689]: I0904 17:37:27.103024 2689 topology_manager.go:215] "Topology Admit Handler" podUID="1f8f353e-f80f-4641-80a6-5852d06ebaf3" podNamespace="kube-system" podName="kube-proxy-l2zp8" Sep 4 17:37:27.105689 kubelet[2689]: I0904 17:37:27.105579 2689 topology_manager.go:215] "Topology Admit Handler" podUID="7f6d81b0-6e69-4402-ae88-e5a020af4b7c" podNamespace="kube-system" podName="cilium-jgkpt" Sep 4 17:37:27.170188 kubelet[2689]: I0904 17:37:27.170127 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1f8f353e-f80f-4641-80a6-5852d06ebaf3-xtables-lock\") pod \"kube-proxy-l2zp8\" (UID: \"1f8f353e-f80f-4641-80a6-5852d06ebaf3\") " pod="kube-system/kube-proxy-l2zp8" Sep 4 17:37:27.170362 kubelet[2689]: I0904 17:37:27.170233 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-lib-modules\") pod \"cilium-jgkpt\" (UID: \"7f6d81b0-6e69-4402-ae88-e5a020af4b7c\") " pod="kube-system/cilium-jgkpt" Sep 4 17:37:27.170362 kubelet[2689]: I0904 17:37:27.170267 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-cilium-config-path\") pod \"cilium-jgkpt\" (UID: 
\"7f6d81b0-6e69-4402-ae88-e5a020af4b7c\") " pod="kube-system/cilium-jgkpt" Sep 4 17:37:27.170362 kubelet[2689]: I0904 17:37:27.170293 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-hostproc\") pod \"cilium-jgkpt\" (UID: \"7f6d81b0-6e69-4402-ae88-e5a020af4b7c\") " pod="kube-system/cilium-jgkpt" Sep 4 17:37:27.170362 kubelet[2689]: I0904 17:37:27.170316 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1f8f353e-f80f-4641-80a6-5852d06ebaf3-kube-proxy\") pod \"kube-proxy-l2zp8\" (UID: \"1f8f353e-f80f-4641-80a6-5852d06ebaf3\") " pod="kube-system/kube-proxy-l2zp8" Sep 4 17:37:27.170362 kubelet[2689]: I0904 17:37:27.170357 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-cilium-cgroup\") pod \"cilium-jgkpt\" (UID: \"7f6d81b0-6e69-4402-ae88-e5a020af4b7c\") " pod="kube-system/cilium-jgkpt" Sep 4 17:37:27.170518 kubelet[2689]: I0904 17:37:27.170420 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-xtables-lock\") pod \"cilium-jgkpt\" (UID: \"7f6d81b0-6e69-4402-ae88-e5a020af4b7c\") " pod="kube-system/cilium-jgkpt" Sep 4 17:37:27.170518 kubelet[2689]: I0904 17:37:27.170461 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-host-proc-sys-kernel\") pod \"cilium-jgkpt\" (UID: \"7f6d81b0-6e69-4402-ae88-e5a020af4b7c\") " pod="kube-system/cilium-jgkpt" Sep 4 17:37:27.170611 kubelet[2689]: I0904 17:37:27.170567 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlxjv\" (UniqueName: \"kubernetes.io/projected/1f8f353e-f80f-4641-80a6-5852d06ebaf3-kube-api-access-wlxjv\") pod \"kube-proxy-l2zp8\" (UID: \"1f8f353e-f80f-4641-80a6-5852d06ebaf3\") " pod="kube-system/kube-proxy-l2zp8" Sep 4 17:37:27.170611 kubelet[2689]: I0904 17:37:27.170606 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-etc-cni-netd\") pod \"cilium-jgkpt\" (UID: \"7f6d81b0-6e69-4402-ae88-e5a020af4b7c\") " pod="kube-system/cilium-jgkpt" Sep 4 17:37:27.170687 kubelet[2689]: I0904 17:37:27.170631 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-host-proc-sys-net\") pod \"cilium-jgkpt\" (UID: \"7f6d81b0-6e69-4402-ae88-e5a020af4b7c\") " pod="kube-system/cilium-jgkpt" Sep 4 17:37:27.170687 kubelet[2689]: I0904 17:37:27.170658 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-cni-path\") pod \"cilium-jgkpt\" (UID: \"7f6d81b0-6e69-4402-ae88-e5a020af4b7c\") " pod="kube-system/cilium-jgkpt" Sep 4 17:37:27.170767 kubelet[2689]: I0904 17:37:27.170704 2689 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-clustermesh-secrets\") pod \"cilium-jgkpt\" (UID: \"7f6d81b0-6e69-4402-ae88-e5a020af4b7c\") " pod="kube-system/cilium-jgkpt" Sep 4 17:37:27.170767 kubelet[2689]: I0904 17:37:27.170732 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-hubble-tls\") pod \"cilium-jgkpt\" (UID: \"7f6d81b0-6e69-4402-ae88-e5a020af4b7c\") " pod="kube-system/cilium-jgkpt" Sep 4 17:37:27.170811 kubelet[2689]: I0904 17:37:27.170790 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z96wd\" (UniqueName: \"kubernetes.io/projected/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-kube-api-access-z96wd\") pod \"cilium-jgkpt\" (UID: \"7f6d81b0-6e69-4402-ae88-e5a020af4b7c\") " pod="kube-system/cilium-jgkpt" Sep 4 17:37:27.170877 kubelet[2689]: I0904 17:37:27.170854 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-cilium-run\") pod \"cilium-jgkpt\" (UID: \"7f6d81b0-6e69-4402-ae88-e5a020af4b7c\") " pod="kube-system/cilium-jgkpt" Sep 4 17:37:27.171013 kubelet[2689]: I0904 17:37:27.170975 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-bpf-maps\") pod \"cilium-jgkpt\" (UID: \"7f6d81b0-6e69-4402-ae88-e5a020af4b7c\") " pod="kube-system/cilium-jgkpt" Sep 4 17:37:27.171066 kubelet[2689]: I0904 17:37:27.171024 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1f8f353e-f80f-4641-80a6-5852d06ebaf3-lib-modules\") pod \"kube-proxy-l2zp8\" (UID: \"1f8f353e-f80f-4641-80a6-5852d06ebaf3\") " pod="kube-system/kube-proxy-l2zp8" Sep 4 17:37:27.354395 kubelet[2689]: E0904 17:37:27.353891 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:27.354984 containerd[1568]: time="2024-09-04T17:37:27.354774431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-bfvk4,Uid:26c0f59f-4184-4fb6-8a4c-cb4f9f354979,Namespace:kube-system,Attempt:0,}" Sep 4 17:37:27.384889 containerd[1568]: time="2024-09-04T17:37:27.384707985Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:37:27.384889 containerd[1568]: time="2024-09-04T17:37:27.384835366Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:37:27.384889 containerd[1568]: time="2024-09-04T17:37:27.384850284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:37:27.385135 containerd[1568]: time="2024-09-04T17:37:27.384984718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:37:27.414339 kubelet[2689]: E0904 17:37:27.414129 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:27.416841 containerd[1568]: time="2024-09-04T17:37:27.415496697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l2zp8,Uid:1f8f353e-f80f-4641-80a6-5852d06ebaf3,Namespace:kube-system,Attempt:0,}" Sep 4 17:37:27.417081 kubelet[2689]: E0904 17:37:27.416772 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:27.417789 containerd[1568]: time="2024-09-04T17:37:27.417466136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jgkpt,Uid:7f6d81b0-6e69-4402-ae88-e5a020af4b7c,Namespace:kube-system,Attempt:0,}" Sep 4 17:37:27.443152 containerd[1568]: time="2024-09-04T17:37:27.443071946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-bfvk4,Uid:26c0f59f-4184-4fb6-8a4c-cb4f9f354979,Namespace:kube-system,Attempt:0,} returns sandbox id \"f5fdebac422243c4d0bd978d3b3a575b55eb3ede5a0a3fc0a7889f7451c5e531\"" Sep 4 17:37:27.443991 kubelet[2689]: E0904 17:37:27.443964 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:27.445148 containerd[1568]: time="2024-09-04T17:37:27.445123560Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 4 17:37:27.496930 containerd[1568]: time="2024-09-04T17:37:27.496795811Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:37:27.496930 containerd[1568]: time="2024-09-04T17:37:27.496885642Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:37:27.497093 containerd[1568]: time="2024-09-04T17:37:27.496911310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:37:27.497093 containerd[1568]: time="2024-09-04T17:37:27.497060292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:37:27.505444 containerd[1568]: time="2024-09-04T17:37:27.505292108Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:37:27.505444 containerd[1568]: time="2024-09-04T17:37:27.505362110Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:37:27.505444 containerd[1568]: time="2024-09-04T17:37:27.505377589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:37:27.505646 containerd[1568]: time="2024-09-04T17:37:27.505508006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:37:27.549801 containerd[1568]: time="2024-09-04T17:37:27.549710625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l2zp8,Uid:1f8f353e-f80f-4641-80a6-5852d06ebaf3,Namespace:kube-system,Attempt:0,} returns sandbox id \"bca08d2fcfe455369ee77d988e858e95ecd647a4a2f98ccf80f3b4e4131efd04\"" Sep 4 17:37:27.551265 containerd[1568]: time="2024-09-04T17:37:27.551226856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jgkpt,Uid:7f6d81b0-6e69-4402-ae88-e5a020af4b7c,Namespace:kube-system,Attempt:0,} returns sandbox id \"166f68acd81f0aef154cb5227d37058852fca3442bc4ae56be34d2fc0b6fe5f3\"" Sep 4 17:37:27.552477 kubelet[2689]: E0904 17:37:27.551685 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:27.552477 kubelet[2689]: E0904 17:37:27.552314 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:27.554709 containerd[1568]: time="2024-09-04T17:37:27.554604220Z" level=info msg="CreateContainer within sandbox \"bca08d2fcfe455369ee77d988e858e95ecd647a4a2f98ccf80f3b4e4131efd04\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 17:37:27.575952 containerd[1568]: time="2024-09-04T17:37:27.575883368Z" level=info msg="CreateContainer within sandbox \"bca08d2fcfe455369ee77d988e858e95ecd647a4a2f98ccf80f3b4e4131efd04\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f37f0c2b8d2f365a67a0efcacf8b5080ae5438a36d7df26dbcb55231074dc214\"" Sep 4 17:37:27.576667 containerd[1568]: time="2024-09-04T17:37:27.576629862Z" level=info msg="StartContainer for \"f37f0c2b8d2f365a67a0efcacf8b5080ae5438a36d7df26dbcb55231074dc214\"" Sep 4 17:37:27.650172 containerd[1568]: time="2024-09-04T17:37:27.649467597Z" level=info msg="StartContainer for \"f37f0c2b8d2f365a67a0efcacf8b5080ae5438a36d7df26dbcb55231074dc214\" returns successfully" Sep 4 17:37:27.755229 kubelet[2689]: E0904 17:37:27.755195 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:27.763006 kubelet[2689]: I0904 17:37:27.762948 2689 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-l2zp8" podStartSLOduration=0.762901531 podCreationTimestamp="2024-09-04 17:37:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:37:27.762184745 +0000 UTC m=+14.162401418" watchObservedRunningTime="2024-09-04 17:37:27.762901531 +0000 UTC m=+14.163118204" Sep 4 17:37:28.874213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount786581916.mount: Deactivated successfully. 
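The recurring "Nameserver limits exceeded" errors in this span come from the kubelet's resolver handling: the node's resolv.conf evidently lists more nameservers than the kubelet will pass through to a pod, so it keeps the first entries (the applied line is 1.1.1.1 1.0.0.1 8.8.8.8) and drops the rest. A minimal Go sketch of that truncation, assuming a cap of three and an extra 8.8.4.4 entry purely for illustration; neither the cap constant nor the sample file below is read from this host:

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // maxNameservers mirrors the cap the warning implies; three servers survive
    // in the "applied nameserver line" seen in the journal.
    const maxNameservers = 3

    func main() {
        // Hypothetical resolv.conf with one nameserver too many.
        resolvConf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"

        var servers []string
        sc := bufio.NewScanner(strings.NewReader(resolvConf))
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }

        if len(servers) > maxNameservers {
            fmt.Printf("nameserver limit exceeded, omitting %d server(s)\n", len(servers)-maxNameservers)
            servers = servers[:maxNameservers]
        }
        fmt.Println("applied nameserver line:", strings.Join(servers, " "))
    }

Silencing the warning for real usually means trimming the node's own resolver configuration rather than changing anything on the kubelet side.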
Sep 4 17:37:29.668542 containerd[1568]: time="2024-09-04T17:37:29.668468872Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:37:29.670865 containerd[1568]: time="2024-09-04T17:37:29.670819378Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907161" Sep 4 17:37:29.677150 containerd[1568]: time="2024-09-04T17:37:29.677109123Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:37:29.678724 containerd[1568]: time="2024-09-04T17:37:29.678670226Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.233518473s" Sep 4 17:37:29.678724 containerd[1568]: time="2024-09-04T17:37:29.678705793Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 4 17:37:29.679641 containerd[1568]: time="2024-09-04T17:37:29.679422359Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 4 17:37:29.680965 containerd[1568]: time="2024-09-04T17:37:29.680922575Z" level=info msg="CreateContainer within sandbox \"f5fdebac422243c4d0bd978d3b3a575b55eb3ede5a0a3fc0a7889f7451c5e531\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 4 17:37:29.767486 containerd[1568]: time="2024-09-04T17:37:29.767428054Z" level=info msg="CreateContainer within sandbox \"f5fdebac422243c4d0bd978d3b3a575b55eb3ede5a0a3fc0a7889f7451c5e531\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c8999b48e5907b1763320e9471d8cd8b8165e879ee3456af0099f5db4b169c44\"" Sep 4 17:37:29.768198 containerd[1568]: time="2024-09-04T17:37:29.768002380Z" level=info msg="StartContainer for \"c8999b48e5907b1763320e9471d8cd8b8165e879ee3456af0099f5db4b169c44\"" Sep 4 17:37:29.851061 containerd[1568]: time="2024-09-04T17:37:29.851000435Z" level=info msg="StartContainer for \"c8999b48e5907b1763320e9471d8cd8b8165e879ee3456af0099f5db4b169c44\" returns successfully" Sep 4 17:37:30.803086 kubelet[2689]: E0904 17:37:30.803045 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:31.947904 kubelet[2689]: E0904 17:37:31.947851 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:33.505879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount872882866.mount: Deactivated successfully. 
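The operator image pull above is reported as taking 2.233518473s. That figure is measured inside containerd, but it can be sanity-checked from the two timestamps in the journal (PullImage at 17:37:27.445123560Z, Pulled at 17:37:29.678670226Z); a small sketch of that arithmetic:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied from the PullImage / Pulled entries above.
        start, _ := time.Parse(time.RFC3339Nano, "2024-09-04T17:37:27.445123560Z")
        done, _ := time.Parse(time.RFC3339Nano, "2024-09-04T17:37:29.678670226Z")

        // Slightly longer than the reported 2.233518473s, because containerd
        // stamps the duration before the log line itself is emitted.
        fmt.Println("elapsed between log entries:", done.Sub(start))
    }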
Sep 4 17:37:35.750725 containerd[1568]: time="2024-09-04T17:37:35.750641115Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:37:35.751431 containerd[1568]: time="2024-09-04T17:37:35.751352688Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735327" Sep 4 17:37:35.752545 containerd[1568]: time="2024-09-04T17:37:35.752505713Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:37:35.754293 containerd[1568]: time="2024-09-04T17:37:35.754247919Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 6.074786056s" Sep 4 17:37:35.754359 containerd[1568]: time="2024-09-04T17:37:35.754290610Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 4 17:37:35.756512 containerd[1568]: time="2024-09-04T17:37:35.756465904Z" level=info msg="CreateContainer within sandbox \"166f68acd81f0aef154cb5227d37058852fca3442bc4ae56be34d2fc0b6fe5f3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 4 17:37:35.788979 containerd[1568]: time="2024-09-04T17:37:35.788901529Z" level=info msg="CreateContainer within sandbox \"166f68acd81f0aef154cb5227d37058852fca3442bc4ae56be34d2fc0b6fe5f3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"60cd6ae78cde539c3b10aa0c5117d21de22e4131831d83d1a99adc9563d53cd5\"" Sep 4 17:37:35.789724 containerd[1568]: time="2024-09-04T17:37:35.789604085Z" level=info msg="StartContainer for \"60cd6ae78cde539c3b10aa0c5117d21de22e4131831d83d1a99adc9563d53cd5\"" Sep 4 17:37:35.851173 containerd[1568]: time="2024-09-04T17:37:35.851121796Z" level=info msg="StartContainer for \"60cd6ae78cde539c3b10aa0c5117d21de22e4131831d83d1a99adc9563d53cd5\" returns successfully" Sep 4 17:37:36.501300 containerd[1568]: time="2024-09-04T17:37:36.499594537Z" level=info msg="shim disconnected" id=60cd6ae78cde539c3b10aa0c5117d21de22e4131831d83d1a99adc9563d53cd5 namespace=k8s.io Sep 4 17:37:36.501300 containerd[1568]: time="2024-09-04T17:37:36.501278483Z" level=warning msg="cleaning up after shim disconnected" id=60cd6ae78cde539c3b10aa0c5117d21de22e4131831d83d1a99adc9563d53cd5 namespace=k8s.io Sep 4 17:37:36.501300 containerd[1568]: time="2024-09-04T17:37:36.501297920Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:37:36.766483 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-60cd6ae78cde539c3b10aa0c5117d21de22e4131831d83d1a99adc9563d53cd5-rootfs.mount: Deactivated successfully. 
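Earlier in this span the kubelet reported updating the runtime config with pod CIDR 192.168.0.0/24; that /24 is the per-node range pod addresses are expected to come from (exactly which component allocates out of it depends on the CNI's IPAM mode). A standard-library sketch of what a /24 provides; the address math is generic and nothing here is read from the node:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // Pod CIDR reported by the kubelet above.
        prefix := netip.MustParsePrefix("192.168.0.0/24")

        total := 1 << (32 - prefix.Bits()) // 256 addresses in a /24
        fmt.Printf("pod CIDR %s: %d addresses\n", prefix, total)

        // First few candidate pod addresses, skipping the network address itself.
        addr := prefix.Addr().Next()
        for i := 0; i < 3; i++ {
            fmt.Println("candidate pod IP:", addr)
            addr = addr.Next()
        }
    }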
Sep 4 17:37:36.812511 kubelet[2689]: E0904 17:37:36.812326 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:36.814671 containerd[1568]: time="2024-09-04T17:37:36.814633961Z" level=info msg="CreateContainer within sandbox \"166f68acd81f0aef154cb5227d37058852fca3442bc4ae56be34d2fc0b6fe5f3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 4 17:37:36.827511 kubelet[2689]: I0904 17:37:36.827327 2689 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-bfvk4" podStartSLOduration=8.59275769 podCreationTimestamp="2024-09-04 17:37:26 +0000 UTC" firstStartedPulling="2024-09-04 17:37:27.444585151 +0000 UTC m=+13.844801824" lastFinishedPulling="2024-09-04 17:37:29.679098826 +0000 UTC m=+16.079315499" observedRunningTime="2024-09-04 17:37:30.825992156 +0000 UTC m=+17.226208829" watchObservedRunningTime="2024-09-04 17:37:36.827271365 +0000 UTC m=+23.227488028" Sep 4 17:37:36.830903 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2135769912.mount: Deactivated successfully. Sep 4 17:37:36.832768 containerd[1568]: time="2024-09-04T17:37:36.832694794Z" level=info msg="CreateContainer within sandbox \"166f68acd81f0aef154cb5227d37058852fca3442bc4ae56be34d2fc0b6fe5f3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"20988220539f5f5b6b69b6c63794568a73bac724c85ceb712e0680dc08466003\"" Sep 4 17:37:36.833464 containerd[1568]: time="2024-09-04T17:37:36.833433096Z" level=info msg="StartContainer for \"20988220539f5f5b6b69b6c63794568a73bac724c85ceb712e0680dc08466003\"" Sep 4 17:37:36.892662 containerd[1568]: time="2024-09-04T17:37:36.892609625Z" level=info msg="StartContainer for \"20988220539f5f5b6b69b6c63794568a73bac724c85ceb712e0680dc08466003\" returns successfully" Sep 4 17:37:36.904797 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 17:37:36.905147 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:37:36.905229 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:37:36.915507 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:37:36.932875 containerd[1568]: time="2024-09-04T17:37:36.932805856Z" level=info msg="shim disconnected" id=20988220539f5f5b6b69b6c63794568a73bac724c85ceb712e0680dc08466003 namespace=k8s.io Sep 4 17:37:36.932875 containerd[1568]: time="2024-09-04T17:37:36.932874005Z" level=warning msg="cleaning up after shim disconnected" id=20988220539f5f5b6b69b6c63794568a73bac724c85ceb712e0680dc08466003 namespace=k8s.io Sep 4 17:37:36.933152 containerd[1568]: time="2024-09-04T17:37:36.932882481Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:37:36.933449 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:37:37.767529 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-20988220539f5f5b6b69b6c63794568a73bac724c85ceb712e0680dc08466003-rootfs.mount: Deactivated successfully. 
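The apply-sysctl-overwrites step above adjusts kernel parameters, and systemd reacts by stopping and re-running systemd-sysctl. Kernel sysctls are ultimately just files under /proc/sys, which is worth remembering when debugging this kind of interplay; a small illustrative reader (the chosen key is an example, not one the log says was touched):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // net.ipv4.ip_forward maps to /proc/sys/net/ipv4/ip_forward.
        key := "net.ipv4.ip_forward"
        path := "/proc/sys/" + strings.ReplaceAll(key, ".", "/")

        data, err := os.ReadFile(path)
        if err != nil {
            fmt.Fprintln(os.Stderr, "read failed:", err)
            os.Exit(1)
        }
        fmt.Printf("%s = %s", key, data) // the value already ends in a newline
    }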
Sep 4 17:37:37.817329 kubelet[2689]: E0904 17:37:37.817241 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:37.823569 containerd[1568]: time="2024-09-04T17:37:37.823173189Z" level=info msg="CreateContainer within sandbox \"166f68acd81f0aef154cb5227d37058852fca3442bc4ae56be34d2fc0b6fe5f3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 4 17:37:37.984551 containerd[1568]: time="2024-09-04T17:37:37.984499943Z" level=info msg="CreateContainer within sandbox \"166f68acd81f0aef154cb5227d37058852fca3442bc4ae56be34d2fc0b6fe5f3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"29ebb8c5db5567de3e50034ab6330ac579a662f112ba9e3d4cf2afa21be8621e\"" Sep 4 17:37:37.985236 containerd[1568]: time="2024-09-04T17:37:37.985195074Z" level=info msg="StartContainer for \"29ebb8c5db5567de3e50034ab6330ac579a662f112ba9e3d4cf2afa21be8621e\"" Sep 4 17:37:38.057155 containerd[1568]: time="2024-09-04T17:37:38.057008794Z" level=info msg="StartContainer for \"29ebb8c5db5567de3e50034ab6330ac579a662f112ba9e3d4cf2afa21be8621e\" returns successfully" Sep 4 17:37:38.088165 containerd[1568]: time="2024-09-04T17:37:38.088075262Z" level=info msg="shim disconnected" id=29ebb8c5db5567de3e50034ab6330ac579a662f112ba9e3d4cf2afa21be8621e namespace=k8s.io Sep 4 17:37:38.088165 containerd[1568]: time="2024-09-04T17:37:38.088139312Z" level=warning msg="cleaning up after shim disconnected" id=29ebb8c5db5567de3e50034ab6330ac579a662f112ba9e3d4cf2afa21be8621e namespace=k8s.io Sep 4 17:37:38.088165 containerd[1568]: time="2024-09-04T17:37:38.088153029Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:37:38.767475 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29ebb8c5db5567de3e50034ab6330ac579a662f112ba9e3d4cf2afa21be8621e-rootfs.mount: Deactivated successfully. Sep 4 17:37:38.821021 kubelet[2689]: E0904 17:37:38.820967 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:38.823154 containerd[1568]: time="2024-09-04T17:37:38.823105067Z" level=info msg="CreateContainer within sandbox \"166f68acd81f0aef154cb5227d37058852fca3442bc4ae56be34d2fc0b6fe5f3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 4 17:37:39.128052 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3062287057.mount: Deactivated successfully. 
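The mount-bpf-fs step above makes sure the BPF filesystem is available for the agent's eBPF maps; conceptually it is equivalent to running "mount -t bpf bpffs /sys/fs/bpf". A hedged, root-only sketch of that single call, not Cilium's actual init code (the real container does more checking, e.g. whether bpffs is already mounted):

    package main

    import (
        "fmt"
        "os"
        "syscall"
    )

    func main() {
        // Equivalent of: mount -t bpf bpffs /sys/fs/bpf (needs CAP_SYS_ADMIN;
        // fails with EBUSY if something is already mounted there).
        if err := syscall.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil {
            fmt.Fprintln(os.Stderr, "mounting bpffs failed:", err)
            os.Exit(1)
        }
        fmt.Println("bpffs mounted at /sys/fs/bpf")
    }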
Sep 4 17:37:39.135844 containerd[1568]: time="2024-09-04T17:37:39.135783089Z" level=info msg="CreateContainer within sandbox \"166f68acd81f0aef154cb5227d37058852fca3442bc4ae56be34d2fc0b6fe5f3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0cd5d18e5548ed1c745d0efbf8493ad37a3bba857c948e9bdc680f1eecb2b143\"" Sep 4 17:37:39.136980 containerd[1568]: time="2024-09-04T17:37:39.136427534Z" level=info msg="StartContainer for \"0cd5d18e5548ed1c745d0efbf8493ad37a3bba857c948e9bdc680f1eecb2b143\"" Sep 4 17:37:39.202646 containerd[1568]: time="2024-09-04T17:37:39.202575402Z" level=info msg="StartContainer for \"0cd5d18e5548ed1c745d0efbf8493ad37a3bba857c948e9bdc680f1eecb2b143\" returns successfully" Sep 4 17:37:39.227971 containerd[1568]: time="2024-09-04T17:37:39.227886609Z" level=info msg="shim disconnected" id=0cd5d18e5548ed1c745d0efbf8493ad37a3bba857c948e9bdc680f1eecb2b143 namespace=k8s.io Sep 4 17:37:39.227971 containerd[1568]: time="2024-09-04T17:37:39.227950319Z" level=warning msg="cleaning up after shim disconnected" id=0cd5d18e5548ed1c745d0efbf8493ad37a3bba857c948e9bdc680f1eecb2b143 namespace=k8s.io Sep 4 17:37:39.227971 containerd[1568]: time="2024-09-04T17:37:39.227959897Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:37:39.768997 systemd[1]: Started sshd@7-10.0.0.18:22-10.0.0.1:35882.service - OpenSSH per-connection server daemon (10.0.0.1:35882). Sep 4 17:37:39.771551 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0cd5d18e5548ed1c745d0efbf8493ad37a3bba857c948e9bdc680f1eecb2b143-rootfs.mount: Deactivated successfully. Sep 4 17:37:39.805706 sshd[3391]: Accepted publickey for core from 10.0.0.1 port 35882 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:37:39.807813 sshd[3391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:37:39.812204 systemd-logind[1549]: New session 8 of user core. Sep 4 17:37:39.820032 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 4 17:37:39.824976 kubelet[2689]: E0904 17:37:39.824949 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:39.828727 containerd[1568]: time="2024-09-04T17:37:39.828594093Z" level=info msg="CreateContainer within sandbox \"166f68acd81f0aef154cb5227d37058852fca3442bc4ae56be34d2fc0b6fe5f3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 4 17:37:39.845244 containerd[1568]: time="2024-09-04T17:37:39.845149851Z" level=info msg="CreateContainer within sandbox \"166f68acd81f0aef154cb5227d37058852fca3442bc4ae56be34d2fc0b6fe5f3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b084658aa53567b878f9aa5aa2939faef4d0b1107a1bf736d8c4273bd244754f\"" Sep 4 17:37:39.846343 containerd[1568]: time="2024-09-04T17:37:39.845828961Z" level=info msg="StartContainer for \"b084658aa53567b878f9aa5aa2939faef4d0b1107a1bf736d8c4273bd244754f\"" Sep 4 17:37:39.909890 containerd[1568]: time="2024-09-04T17:37:39.909666236Z" level=info msg="StartContainer for \"b084658aa53567b878f9aa5aa2939faef4d0b1107a1bf736d8c4273bd244754f\" returns successfully" Sep 4 17:37:39.997469 sshd[3391]: pam_unix(sshd:session): session closed for user core Sep 4 17:37:40.001545 systemd[1]: sshd@7-10.0.0.18:22-10.0.0.1:35882.service: Deactivated successfully. Sep 4 17:37:40.003934 systemd-logind[1549]: Session 8 logged out. Waiting for processes to exit. 
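With clean-cilium-state done, the final cilium-agent container starts above, completing the chain mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state that the preceding entries walked through one "shim disconnected" at a time. Init containers run strictly in order and each must exit successfully before the next begins; a toy sketch of that contract, where the step names come from the journal but the runner is in no way kubelet code:

    package main

    import "fmt"

    type step struct {
        name string
        run  func() error
    }

    func main() {
        // Init-container order observed in the journal above.
        steps := []step{
            {"mount-cgroup", func() error { return nil }},
            {"apply-sysctl-overwrites", func() error { return nil }},
            {"mount-bpf-fs", func() error { return nil }},
            {"clean-cilium-state", func() error { return nil }},
        }

        for _, s := range steps {
            if err := s.run(); err != nil {
                // A failed init container blocks everything after it; the kubelet would retry.
                fmt.Println(s.name, "failed:", err)
                return
            }
            fmt.Println(s.name, "completed")
        }
        fmt.Println("starting cilium-agent")
    }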
Sep 4 17:37:40.004551 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 17:37:40.005545 systemd-logind[1549]: Removed session 8. Sep 4 17:37:40.044139 kubelet[2689]: I0904 17:37:40.043995 2689 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Sep 4 17:37:40.062001 kubelet[2689]: I0904 17:37:40.058798 2689 topology_manager.go:215] "Topology Admit Handler" podUID="9dc639db-f5ee-4977-907c-c9d72d871281" podNamespace="kube-system" podName="coredns-5dd5756b68-6ptjr" Sep 4 17:37:40.062001 kubelet[2689]: I0904 17:37:40.059081 2689 topology_manager.go:215] "Topology Admit Handler" podUID="ed0df374-0d10-4fb2-a1a7-8ebb2d82e736" podNamespace="kube-system" podName="coredns-5dd5756b68-pkbmw" Sep 4 17:37:40.158365 kubelet[2689]: I0904 17:37:40.158298 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9dc639db-f5ee-4977-907c-c9d72d871281-config-volume\") pod \"coredns-5dd5756b68-6ptjr\" (UID: \"9dc639db-f5ee-4977-907c-c9d72d871281\") " pod="kube-system/coredns-5dd5756b68-6ptjr" Sep 4 17:37:40.158365 kubelet[2689]: I0904 17:37:40.158360 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hxnd\" (UniqueName: \"kubernetes.io/projected/ed0df374-0d10-4fb2-a1a7-8ebb2d82e736-kube-api-access-8hxnd\") pod \"coredns-5dd5756b68-pkbmw\" (UID: \"ed0df374-0d10-4fb2-a1a7-8ebb2d82e736\") " pod="kube-system/coredns-5dd5756b68-pkbmw" Sep 4 17:37:40.158558 kubelet[2689]: I0904 17:37:40.158395 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed0df374-0d10-4fb2-a1a7-8ebb2d82e736-config-volume\") pod \"coredns-5dd5756b68-pkbmw\" (UID: \"ed0df374-0d10-4fb2-a1a7-8ebb2d82e736\") " pod="kube-system/coredns-5dd5756b68-pkbmw" Sep 4 17:37:40.158558 kubelet[2689]: I0904 17:37:40.158422 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2lwc\" (UniqueName: \"kubernetes.io/projected/9dc639db-f5ee-4977-907c-c9d72d871281-kube-api-access-j2lwc\") pod \"coredns-5dd5756b68-6ptjr\" (UID: \"9dc639db-f5ee-4977-907c-c9d72d871281\") " pod="kube-system/coredns-5dd5756b68-6ptjr" Sep 4 17:37:40.368161 kubelet[2689]: E0904 17:37:40.368019 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:40.368879 containerd[1568]: time="2024-09-04T17:37:40.368745571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-pkbmw,Uid:ed0df374-0d10-4fb2-a1a7-8ebb2d82e736,Namespace:kube-system,Attempt:0,}" Sep 4 17:37:40.370648 kubelet[2689]: E0904 17:37:40.370593 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:40.371224 containerd[1568]: time="2024-09-04T17:37:40.371179526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-6ptjr,Uid:9dc639db-f5ee-4977-907c-c9d72d871281,Namespace:kube-system,Attempt:0,}" Sep 4 17:37:40.831106 kubelet[2689]: E0904 17:37:40.830819 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:40.851860 
kubelet[2689]: I0904 17:37:40.851440 2689 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-jgkpt" podStartSLOduration=5.649599746 podCreationTimestamp="2024-09-04 17:37:27 +0000 UTC" firstStartedPulling="2024-09-04 17:37:27.552873504 +0000 UTC m=+13.953090177" lastFinishedPulling="2024-09-04 17:37:35.754584215 +0000 UTC m=+22.154800888" observedRunningTime="2024-09-04 17:37:40.851276573 +0000 UTC m=+27.251493247" watchObservedRunningTime="2024-09-04 17:37:40.851310457 +0000 UTC m=+27.251527130" Sep 4 17:37:41.832454 kubelet[2689]: E0904 17:37:41.832419 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:42.185372 systemd-networkd[1233]: cilium_host: Link UP Sep 4 17:37:42.185540 systemd-networkd[1233]: cilium_net: Link UP Sep 4 17:37:42.185769 systemd-networkd[1233]: cilium_net: Gained carrier Sep 4 17:37:42.185957 systemd-networkd[1233]: cilium_host: Gained carrier Sep 4 17:37:42.307034 systemd-networkd[1233]: cilium_vxlan: Link UP Sep 4 17:37:42.307044 systemd-networkd[1233]: cilium_vxlan: Gained carrier Sep 4 17:37:42.540956 systemd-networkd[1233]: cilium_net: Gained IPv6LL Sep 4 17:37:42.556967 kernel: NET: Registered PF_ALG protocol family Sep 4 17:37:42.834457 kubelet[2689]: E0904 17:37:42.833412 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:43.020939 systemd-networkd[1233]: cilium_host: Gained IPv6LL Sep 4 17:37:43.247769 systemd-networkd[1233]: lxc_health: Link UP Sep 4 17:37:43.255921 systemd-networkd[1233]: lxc_health: Gained carrier Sep 4 17:37:43.724488 systemd-networkd[1233]: cilium_vxlan: Gained IPv6LL Sep 4 17:37:43.746611 systemd-networkd[1233]: lxcaf43574fb15a: Link UP Sep 4 17:37:43.754782 kernel: eth0: renamed from tmp7650c Sep 4 17:37:43.762048 systemd-networkd[1233]: lxcaf43574fb15a: Gained carrier Sep 4 17:37:43.780347 systemd-networkd[1233]: lxc23abc0f8fc41: Link UP Sep 4 17:37:43.780795 kernel: eth0: renamed from tmp5393a Sep 4 17:37:43.787621 systemd-networkd[1233]: lxc23abc0f8fc41: Gained carrier Sep 4 17:37:43.835808 kubelet[2689]: E0904 17:37:43.835714 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:45.012155 systemd[1]: Started sshd@8-10.0.0.18:22-10.0.0.1:35886.service - OpenSSH per-connection server daemon (10.0.0.1:35886). Sep 4 17:37:45.050824 sshd[3929]: Accepted publickey for core from 10.0.0.1 port 35886 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:37:45.052612 sshd[3929]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:37:45.065354 systemd-logind[1549]: New session 9 of user core. Sep 4 17:37:45.067880 systemd-networkd[1233]: lxc_health: Gained IPv6LL Sep 4 17:37:45.072141 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 17:37:45.197442 systemd-networkd[1233]: lxcaf43574fb15a: Gained IPv6LL Sep 4 17:37:45.260714 sshd[3929]: pam_unix(sshd:session): session closed for user core Sep 4 17:37:45.265698 systemd[1]: sshd@8-10.0.0.18:22-10.0.0.1:35886.service: Deactivated successfully. Sep 4 17:37:45.268716 systemd-logind[1549]: Session 9 logged out. Waiting for processes to exit. 
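The "Gained IPv6LL" messages above mean each newly created cilium/lxc interface picked up an fe80::/64 link-local address. With the kernel's default address-generation mode that address is derived from the interface MAC via modified EUI-64; a sketch of that derivation, where the MAC below is made up and systemd-networkd can equally be configured to use stable-privacy addresses instead:

    package main

    import (
        "fmt"
        "net"
        "net/netip"
    )

    func linkLocalFromMAC(mac net.HardwareAddr) netip.Addr {
        var b [16]byte
        b[0], b[1] = 0xfe, 0x80 // fe80::/64 prefix, interface ID in the low 64 bits
        b[8] = mac[0] ^ 0x02    // flip the universal/local bit
        b[9], b[10] = mac[1], mac[2]
        b[11], b[12] = 0xff, 0xfe // ff:fe inserted in the middle of the MAC
        b[13], b[14], b[15] = mac[3], mac[4], mac[5]
        return netip.AddrFrom16(b)
    }

    func main() {
        mac, _ := net.ParseMAC("3e:9d:10:52:aa:01") // hypothetical veth MAC
        fmt.Println("link-local address:", linkLocalFromMAC(mac))
    }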
Sep 4 17:37:45.270095 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 17:37:45.271947 systemd-logind[1549]: Removed session 9. Sep 4 17:37:45.388094 systemd-networkd[1233]: lxc23abc0f8fc41: Gained IPv6LL Sep 4 17:37:47.535444 containerd[1568]: time="2024-09-04T17:37:47.535270371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:37:47.536797 containerd[1568]: time="2024-09-04T17:37:47.536628295Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:37:47.536797 containerd[1568]: time="2024-09-04T17:37:47.536689791Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:37:47.536797 containerd[1568]: time="2024-09-04T17:37:47.536727331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:37:47.537080 containerd[1568]: time="2024-09-04T17:37:47.536903092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:37:47.537158 containerd[1568]: time="2024-09-04T17:37:47.536793766Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:37:47.537158 containerd[1568]: time="2024-09-04T17:37:47.536834042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:37:47.537158 containerd[1568]: time="2024-09-04T17:37:47.536943989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:37:47.571668 systemd-resolved[1459]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:37:47.572197 systemd-resolved[1459]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:37:47.604976 containerd[1568]: time="2024-09-04T17:37:47.604904435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-6ptjr,Uid:9dc639db-f5ee-4977-907c-c9d72d871281,Namespace:kube-system,Attempt:0,} returns sandbox id \"5393a16b2549a0827f847731afc2f81aa6fa2aed4be358ce6f4491af2b62378c\"" Sep 4 17:37:47.605652 containerd[1568]: time="2024-09-04T17:37:47.605626484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-pkbmw,Uid:ed0df374-0d10-4fb2-a1a7-8ebb2d82e736,Namespace:kube-system,Attempt:0,} returns sandbox id \"7650c9e18c523493c33090cbeb3114d5c2adc018b23eb62e8c1398b2ef3451b3\"" Sep 4 17:37:47.606088 kubelet[2689]: E0904 17:37:47.606060 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:47.607837 kubelet[2689]: E0904 17:37:47.607820 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:47.608598 containerd[1568]: time="2024-09-04T17:37:47.608532621Z" level=info msg="CreateContainer within sandbox \"5393a16b2549a0827f847731afc2f81aa6fa2aed4be358ce6f4491af2b62378c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:37:47.609638 containerd[1568]: time="2024-09-04T17:37:47.609595731Z" level=info msg="CreateContainer within sandbox \"7650c9e18c523493c33090cbeb3114d5c2adc018b23eb62e8c1398b2ef3451b3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:37:47.634573 containerd[1568]: time="2024-09-04T17:37:47.634511686Z" level=info msg="CreateContainer within sandbox \"7650c9e18c523493c33090cbeb3114d5c2adc018b23eb62e8c1398b2ef3451b3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"966fcd3220f853a31b4fc11da19771f389f6cb12bc1280d948db65450570f28b\"" Sep 4 17:37:47.635286 containerd[1568]: time="2024-09-04T17:37:47.635247050Z" level=info msg="StartContainer for \"966fcd3220f853a31b4fc11da19771f389f6cb12bc1280d948db65450570f28b\"" Sep 4 17:37:47.638949 containerd[1568]: time="2024-09-04T17:37:47.638892117Z" level=info msg="CreateContainer within sandbox \"5393a16b2549a0827f847731afc2f81aa6fa2aed4be358ce6f4491af2b62378c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6b14d0e1eca6191f433bea0dddb7d301f564a70936f6f43deefd5c40886af416\"" Sep 4 17:37:47.640005 containerd[1568]: time="2024-09-04T17:37:47.639974724Z" level=info msg="StartContainer for \"6b14d0e1eca6191f433bea0dddb7d301f564a70936f6f43deefd5c40886af416\"" Sep 4 17:37:47.689538 containerd[1568]: time="2024-09-04T17:37:47.689475471Z" level=info msg="StartContainer for \"966fcd3220f853a31b4fc11da19771f389f6cb12bc1280d948db65450570f28b\" returns successfully" Sep 4 17:37:47.696037 containerd[1568]: time="2024-09-04T17:37:47.696001047Z" level=info msg="StartContainer for \"6b14d0e1eca6191f433bea0dddb7d301f564a70936f6f43deefd5c40886af416\" returns successfully" Sep 4 17:37:47.843897 kubelet[2689]: E0904 17:37:47.843731 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:47.847155 kubelet[2689]: E0904 17:37:47.846828 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:47.869819 kubelet[2689]: I0904 17:37:47.869441 2689 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-6ptjr" podStartSLOduration=21.869354694 podCreationTimestamp="2024-09-04 17:37:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:37:47.868891813 +0000 UTC m=+34.269108496" watchObservedRunningTime="2024-09-04 17:37:47.869354694 +0000 UTC m=+34.269571367" Sep 4 17:37:48.010532 kubelet[2689]: I0904 17:37:48.010474 2689 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-pkbmw" podStartSLOduration=22.010427435 podCreationTimestamp="2024-09-04 17:37:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:37:48.010290959 +0000 UTC m=+34.410507632" watchObservedRunningTime="2024-09-04 17:37:48.010427435 +0000 UTC m=+34.410644108" Sep 4 17:37:48.850272 kubelet[2689]: E0904 17:37:48.849957 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:48.850272 kubelet[2689]: E0904 17:37:48.850136 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:49.853735 kubelet[2689]: E0904 17:37:49.853680 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:49.853735 kubelet[2689]: E0904 17:37:49.853771 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:50.272112 systemd[1]: Started sshd@9-10.0.0.18:22-10.0.0.1:39224.service - OpenSSH per-connection server daemon (10.0.0.1:39224). Sep 4 17:37:50.310936 sshd[4121]: Accepted publickey for core from 10.0.0.1 port 39224 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:37:50.312914 sshd[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:37:50.317600 systemd-logind[1549]: New session 10 of user core. Sep 4 17:37:50.326185 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 4 17:37:50.568491 sshd[4121]: pam_unix(sshd:session): session closed for user core Sep 4 17:37:50.573308 systemd[1]: sshd@9-10.0.0.18:22-10.0.0.1:39224.service: Deactivated successfully. Sep 4 17:37:50.576525 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 17:37:50.576867 systemd-logind[1549]: Session 10 logged out. Waiting for processes to exit. Sep 4 17:37:50.578565 systemd-logind[1549]: Removed session 10. 
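The pod_startup_latency_tracker entries above (and the earlier one for cilium-jgkpt) follow the same pattern: podStartSLOduration is roughly the time from pod creation to observed running, minus the image-pull window, which is zero for the coredns pods whose pull timestamps are the Go zero value. Re-deriving the cilium-jgkpt figure from its logged timestamps lands close to the reported 5.649599746s; it cannot match exactly because the creation timestamp is logged with only one-second precision:

    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(layout, s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

        // Values copied from the cilium-jgkpt pod_startup_latency_tracker entry.
        created := mustParse(layout, "2024-09-04 17:37:27 +0000 UTC")
        pullStart := mustParse(layout, "2024-09-04 17:37:27.552873504 +0000 UTC")
        pullEnd := mustParse(layout, "2024-09-04 17:37:35.754584215 +0000 UTC")
        running := mustParse(layout, "2024-09-04 17:37:40.851276573 +0000 UTC")

        slo := running.Sub(created) - pullEnd.Sub(pullStart)
        fmt.Println("approximate podStartSLOduration:", slo) // ~5.649s vs the logged 5.649599746
    }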
Sep 4 17:37:53.701165 kubelet[2689]: I0904 17:37:53.701105 2689 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 17:37:53.702040 kubelet[2689]: E0904 17:37:53.702008 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:53.861869 kubelet[2689]: E0904 17:37:53.861829 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:37:55.578982 systemd[1]: Started sshd@10-10.0.0.18:22-10.0.0.1:39236.service - OpenSSH per-connection server daemon (10.0.0.1:39236). Sep 4 17:37:55.609227 sshd[4137]: Accepted publickey for core from 10.0.0.1 port 39236 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:37:55.610731 sshd[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:37:55.614820 systemd-logind[1549]: New session 11 of user core. Sep 4 17:37:55.624007 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 17:37:55.751533 sshd[4137]: pam_unix(sshd:session): session closed for user core Sep 4 17:37:55.758056 systemd[1]: Started sshd@11-10.0.0.18:22-10.0.0.1:39252.service - OpenSSH per-connection server daemon (10.0.0.1:39252). Sep 4 17:37:55.758663 systemd[1]: sshd@10-10.0.0.18:22-10.0.0.1:39236.service: Deactivated successfully. Sep 4 17:37:55.763361 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 17:37:55.764556 systemd-logind[1549]: Session 11 logged out. Waiting for processes to exit. Sep 4 17:37:55.767019 systemd-logind[1549]: Removed session 11. Sep 4 17:37:55.792604 sshd[4150]: Accepted publickey for core from 10.0.0.1 port 39252 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:37:55.795145 sshd[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:37:55.802519 systemd-logind[1549]: New session 12 of user core. Sep 4 17:37:55.810850 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 17:37:56.781876 sshd[4150]: pam_unix(sshd:session): session closed for user core Sep 4 17:37:56.790039 systemd[1]: Started sshd@12-10.0.0.18:22-10.0.0.1:51360.service - OpenSSH per-connection server daemon (10.0.0.1:51360). Sep 4 17:37:56.790700 systemd[1]: sshd@11-10.0.0.18:22-10.0.0.1:39252.service: Deactivated successfully. Sep 4 17:37:56.795088 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 17:37:56.796275 systemd-logind[1549]: Session 12 logged out. Waiting for processes to exit. Sep 4 17:37:56.797324 systemd-logind[1549]: Removed session 12. Sep 4 17:37:56.827736 sshd[4163]: Accepted publickey for core from 10.0.0.1 port 51360 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:37:56.830042 sshd[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:37:56.835099 systemd-logind[1549]: New session 13 of user core. Sep 4 17:37:56.846190 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 17:37:56.991608 sshd[4163]: pam_unix(sshd:session): session closed for user core Sep 4 17:37:56.995866 systemd[1]: sshd@12-10.0.0.18:22-10.0.0.1:51360.service: Deactivated successfully. Sep 4 17:37:56.998386 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 17:37:56.999172 systemd-logind[1549]: Session 13 logged out. Waiting for processes to exit. 
Sep 4 17:37:57.000258 systemd-logind[1549]: Removed session 13. Sep 4 17:38:02.002020 systemd[1]: Started sshd@13-10.0.0.18:22-10.0.0.1:51364.service - OpenSSH per-connection server daemon (10.0.0.1:51364). Sep 4 17:38:02.032728 sshd[4186]: Accepted publickey for core from 10.0.0.1 port 51364 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:38:02.034348 sshd[4186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:38:02.038747 systemd-logind[1549]: New session 14 of user core. Sep 4 17:38:02.056195 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 17:38:02.172533 sshd[4186]: pam_unix(sshd:session): session closed for user core Sep 4 17:38:02.177249 systemd[1]: sshd@13-10.0.0.18:22-10.0.0.1:51364.service: Deactivated successfully. Sep 4 17:38:02.180439 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 17:38:02.181484 systemd-logind[1549]: Session 14 logged out. Waiting for processes to exit. Sep 4 17:38:02.182296 systemd-logind[1549]: Removed session 14. Sep 4 17:38:07.182967 systemd[1]: Started sshd@14-10.0.0.18:22-10.0.0.1:49448.service - OpenSSH per-connection server daemon (10.0.0.1:49448). Sep 4 17:38:07.213825 sshd[4201]: Accepted publickey for core from 10.0.0.1 port 49448 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:38:07.215490 sshd[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:38:07.219597 systemd-logind[1549]: New session 15 of user core. Sep 4 17:38:07.230029 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 17:38:07.355279 sshd[4201]: pam_unix(sshd:session): session closed for user core Sep 4 17:38:07.360127 systemd[1]: sshd@14-10.0.0.18:22-10.0.0.1:49448.service: Deactivated successfully. Sep 4 17:38:07.362714 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 17:38:07.363823 systemd-logind[1549]: Session 15 logged out. Waiting for processes to exit. Sep 4 17:38:07.364983 systemd-logind[1549]: Removed session 15. Sep 4 17:38:12.366969 systemd[1]: Started sshd@15-10.0.0.18:22-10.0.0.1:49464.service - OpenSSH per-connection server daemon (10.0.0.1:49464). Sep 4 17:38:12.397663 sshd[4216]: Accepted publickey for core from 10.0.0.1 port 49464 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:38:12.399402 sshd[4216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:38:12.403306 systemd-logind[1549]: New session 16 of user core. Sep 4 17:38:12.416019 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 4 17:38:12.524803 sshd[4216]: pam_unix(sshd:session): session closed for user core Sep 4 17:38:12.532234 systemd[1]: Started sshd@16-10.0.0.18:22-10.0.0.1:49468.service - OpenSSH per-connection server daemon (10.0.0.1:49468). Sep 4 17:38:12.532983 systemd[1]: sshd@15-10.0.0.18:22-10.0.0.1:49464.service: Deactivated successfully. Sep 4 17:38:12.535769 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 17:38:12.537657 systemd-logind[1549]: Session 16 logged out. Waiting for processes to exit. Sep 4 17:38:12.538790 systemd-logind[1549]: Removed session 16. Sep 4 17:38:12.565246 sshd[4229]: Accepted publickey for core from 10.0.0.1 port 49468 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:38:12.567056 sshd[4229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:38:12.571664 systemd-logind[1549]: New session 17 of user core. 
Sep 4 17:38:12.579089 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 17:38:13.357191 sshd[4229]: pam_unix(sshd:session): session closed for user core Sep 4 17:38:13.370203 systemd[1]: Started sshd@17-10.0.0.18:22-10.0.0.1:49472.service - OpenSSH per-connection server daemon (10.0.0.1:49472). Sep 4 17:38:13.370901 systemd[1]: sshd@16-10.0.0.18:22-10.0.0.1:49468.service: Deactivated successfully. Sep 4 17:38:13.375485 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 17:38:13.376986 systemd-logind[1549]: Session 17 logged out. Waiting for processes to exit. Sep 4 17:38:13.378065 systemd-logind[1549]: Removed session 17. Sep 4 17:38:13.408306 sshd[4242]: Accepted publickey for core from 10.0.0.1 port 49472 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:38:13.410139 sshd[4242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:38:13.414970 systemd-logind[1549]: New session 18 of user core. Sep 4 17:38:13.425214 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 4 17:38:14.405288 sshd[4242]: pam_unix(sshd:session): session closed for user core Sep 4 17:38:14.413906 systemd[1]: Started sshd@18-10.0.0.18:22-10.0.0.1:49476.service - OpenSSH per-connection server daemon (10.0.0.1:49476). Sep 4 17:38:14.418058 systemd[1]: sshd@17-10.0.0.18:22-10.0.0.1:49472.service: Deactivated successfully. Sep 4 17:38:14.427448 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 17:38:14.428924 systemd-logind[1549]: Session 18 logged out. Waiting for processes to exit. Sep 4 17:38:14.430223 systemd-logind[1549]: Removed session 18. Sep 4 17:38:14.461380 sshd[4269]: Accepted publickey for core from 10.0.0.1 port 49476 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:38:14.463788 sshd[4269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:38:14.469363 systemd-logind[1549]: New session 19 of user core. Sep 4 17:38:14.478021 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 4 17:38:14.768904 sshd[4269]: pam_unix(sshd:session): session closed for user core Sep 4 17:38:14.777177 systemd[1]: Started sshd@19-10.0.0.18:22-10.0.0.1:49480.service - OpenSSH per-connection server daemon (10.0.0.1:49480). Sep 4 17:38:14.777785 systemd[1]: sshd@18-10.0.0.18:22-10.0.0.1:49476.service: Deactivated successfully. Sep 4 17:38:14.780790 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 17:38:14.782709 systemd-logind[1549]: Session 19 logged out. Waiting for processes to exit. Sep 4 17:38:14.783896 systemd-logind[1549]: Removed session 19. Sep 4 17:38:14.812225 sshd[4282]: Accepted publickey for core from 10.0.0.1 port 49480 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:38:14.814132 sshd[4282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:38:14.818403 systemd-logind[1549]: New session 20 of user core. Sep 4 17:38:14.825026 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 4 17:38:15.004890 sshd[4282]: pam_unix(sshd:session): session closed for user core Sep 4 17:38:15.009141 systemd[1]: sshd@19-10.0.0.18:22-10.0.0.1:49480.service: Deactivated successfully. Sep 4 17:38:15.011843 systemd[1]: session-20.scope: Deactivated successfully. Sep 4 17:38:15.012690 systemd-logind[1549]: Session 20 logged out. Waiting for processes to exit. Sep 4 17:38:15.013648 systemd-logind[1549]: Removed session 20. 
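Most of the remaining entries are routine sshd session churn from 10.0.0.1 (sessions 8 through 25 open and close within seconds of each other). When skimming a journal like this it can help to pair the open/close lines mechanically; a small sketch over two literal systemd-logind lines taken from the log above:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Two literal systemd-logind lines from the journal above.
        lines := []string{
            "Sep 4 17:38:12.571664 systemd-logind[1549]: New session 17 of user core.",
            "Sep 4 17:38:13.378065 systemd-logind[1549]: Removed session 17.",
        }

        open := regexp.MustCompile(`New session (\d+) of user (\S+)\.`)
        closed := regexp.MustCompile(`Removed session (\d+)\.`)

        for _, l := range lines {
            if m := open.FindStringSubmatch(l); m != nil {
                fmt.Printf("session %s opened for %s\n", m[1], m[2])
            } else if m := closed.FindStringSubmatch(l); m != nil {
                fmt.Printf("session %s closed\n", m[1])
            }
        }
    }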
Sep 4 17:38:20.014002 systemd[1]: Started sshd@20-10.0.0.18:22-10.0.0.1:38572.service - OpenSSH per-connection server daemon (10.0.0.1:38572). Sep 4 17:38:20.044441 sshd[4300]: Accepted publickey for core from 10.0.0.1 port 38572 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:38:20.045935 sshd[4300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:38:20.049575 systemd-logind[1549]: New session 21 of user core. Sep 4 17:38:20.060044 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 4 17:38:20.210347 sshd[4300]: pam_unix(sshd:session): session closed for user core Sep 4 17:38:20.214323 systemd[1]: sshd@20-10.0.0.18:22-10.0.0.1:38572.service: Deactivated successfully. Sep 4 17:38:20.217191 systemd-logind[1549]: Session 21 logged out. Waiting for processes to exit. Sep 4 17:38:20.217279 systemd[1]: session-21.scope: Deactivated successfully. Sep 4 17:38:20.218556 systemd-logind[1549]: Removed session 21. Sep 4 17:38:25.227125 systemd[1]: Started sshd@21-10.0.0.18:22-10.0.0.1:38580.service - OpenSSH per-connection server daemon (10.0.0.1:38580). Sep 4 17:38:25.258406 sshd[4318]: Accepted publickey for core from 10.0.0.1 port 38580 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:38:25.260054 sshd[4318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:38:25.263990 systemd-logind[1549]: New session 22 of user core. Sep 4 17:38:25.274111 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 4 17:38:25.445692 sshd[4318]: pam_unix(sshd:session): session closed for user core Sep 4 17:38:25.451053 systemd[1]: sshd@21-10.0.0.18:22-10.0.0.1:38580.service: Deactivated successfully. Sep 4 17:38:25.453790 systemd-logind[1549]: Session 22 logged out. Waiting for processes to exit. Sep 4 17:38:25.453857 systemd[1]: session-22.scope: Deactivated successfully. Sep 4 17:38:25.455123 systemd-logind[1549]: Removed session 22. Sep 4 17:38:30.458978 systemd[1]: Started sshd@22-10.0.0.18:22-10.0.0.1:35012.service - OpenSSH per-connection server daemon (10.0.0.1:35012). Sep 4 17:38:30.491351 sshd[4335]: Accepted publickey for core from 10.0.0.1 port 35012 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:38:30.493237 sshd[4335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:38:30.497471 systemd-logind[1549]: New session 23 of user core. Sep 4 17:38:30.507011 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 4 17:38:30.615786 sshd[4335]: pam_unix(sshd:session): session closed for user core Sep 4 17:38:30.619791 systemd[1]: sshd@22-10.0.0.18:22-10.0.0.1:35012.service: Deactivated successfully. Sep 4 17:38:30.622414 systemd[1]: session-23.scope: Deactivated successfully. Sep 4 17:38:30.623075 systemd-logind[1549]: Session 23 logged out. Waiting for processes to exit. Sep 4 17:38:30.623984 systemd-logind[1549]: Removed session 23. Sep 4 17:38:35.628079 systemd[1]: Started sshd@23-10.0.0.18:22-10.0.0.1:35014.service - OpenSSH per-connection server daemon (10.0.0.1:35014). Sep 4 17:38:35.659641 sshd[4350]: Accepted publickey for core from 10.0.0.1 port 35014 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:38:35.661637 sshd[4350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:38:35.667243 systemd-logind[1549]: New session 24 of user core. Sep 4 17:38:35.678041 systemd[1]: Started session-24.scope - Session 24 of User core. 
Sep 4 17:38:35.783221 sshd[4350]: pam_unix(sshd:session): session closed for user core Sep 4 17:38:35.791004 systemd[1]: Started sshd@24-10.0.0.18:22-10.0.0.1:35024.service - OpenSSH per-connection server daemon (10.0.0.1:35024). Sep 4 17:38:35.791550 systemd[1]: sshd@23-10.0.0.18:22-10.0.0.1:35014.service: Deactivated successfully. Sep 4 17:38:35.796037 systemd[1]: session-24.scope: Deactivated successfully. Sep 4 17:38:35.797295 systemd-logind[1549]: Session 24 logged out. Waiting for processes to exit. Sep 4 17:38:35.798476 systemd-logind[1549]: Removed session 24. Sep 4 17:38:35.823956 sshd[4362]: Accepted publickey for core from 10.0.0.1 port 35024 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:38:35.825708 sshd[4362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:38:35.830907 systemd-logind[1549]: New session 25 of user core. Sep 4 17:38:35.841036 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 4 17:38:37.348298 containerd[1568]: time="2024-09-04T17:38:37.348217103Z" level=info msg="StopContainer for \"c8999b48e5907b1763320e9471d8cd8b8165e879ee3456af0099f5db4b169c44\" with timeout 30 (s)" Sep 4 17:38:37.348988 containerd[1568]: time="2024-09-04T17:38:37.348686940Z" level=info msg="Stop container \"c8999b48e5907b1763320e9471d8cd8b8165e879ee3456af0099f5db4b169c44\" with signal terminated" Sep 4 17:38:37.395094 containerd[1568]: time="2024-09-04T17:38:37.394976680Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 17:38:37.396468 containerd[1568]: time="2024-09-04T17:38:37.396402012Z" level=info msg="StopContainer for \"b084658aa53567b878f9aa5aa2939faef4d0b1107a1bf736d8c4273bd244754f\" with timeout 2 (s)" Sep 4 17:38:37.396723 containerd[1568]: time="2024-09-04T17:38:37.396681927Z" level=info msg="Stop container \"b084658aa53567b878f9aa5aa2939faef4d0b1107a1bf736d8c4273bd244754f\" with signal terminated" Sep 4 17:38:37.402121 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c8999b48e5907b1763320e9471d8cd8b8165e879ee3456af0099f5db4b169c44-rootfs.mount: Deactivated successfully. Sep 4 17:38:37.405360 systemd-networkd[1233]: lxc_health: Link DOWN Sep 4 17:38:37.405371 systemd-networkd[1233]: lxc_health: Lost carrier Sep 4 17:38:37.413405 containerd[1568]: time="2024-09-04T17:38:37.411940272Z" level=info msg="shim disconnected" id=c8999b48e5907b1763320e9471d8cd8b8165e879ee3456af0099f5db4b169c44 namespace=k8s.io Sep 4 17:38:37.413405 containerd[1568]: time="2024-09-04T17:38:37.412014364Z" level=warning msg="cleaning up after shim disconnected" id=c8999b48e5907b1763320e9471d8cd8b8165e879ee3456af0099f5db4b169c44 namespace=k8s.io Sep 4 17:38:37.413405 containerd[1568]: time="2024-09-04T17:38:37.412025725Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:38:37.483616 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b084658aa53567b878f9aa5aa2939faef4d0b1107a1bf736d8c4273bd244754f-rootfs.mount: Deactivated successfully. 
Sep 4 17:38:37.487522 containerd[1568]: time="2024-09-04T17:38:37.487473010Z" level=info msg="StopContainer for \"c8999b48e5907b1763320e9471d8cd8b8165e879ee3456af0099f5db4b169c44\" returns successfully" Sep 4 17:38:37.488318 containerd[1568]: time="2024-09-04T17:38:37.488283818Z" level=info msg="StopPodSandbox for \"f5fdebac422243c4d0bd978d3b3a575b55eb3ede5a0a3fc0a7889f7451c5e531\"" Sep 4 17:38:37.488384 containerd[1568]: time="2024-09-04T17:38:37.488327481Z" level=info msg="Container to stop \"c8999b48e5907b1763320e9471d8cd8b8165e879ee3456af0099f5db4b169c44\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:38:37.491598 containerd[1568]: time="2024-09-04T17:38:37.491053578Z" level=info msg="shim disconnected" id=b084658aa53567b878f9aa5aa2939faef4d0b1107a1bf736d8c4273bd244754f namespace=k8s.io Sep 4 17:38:37.491598 containerd[1568]: time="2024-09-04T17:38:37.491099066Z" level=warning msg="cleaning up after shim disconnected" id=b084658aa53567b878f9aa5aa2939faef4d0b1107a1bf736d8c4273bd244754f namespace=k8s.io Sep 4 17:38:37.491598 containerd[1568]: time="2024-09-04T17:38:37.491109175Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:38:37.491266 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f5fdebac422243c4d0bd978d3b3a575b55eb3ede5a0a3fc0a7889f7451c5e531-shm.mount: Deactivated successfully. Sep 4 17:38:37.510742 containerd[1568]: time="2024-09-04T17:38:37.510685178Z" level=info msg="StopContainer for \"b084658aa53567b878f9aa5aa2939faef4d0b1107a1bf736d8c4273bd244754f\" returns successfully" Sep 4 17:38:37.511979 containerd[1568]: time="2024-09-04T17:38:37.511947178Z" level=info msg="StopPodSandbox for \"166f68acd81f0aef154cb5227d37058852fca3442bc4ae56be34d2fc0b6fe5f3\"" Sep 4 17:38:37.512070 containerd[1568]: time="2024-09-04T17:38:37.511988508Z" level=info msg="Container to stop \"20988220539f5f5b6b69b6c63794568a73bac724c85ceb712e0680dc08466003\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:38:37.512070 containerd[1568]: time="2024-09-04T17:38:37.512006221Z" level=info msg="Container to stop \"29ebb8c5db5567de3e50034ab6330ac579a662f112ba9e3d4cf2afa21be8621e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:38:37.512070 containerd[1568]: time="2024-09-04T17:38:37.512018775Z" level=info msg="Container to stop \"b084658aa53567b878f9aa5aa2939faef4d0b1107a1bf736d8c4273bd244754f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:38:37.512070 containerd[1568]: time="2024-09-04T17:38:37.512031149Z" level=info msg="Container to stop \"60cd6ae78cde539c3b10aa0c5117d21de22e4131831d83d1a99adc9563d53cd5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:38:37.512070 containerd[1568]: time="2024-09-04T17:38:37.512043633Z" level=info msg="Container to stop \"0cd5d18e5548ed1c745d0efbf8493ad37a3bba857c948e9bdc680f1eecb2b143\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:38:37.515907 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-166f68acd81f0aef154cb5227d37058852fca3442bc4ae56be34d2fc0b6fe5f3-shm.mount: Deactivated successfully. 
Sep 4 17:38:37.525989 containerd[1568]: time="2024-09-04T17:38:37.525919958Z" level=info msg="shim disconnected" id=f5fdebac422243c4d0bd978d3b3a575b55eb3ede5a0a3fc0a7889f7451c5e531 namespace=k8s.io Sep 4 17:38:37.525989 containerd[1568]: time="2024-09-04T17:38:37.525980914Z" level=warning msg="cleaning up after shim disconnected" id=f5fdebac422243c4d0bd978d3b3a575b55eb3ede5a0a3fc0a7889f7451c5e531 namespace=k8s.io Sep 4 17:38:37.525989 containerd[1568]: time="2024-09-04T17:38:37.525992357Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:38:37.544652 containerd[1568]: time="2024-09-04T17:38:37.544594609Z" level=info msg="TearDown network for sandbox \"f5fdebac422243c4d0bd978d3b3a575b55eb3ede5a0a3fc0a7889f7451c5e531\" successfully" Sep 4 17:38:37.544652 containerd[1568]: time="2024-09-04T17:38:37.544633293Z" level=info msg="StopPodSandbox for \"f5fdebac422243c4d0bd978d3b3a575b55eb3ede5a0a3fc0a7889f7451c5e531\" returns successfully" Sep 4 17:38:37.550863 containerd[1568]: time="2024-09-04T17:38:37.550667089Z" level=info msg="shim disconnected" id=166f68acd81f0aef154cb5227d37058852fca3442bc4ae56be34d2fc0b6fe5f3 namespace=k8s.io Sep 4 17:38:37.550863 containerd[1568]: time="2024-09-04T17:38:37.550726762Z" level=warning msg="cleaning up after shim disconnected" id=166f68acd81f0aef154cb5227d37058852fca3442bc4ae56be34d2fc0b6fe5f3 namespace=k8s.io Sep 4 17:38:37.550863 containerd[1568]: time="2024-09-04T17:38:37.550739026Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:38:37.572320 containerd[1568]: time="2024-09-04T17:38:37.572260023Z" level=info msg="TearDown network for sandbox \"166f68acd81f0aef154cb5227d37058852fca3442bc4ae56be34d2fc0b6fe5f3\" successfully" Sep 4 17:38:37.572320 containerd[1568]: time="2024-09-04T17:38:37.572304148Z" level=info msg="StopPodSandbox for \"166f68acd81f0aef154cb5227d37058852fca3442bc4ae56be34d2fc0b6fe5f3\" returns successfully" Sep 4 17:38:37.599787 kubelet[2689]: I0904 17:38:37.599613 2689 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-cilium-run\") pod \"7f6d81b0-6e69-4402-ae88-e5a020af4b7c\" (UID: \"7f6d81b0-6e69-4402-ae88-e5a020af4b7c\") " Sep 4 17:38:37.599787 kubelet[2689]: I0904 17:38:37.599660 2689 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-cilium-config-path\") pod \"7f6d81b0-6e69-4402-ae88-e5a020af4b7c\" (UID: \"7f6d81b0-6e69-4402-ae88-e5a020af4b7c\") " Sep 4 17:38:37.599787 kubelet[2689]: I0904 17:38:37.599686 2689 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-hubble-tls\") pod \"7f6d81b0-6e69-4402-ae88-e5a020af4b7c\" (UID: \"7f6d81b0-6e69-4402-ae88-e5a020af4b7c\") " Sep 4 17:38:37.599787 kubelet[2689]: I0904 17:38:37.599712 2689 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-lib-modules\") pod \"7f6d81b0-6e69-4402-ae88-e5a020af4b7c\" (UID: \"7f6d81b0-6e69-4402-ae88-e5a020af4b7c\") " Sep 4 17:38:37.599787 kubelet[2689]: I0904 17:38:37.599740 2689 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-cilium-run" (OuterVolumeSpecName: "cilium-run") 
pod "7f6d81b0-6e69-4402-ae88-e5a020af4b7c" (UID: "7f6d81b0-6e69-4402-ae88-e5a020af4b7c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:38:37.599787 kubelet[2689]: I0904 17:38:37.599773 2689 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-host-proc-sys-net\") pod \"7f6d81b0-6e69-4402-ae88-e5a020af4b7c\" (UID: \"7f6d81b0-6e69-4402-ae88-e5a020af4b7c\") " Sep 4 17:38:37.600535 kubelet[2689]: I0904 17:38:37.599795 2689 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7f6d81b0-6e69-4402-ae88-e5a020af4b7c" (UID: "7f6d81b0-6e69-4402-ae88-e5a020af4b7c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:38:37.600535 kubelet[2689]: I0904 17:38:37.599822 2689 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-bpf-maps\") pod \"7f6d81b0-6e69-4402-ae88-e5a020af4b7c\" (UID: \"7f6d81b0-6e69-4402-ae88-e5a020af4b7c\") " Sep 4 17:38:37.600535 kubelet[2689]: I0904 17:38:37.599821 2689 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7f6d81b0-6e69-4402-ae88-e5a020af4b7c" (UID: "7f6d81b0-6e69-4402-ae88-e5a020af4b7c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:38:37.600535 kubelet[2689]: I0904 17:38:37.599851 2689 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fzlh\" (UniqueName: \"kubernetes.io/projected/26c0f59f-4184-4fb6-8a4c-cb4f9f354979-kube-api-access-9fzlh\") pod \"26c0f59f-4184-4fb6-8a4c-cb4f9f354979\" (UID: \"26c0f59f-4184-4fb6-8a4c-cb4f9f354979\") " Sep 4 17:38:37.600535 kubelet[2689]: I0904 17:38:37.599874 2689 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-host-proc-sys-kernel\") pod \"7f6d81b0-6e69-4402-ae88-e5a020af4b7c\" (UID: \"7f6d81b0-6e69-4402-ae88-e5a020af4b7c\") " Sep 4 17:38:37.600710 kubelet[2689]: I0904 17:38:37.599894 2689 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-cni-path\") pod \"7f6d81b0-6e69-4402-ae88-e5a020af4b7c\" (UID: \"7f6d81b0-6e69-4402-ae88-e5a020af4b7c\") " Sep 4 17:38:37.600710 kubelet[2689]: I0904 17:38:37.599956 2689 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-clustermesh-secrets\") pod \"7f6d81b0-6e69-4402-ae88-e5a020af4b7c\" (UID: \"7f6d81b0-6e69-4402-ae88-e5a020af4b7c\") " Sep 4 17:38:37.600710 kubelet[2689]: I0904 17:38:37.599984 2689 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-cilium-cgroup\") pod \"7f6d81b0-6e69-4402-ae88-e5a020af4b7c\" (UID: \"7f6d81b0-6e69-4402-ae88-e5a020af4b7c\") " Sep 4 17:38:37.600710 
kubelet[2689]: I0904 17:38:37.600006 2689 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-xtables-lock\") pod \"7f6d81b0-6e69-4402-ae88-e5a020af4b7c\" (UID: \"7f6d81b0-6e69-4402-ae88-e5a020af4b7c\") " Sep 4 17:38:37.600710 kubelet[2689]: I0904 17:38:37.600029 2689 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-etc-cni-netd\") pod \"7f6d81b0-6e69-4402-ae88-e5a020af4b7c\" (UID: \"7f6d81b0-6e69-4402-ae88-e5a020af4b7c\") " Sep 4 17:38:37.600710 kubelet[2689]: I0904 17:38:37.600054 2689 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/26c0f59f-4184-4fb6-8a4c-cb4f9f354979-cilium-config-path\") pod \"26c0f59f-4184-4fb6-8a4c-cb4f9f354979\" (UID: \"26c0f59f-4184-4fb6-8a4c-cb4f9f354979\") " Sep 4 17:38:37.600886 kubelet[2689]: I0904 17:38:37.600075 2689 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-hostproc\") pod \"7f6d81b0-6e69-4402-ae88-e5a020af4b7c\" (UID: \"7f6d81b0-6e69-4402-ae88-e5a020af4b7c\") " Sep 4 17:38:37.600886 kubelet[2689]: I0904 17:38:37.600102 2689 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z96wd\" (UniqueName: \"kubernetes.io/projected/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-kube-api-access-z96wd\") pod \"7f6d81b0-6e69-4402-ae88-e5a020af4b7c\" (UID: \"7f6d81b0-6e69-4402-ae88-e5a020af4b7c\") " Sep 4 17:38:37.600886 kubelet[2689]: I0904 17:38:37.600135 2689 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 4 17:38:37.600886 kubelet[2689]: I0904 17:38:37.600150 2689 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 4 17:38:37.600886 kubelet[2689]: I0904 17:38:37.600163 2689 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 4 17:38:37.600886 kubelet[2689]: I0904 17:38:37.600457 2689 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7f6d81b0-6e69-4402-ae88-e5a020af4b7c" (UID: "7f6d81b0-6e69-4402-ae88-e5a020af4b7c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:38:37.601034 kubelet[2689]: I0904 17:38:37.600493 2689 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7f6d81b0-6e69-4402-ae88-e5a020af4b7c" (UID: "7f6d81b0-6e69-4402-ae88-e5a020af4b7c"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:38:37.601034 kubelet[2689]: I0904 17:38:37.600546 2689 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7f6d81b0-6e69-4402-ae88-e5a020af4b7c" (UID: "7f6d81b0-6e69-4402-ae88-e5a020af4b7c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:38:37.601034 kubelet[2689]: I0904 17:38:37.600570 2689 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-cni-path" (OuterVolumeSpecName: "cni-path") pod "7f6d81b0-6e69-4402-ae88-e5a020af4b7c" (UID: "7f6d81b0-6e69-4402-ae88-e5a020af4b7c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:38:37.602324 kubelet[2689]: I0904 17:38:37.602294 2689 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7f6d81b0-6e69-4402-ae88-e5a020af4b7c" (UID: "7f6d81b0-6e69-4402-ae88-e5a020af4b7c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:38:37.602396 kubelet[2689]: I0904 17:38:37.602334 2689 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7f6d81b0-6e69-4402-ae88-e5a020af4b7c" (UID: "7f6d81b0-6e69-4402-ae88-e5a020af4b7c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:38:37.604363 kubelet[2689]: I0904 17:38:37.603860 2689 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-hostproc" (OuterVolumeSpecName: "hostproc") pod "7f6d81b0-6e69-4402-ae88-e5a020af4b7c" (UID: "7f6d81b0-6e69-4402-ae88-e5a020af4b7c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:38:37.604476 kubelet[2689]: I0904 17:38:37.604446 2689 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26c0f59f-4184-4fb6-8a4c-cb4f9f354979-kube-api-access-9fzlh" (OuterVolumeSpecName: "kube-api-access-9fzlh") pod "26c0f59f-4184-4fb6-8a4c-cb4f9f354979" (UID: "26c0f59f-4184-4fb6-8a4c-cb4f9f354979"). InnerVolumeSpecName "kube-api-access-9fzlh". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 4 17:38:37.605039 kubelet[2689]: I0904 17:38:37.605004 2689 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-kube-api-access-z96wd" (OuterVolumeSpecName: "kube-api-access-z96wd") pod "7f6d81b0-6e69-4402-ae88-e5a020af4b7c" (UID: "7f6d81b0-6e69-4402-ae88-e5a020af4b7c"). InnerVolumeSpecName "kube-api-access-z96wd". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 4 17:38:37.606823 kubelet[2689]: I0904 17:38:37.606795 2689 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26c0f59f-4184-4fb6-8a4c-cb4f9f354979-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "26c0f59f-4184-4fb6-8a4c-cb4f9f354979" (UID: "26c0f59f-4184-4fb6-8a4c-cb4f9f354979"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 4 17:38:37.606889 kubelet[2689]: I0904 17:38:37.606857 2689 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7f6d81b0-6e69-4402-ae88-e5a020af4b7c" (UID: "7f6d81b0-6e69-4402-ae88-e5a020af4b7c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 4 17:38:37.607708 kubelet[2689]: I0904 17:38:37.607676 2689 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7f6d81b0-6e69-4402-ae88-e5a020af4b7c" (UID: "7f6d81b0-6e69-4402-ae88-e5a020af4b7c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 4 17:38:37.609087 kubelet[2689]: I0904 17:38:37.609056 2689 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7f6d81b0-6e69-4402-ae88-e5a020af4b7c" (UID: "7f6d81b0-6e69-4402-ae88-e5a020af4b7c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 4 17:38:37.701144 kubelet[2689]: I0904 17:38:37.701090 2689 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-z96wd\" (UniqueName: \"kubernetes.io/projected/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-kube-api-access-z96wd\") on node \"localhost\" DevicePath \"\"" Sep 4 17:38:37.701144 kubelet[2689]: I0904 17:38:37.701123 2689 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 4 17:38:37.701144 kubelet[2689]: I0904 17:38:37.701138 2689 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 4 17:38:37.701144 kubelet[2689]: I0904 17:38:37.701151 2689 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 4 17:38:37.701144 kubelet[2689]: I0904 17:38:37.701163 2689 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 4 17:38:37.701404 kubelet[2689]: I0904 17:38:37.701175 2689 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-9fzlh\" (UniqueName: \"kubernetes.io/projected/26c0f59f-4184-4fb6-8a4c-cb4f9f354979-kube-api-access-9fzlh\") on node \"localhost\" DevicePath \"\"" Sep 4 17:38:37.701404 kubelet[2689]: I0904 17:38:37.701186 2689 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 4 17:38:37.701404 kubelet[2689]: I0904 17:38:37.701199 2689 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 4 
17:38:37.701404 kubelet[2689]: I0904 17:38:37.701211 2689 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 4 17:38:37.701404 kubelet[2689]: I0904 17:38:37.701223 2689 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 4 17:38:37.701404 kubelet[2689]: I0904 17:38:37.701234 2689 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/26c0f59f-4184-4fb6-8a4c-cb4f9f354979-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 4 17:38:37.701404 kubelet[2689]: I0904 17:38:37.701256 2689 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 4 17:38:37.701404 kubelet[2689]: I0904 17:38:37.701269 2689 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f6d81b0-6e69-4402-ae88-e5a020af4b7c-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 4 17:38:37.945492 kubelet[2689]: I0904 17:38:37.945450 2689 scope.go:117] "RemoveContainer" containerID="b084658aa53567b878f9aa5aa2939faef4d0b1107a1bf736d8c4273bd244754f" Sep 4 17:38:37.947128 containerd[1568]: time="2024-09-04T17:38:37.946530830Z" level=info msg="RemoveContainer for \"b084658aa53567b878f9aa5aa2939faef4d0b1107a1bf736d8c4273bd244754f\"" Sep 4 17:38:38.089727 containerd[1568]: time="2024-09-04T17:38:38.089667997Z" level=info msg="RemoveContainer for \"b084658aa53567b878f9aa5aa2939faef4d0b1107a1bf736d8c4273bd244754f\" returns successfully" Sep 4 17:38:38.090124 kubelet[2689]: I0904 17:38:38.090085 2689 scope.go:117] "RemoveContainer" containerID="0cd5d18e5548ed1c745d0efbf8493ad37a3bba857c948e9bdc680f1eecb2b143" Sep 4 17:38:38.092053 containerd[1568]: time="2024-09-04T17:38:38.092011883Z" level=info msg="RemoveContainer for \"0cd5d18e5548ed1c745d0efbf8493ad37a3bba857c948e9bdc680f1eecb2b143\"" Sep 4 17:38:38.095628 containerd[1568]: time="2024-09-04T17:38:38.095597028Z" level=info msg="RemoveContainer for \"0cd5d18e5548ed1c745d0efbf8493ad37a3bba857c948e9bdc680f1eecb2b143\" returns successfully" Sep 4 17:38:38.095818 kubelet[2689]: I0904 17:38:38.095782 2689 scope.go:117] "RemoveContainer" containerID="29ebb8c5db5567de3e50034ab6330ac579a662f112ba9e3d4cf2afa21be8621e" Sep 4 17:38:38.096742 containerd[1568]: time="2024-09-04T17:38:38.096715313Z" level=info msg="RemoveContainer for \"29ebb8c5db5567de3e50034ab6330ac579a662f112ba9e3d4cf2afa21be8621e\"" Sep 4 17:38:38.099951 containerd[1568]: time="2024-09-04T17:38:38.099921103Z" level=info msg="RemoveContainer for \"29ebb8c5db5567de3e50034ab6330ac579a662f112ba9e3d4cf2afa21be8621e\" returns successfully" Sep 4 17:38:38.100093 kubelet[2689]: I0904 17:38:38.100058 2689 scope.go:117] "RemoveContainer" containerID="20988220539f5f5b6b69b6c63794568a73bac724c85ceb712e0680dc08466003" Sep 4 17:38:38.100985 containerd[1568]: time="2024-09-04T17:38:38.100947674Z" level=info msg="RemoveContainer for \"20988220539f5f5b6b69b6c63794568a73bac724c85ceb712e0680dc08466003\"" Sep 4 17:38:38.104135 containerd[1568]: time="2024-09-04T17:38:38.104098519Z" level=info msg="RemoveContainer for 
\"20988220539f5f5b6b69b6c63794568a73bac724c85ceb712e0680dc08466003\" returns successfully" Sep 4 17:38:38.104289 kubelet[2689]: I0904 17:38:38.104267 2689 scope.go:117] "RemoveContainer" containerID="60cd6ae78cde539c3b10aa0c5117d21de22e4131831d83d1a99adc9563d53cd5" Sep 4 17:38:38.105173 containerd[1568]: time="2024-09-04T17:38:38.105140058Z" level=info msg="RemoveContainer for \"60cd6ae78cde539c3b10aa0c5117d21de22e4131831d83d1a99adc9563d53cd5\"" Sep 4 17:38:38.108111 containerd[1568]: time="2024-09-04T17:38:38.108083357Z" level=info msg="RemoveContainer for \"60cd6ae78cde539c3b10aa0c5117d21de22e4131831d83d1a99adc9563d53cd5\" returns successfully" Sep 4 17:38:38.108245 kubelet[2689]: I0904 17:38:38.108222 2689 scope.go:117] "RemoveContainer" containerID="b084658aa53567b878f9aa5aa2939faef4d0b1107a1bf736d8c4273bd244754f" Sep 4 17:38:38.108452 containerd[1568]: time="2024-09-04T17:38:38.108415522Z" level=error msg="ContainerStatus for \"b084658aa53567b878f9aa5aa2939faef4d0b1107a1bf736d8c4273bd244754f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b084658aa53567b878f9aa5aa2939faef4d0b1107a1bf736d8c4273bd244754f\": not found" Sep 4 17:38:38.118256 kubelet[2689]: E0904 17:38:38.118222 2689 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b084658aa53567b878f9aa5aa2939faef4d0b1107a1bf736d8c4273bd244754f\": not found" containerID="b084658aa53567b878f9aa5aa2939faef4d0b1107a1bf736d8c4273bd244754f" Sep 4 17:38:38.118354 kubelet[2689]: I0904 17:38:38.118337 2689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b084658aa53567b878f9aa5aa2939faef4d0b1107a1bf736d8c4273bd244754f"} err="failed to get container status \"b084658aa53567b878f9aa5aa2939faef4d0b1107a1bf736d8c4273bd244754f\": rpc error: code = NotFound desc = an error occurred when try to find container \"b084658aa53567b878f9aa5aa2939faef4d0b1107a1bf736d8c4273bd244754f\": not found" Sep 4 17:38:38.118393 kubelet[2689]: I0904 17:38:38.118357 2689 scope.go:117] "RemoveContainer" containerID="0cd5d18e5548ed1c745d0efbf8493ad37a3bba857c948e9bdc680f1eecb2b143" Sep 4 17:38:38.118590 containerd[1568]: time="2024-09-04T17:38:38.118538360Z" level=error msg="ContainerStatus for \"0cd5d18e5548ed1c745d0efbf8493ad37a3bba857c948e9bdc680f1eecb2b143\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0cd5d18e5548ed1c745d0efbf8493ad37a3bba857c948e9bdc680f1eecb2b143\": not found" Sep 4 17:38:38.118685 kubelet[2689]: E0904 17:38:38.118670 2689 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0cd5d18e5548ed1c745d0efbf8493ad37a3bba857c948e9bdc680f1eecb2b143\": not found" containerID="0cd5d18e5548ed1c745d0efbf8493ad37a3bba857c948e9bdc680f1eecb2b143" Sep 4 17:38:38.118725 kubelet[2689]: I0904 17:38:38.118707 2689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0cd5d18e5548ed1c745d0efbf8493ad37a3bba857c948e9bdc680f1eecb2b143"} err="failed to get container status \"0cd5d18e5548ed1c745d0efbf8493ad37a3bba857c948e9bdc680f1eecb2b143\": rpc error: code = NotFound desc = an error occurred when try to find container \"0cd5d18e5548ed1c745d0efbf8493ad37a3bba857c948e9bdc680f1eecb2b143\": not found" Sep 4 17:38:38.118725 kubelet[2689]: I0904 17:38:38.118719 2689 scope.go:117] 
"RemoveContainer" containerID="29ebb8c5db5567de3e50034ab6330ac579a662f112ba9e3d4cf2afa21be8621e" Sep 4 17:38:38.118930 containerd[1568]: time="2024-09-04T17:38:38.118900390Z" level=error msg="ContainerStatus for \"29ebb8c5db5567de3e50034ab6330ac579a662f112ba9e3d4cf2afa21be8621e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"29ebb8c5db5567de3e50034ab6330ac579a662f112ba9e3d4cf2afa21be8621e\": not found" Sep 4 17:38:38.119010 kubelet[2689]: E0904 17:38:38.118995 2689 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"29ebb8c5db5567de3e50034ab6330ac579a662f112ba9e3d4cf2afa21be8621e\": not found" containerID="29ebb8c5db5567de3e50034ab6330ac579a662f112ba9e3d4cf2afa21be8621e" Sep 4 17:38:38.119039 kubelet[2689]: I0904 17:38:38.119014 2689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"29ebb8c5db5567de3e50034ab6330ac579a662f112ba9e3d4cf2afa21be8621e"} err="failed to get container status \"29ebb8c5db5567de3e50034ab6330ac579a662f112ba9e3d4cf2afa21be8621e\": rpc error: code = NotFound desc = an error occurred when try to find container \"29ebb8c5db5567de3e50034ab6330ac579a662f112ba9e3d4cf2afa21be8621e\": not found" Sep 4 17:38:38.119039 kubelet[2689]: I0904 17:38:38.119022 2689 scope.go:117] "RemoveContainer" containerID="20988220539f5f5b6b69b6c63794568a73bac724c85ceb712e0680dc08466003" Sep 4 17:38:38.119194 containerd[1568]: time="2024-09-04T17:38:38.119164645Z" level=error msg="ContainerStatus for \"20988220539f5f5b6b69b6c63794568a73bac724c85ceb712e0680dc08466003\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"20988220539f5f5b6b69b6c63794568a73bac724c85ceb712e0680dc08466003\": not found" Sep 4 17:38:38.119284 kubelet[2689]: E0904 17:38:38.119269 2689 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"20988220539f5f5b6b69b6c63794568a73bac724c85ceb712e0680dc08466003\": not found" containerID="20988220539f5f5b6b69b6c63794568a73bac724c85ceb712e0680dc08466003" Sep 4 17:38:38.119315 kubelet[2689]: I0904 17:38:38.119288 2689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"20988220539f5f5b6b69b6c63794568a73bac724c85ceb712e0680dc08466003"} err="failed to get container status \"20988220539f5f5b6b69b6c63794568a73bac724c85ceb712e0680dc08466003\": rpc error: code = NotFound desc = an error occurred when try to find container \"20988220539f5f5b6b69b6c63794568a73bac724c85ceb712e0680dc08466003\": not found" Sep 4 17:38:38.119315 kubelet[2689]: I0904 17:38:38.119296 2689 scope.go:117] "RemoveContainer" containerID="60cd6ae78cde539c3b10aa0c5117d21de22e4131831d83d1a99adc9563d53cd5" Sep 4 17:38:38.119446 containerd[1568]: time="2024-09-04T17:38:38.119418199Z" level=error msg="ContainerStatus for \"60cd6ae78cde539c3b10aa0c5117d21de22e4131831d83d1a99adc9563d53cd5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"60cd6ae78cde539c3b10aa0c5117d21de22e4131831d83d1a99adc9563d53cd5\": not found" Sep 4 17:38:38.119544 kubelet[2689]: E0904 17:38:38.119527 2689 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"60cd6ae78cde539c3b10aa0c5117d21de22e4131831d83d1a99adc9563d53cd5\": not found" containerID="60cd6ae78cde539c3b10aa0c5117d21de22e4131831d83d1a99adc9563d53cd5" Sep 4 17:38:38.119582 kubelet[2689]: I0904 17:38:38.119549 2689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"60cd6ae78cde539c3b10aa0c5117d21de22e4131831d83d1a99adc9563d53cd5"} err="failed to get container status \"60cd6ae78cde539c3b10aa0c5117d21de22e4131831d83d1a99adc9563d53cd5\": rpc error: code = NotFound desc = an error occurred when try to find container \"60cd6ae78cde539c3b10aa0c5117d21de22e4131831d83d1a99adc9563d53cd5\": not found" Sep 4 17:38:38.119582 kubelet[2689]: I0904 17:38:38.119558 2689 scope.go:117] "RemoveContainer" containerID="c8999b48e5907b1763320e9471d8cd8b8165e879ee3456af0099f5db4b169c44" Sep 4 17:38:38.120384 containerd[1568]: time="2024-09-04T17:38:38.120364856Z" level=info msg="RemoveContainer for \"c8999b48e5907b1763320e9471d8cd8b8165e879ee3456af0099f5db4b169c44\"" Sep 4 17:38:38.123214 containerd[1568]: time="2024-09-04T17:38:38.123184129Z" level=info msg="RemoveContainer for \"c8999b48e5907b1763320e9471d8cd8b8165e879ee3456af0099f5db4b169c44\" returns successfully" Sep 4 17:38:38.123348 kubelet[2689]: I0904 17:38:38.123327 2689 scope.go:117] "RemoveContainer" containerID="c8999b48e5907b1763320e9471d8cd8b8165e879ee3456af0099f5db4b169c44" Sep 4 17:38:38.123555 containerd[1568]: time="2024-09-04T17:38:38.123517256Z" level=error msg="ContainerStatus for \"c8999b48e5907b1763320e9471d8cd8b8165e879ee3456af0099f5db4b169c44\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c8999b48e5907b1763320e9471d8cd8b8165e879ee3456af0099f5db4b169c44\": not found" Sep 4 17:38:38.123684 kubelet[2689]: E0904 17:38:38.123659 2689 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c8999b48e5907b1763320e9471d8cd8b8165e879ee3456af0099f5db4b169c44\": not found" containerID="c8999b48e5907b1763320e9471d8cd8b8165e879ee3456af0099f5db4b169c44" Sep 4 17:38:38.123737 kubelet[2689]: I0904 17:38:38.123694 2689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c8999b48e5907b1763320e9471d8cd8b8165e879ee3456af0099f5db4b169c44"} err="failed to get container status \"c8999b48e5907b1763320e9471d8cd8b8165e879ee3456af0099f5db4b169c44\": rpc error: code = NotFound desc = an error occurred when try to find container \"c8999b48e5907b1763320e9471d8cd8b8165e879ee3456af0099f5db4b169c44\": not found" Sep 4 17:38:38.375073 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-166f68acd81f0aef154cb5227d37058852fca3442bc4ae56be34d2fc0b6fe5f3-rootfs.mount: Deactivated successfully. Sep 4 17:38:38.375318 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f5fdebac422243c4d0bd978d3b3a575b55eb3ede5a0a3fc0a7889f7451c5e531-rootfs.mount: Deactivated successfully. Sep 4 17:38:38.375505 systemd[1]: var-lib-kubelet-pods-7f6d81b0\x2d6e69\x2d4402\x2dae88\x2de5a020af4b7c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz96wd.mount: Deactivated successfully. Sep 4 17:38:38.375818 systemd[1]: var-lib-kubelet-pods-7f6d81b0\x2d6e69\x2d4402\x2dae88\x2de5a020af4b7c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Sep 4 17:38:38.376197 systemd[1]: var-lib-kubelet-pods-7f6d81b0\x2d6e69\x2d4402\x2dae88\x2de5a020af4b7c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 4 17:38:38.376480 systemd[1]: var-lib-kubelet-pods-26c0f59f\x2d4184\x2d4fb6\x2d8a4c\x2dcb4f9f354979-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9fzlh.mount: Deactivated successfully. Sep 4 17:38:38.831676 kubelet[2689]: E0904 17:38:38.831632 2689 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 4 17:38:39.146956 sshd[4362]: pam_unix(sshd:session): session closed for user core Sep 4 17:38:39.155974 systemd[1]: Started sshd@25-10.0.0.18:22-10.0.0.1:47626.service - OpenSSH per-connection server daemon (10.0.0.1:47626). Sep 4 17:38:39.156498 systemd[1]: sshd@24-10.0.0.18:22-10.0.0.1:35024.service: Deactivated successfully. Sep 4 17:38:39.159799 systemd-logind[1549]: Session 25 logged out. Waiting for processes to exit. Sep 4 17:38:39.160638 systemd[1]: session-25.scope: Deactivated successfully. Sep 4 17:38:39.162067 systemd-logind[1549]: Removed session 25. Sep 4 17:38:39.191351 sshd[4530]: Accepted publickey for core from 10.0.0.1 port 47626 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:38:39.192921 sshd[4530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:38:39.197041 systemd-logind[1549]: New session 26 of user core. Sep 4 17:38:39.204981 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 4 17:38:39.620710 sshd[4530]: pam_unix(sshd:session): session closed for user core Sep 4 17:38:39.630182 systemd[1]: Started sshd@26-10.0.0.18:22-10.0.0.1:47636.service - OpenSSH per-connection server daemon (10.0.0.1:47636). Sep 4 17:38:39.630744 systemd[1]: sshd@25-10.0.0.18:22-10.0.0.1:47626.service: Deactivated successfully. Sep 4 17:38:39.640298 systemd[1]: session-26.scope: Deactivated successfully. 
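The var-lib-kubelet-pods-…-volumes-….mount units deactivated above are systemd's escaped form of the kubelet volume paths: "/" becomes "-", while characters such as "-" and "~" inside path components are hex-escaped to \x2d and \x7e. A rough sketch of that escaping follows; it approximates `systemd-escape --path` and is not systemd's exact implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// escapePath approximates systemd's path escaping for unit names: the
// leading and trailing "/" are dropped, every remaining "/" becomes "-",
// ASCII alphanumerics, "_" and "." are kept, and everything else is
// hex-escaped as \xNN, which is why "-" and "~" appear as \x2d and \x7e.
func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c == '_' || c == '.' ||
			(c >= '0' && c <= '9') ||
			(c >= 'a' && c <= 'z') ||
			(c >= 'A' && c <= 'Z'):
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String()
}

func main() {
	// Reconstructs (minus the ".mount" suffix) one of the unit names torn down above.
	fmt.Println(escapePath("/var/lib/kubelet/pods/26c0f59f-4184-4fb6-8a4c-cb4f9f354979/volumes/kubernetes.io~projected/kube-api-access-9fzlh"))
}
```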
Sep 4 17:38:39.645326 kubelet[2689]: I0904 17:38:39.645081 2689 topology_manager.go:215] "Topology Admit Handler" podUID="7cf41fcb-bd89-43c0-87cd-534b52be2a33" podNamespace="kube-system" podName="cilium-c9z9c" Sep 4 17:38:39.645326 kubelet[2689]: E0904 17:38:39.645162 2689 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7f6d81b0-6e69-4402-ae88-e5a020af4b7c" containerName="mount-cgroup" Sep 4 17:38:39.645326 kubelet[2689]: E0904 17:38:39.645172 2689 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7f6d81b0-6e69-4402-ae88-e5a020af4b7c" containerName="mount-bpf-fs" Sep 4 17:38:39.645326 kubelet[2689]: E0904 17:38:39.645179 2689 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7f6d81b0-6e69-4402-ae88-e5a020af4b7c" containerName="cilium-agent" Sep 4 17:38:39.645326 kubelet[2689]: E0904 17:38:39.645186 2689 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="26c0f59f-4184-4fb6-8a4c-cb4f9f354979" containerName="cilium-operator" Sep 4 17:38:39.645326 kubelet[2689]: E0904 17:38:39.645194 2689 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7f6d81b0-6e69-4402-ae88-e5a020af4b7c" containerName="apply-sysctl-overwrites" Sep 4 17:38:39.645326 kubelet[2689]: E0904 17:38:39.645200 2689 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7f6d81b0-6e69-4402-ae88-e5a020af4b7c" containerName="clean-cilium-state" Sep 4 17:38:39.645326 kubelet[2689]: I0904 17:38:39.645222 2689 memory_manager.go:346] "RemoveStaleState removing state" podUID="26c0f59f-4184-4fb6-8a4c-cb4f9f354979" containerName="cilium-operator" Sep 4 17:38:39.645326 kubelet[2689]: I0904 17:38:39.645228 2689 memory_manager.go:346] "RemoveStaleState removing state" podUID="7f6d81b0-6e69-4402-ae88-e5a020af4b7c" containerName="cilium-agent" Sep 4 17:38:39.645915 systemd-logind[1549]: Session 26 logged out. Waiting for processes to exit. Sep 4 17:38:39.653450 systemd-logind[1549]: Removed session 26. Sep 4 17:38:39.689646 sshd[4544]: Accepted publickey for core from 10.0.0.1 port 47636 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:38:39.691432 sshd[4544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:38:39.695650 systemd-logind[1549]: New session 27 of user core. Sep 4 17:38:39.706200 systemd[1]: Started session-27.scope - Session 27 of User core. 
Sep 4 17:38:39.712915 kubelet[2689]: I0904 17:38:39.712863 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7cf41fcb-bd89-43c0-87cd-534b52be2a33-etc-cni-netd\") pod \"cilium-c9z9c\" (UID: \"7cf41fcb-bd89-43c0-87cd-534b52be2a33\") " pod="kube-system/cilium-c9z9c" Sep 4 17:38:39.712915 kubelet[2689]: I0904 17:38:39.712910 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7cf41fcb-bd89-43c0-87cd-534b52be2a33-clustermesh-secrets\") pod \"cilium-c9z9c\" (UID: \"7cf41fcb-bd89-43c0-87cd-534b52be2a33\") " pod="kube-system/cilium-c9z9c" Sep 4 17:38:39.713014 kubelet[2689]: I0904 17:38:39.712942 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7cf41fcb-bd89-43c0-87cd-534b52be2a33-cilium-config-path\") pod \"cilium-c9z9c\" (UID: \"7cf41fcb-bd89-43c0-87cd-534b52be2a33\") " pod="kube-system/cilium-c9z9c" Sep 4 17:38:39.713014 kubelet[2689]: I0904 17:38:39.712971 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7cf41fcb-bd89-43c0-87cd-534b52be2a33-bpf-maps\") pod \"cilium-c9z9c\" (UID: \"7cf41fcb-bd89-43c0-87cd-534b52be2a33\") " pod="kube-system/cilium-c9z9c" Sep 4 17:38:39.713092 kubelet[2689]: I0904 17:38:39.713066 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7cf41fcb-bd89-43c0-87cd-534b52be2a33-host-proc-sys-kernel\") pod \"cilium-c9z9c\" (UID: \"7cf41fcb-bd89-43c0-87cd-534b52be2a33\") " pod="kube-system/cilium-c9z9c" Sep 4 17:38:39.713146 kubelet[2689]: I0904 17:38:39.713132 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxgvg\" (UniqueName: \"kubernetes.io/projected/7cf41fcb-bd89-43c0-87cd-534b52be2a33-kube-api-access-rxgvg\") pod \"cilium-c9z9c\" (UID: \"7cf41fcb-bd89-43c0-87cd-534b52be2a33\") " pod="kube-system/cilium-c9z9c" Sep 4 17:38:39.713171 kubelet[2689]: I0904 17:38:39.713162 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7cf41fcb-bd89-43c0-87cd-534b52be2a33-cilium-cgroup\") pod \"cilium-c9z9c\" (UID: \"7cf41fcb-bd89-43c0-87cd-534b52be2a33\") " pod="kube-system/cilium-c9z9c" Sep 4 17:38:39.713233 kubelet[2689]: I0904 17:38:39.713210 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7cf41fcb-bd89-43c0-87cd-534b52be2a33-xtables-lock\") pod \"cilium-c9z9c\" (UID: \"7cf41fcb-bd89-43c0-87cd-534b52be2a33\") " pod="kube-system/cilium-c9z9c" Sep 4 17:38:39.713274 kubelet[2689]: I0904 17:38:39.713266 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7cf41fcb-bd89-43c0-87cd-534b52be2a33-hubble-tls\") pod \"cilium-c9z9c\" (UID: \"7cf41fcb-bd89-43c0-87cd-534b52be2a33\") " pod="kube-system/cilium-c9z9c" Sep 4 17:38:39.713313 kubelet[2689]: I0904 17:38:39.713303 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7cf41fcb-bd89-43c0-87cd-534b52be2a33-host-proc-sys-net\") pod \"cilium-c9z9c\" (UID: \"7cf41fcb-bd89-43c0-87cd-534b52be2a33\") " pod="kube-system/cilium-c9z9c" Sep 4 17:38:39.713344 kubelet[2689]: I0904 17:38:39.713336 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7cf41fcb-bd89-43c0-87cd-534b52be2a33-cilium-ipsec-secrets\") pod \"cilium-c9z9c\" (UID: \"7cf41fcb-bd89-43c0-87cd-534b52be2a33\") " pod="kube-system/cilium-c9z9c" Sep 4 17:38:39.713383 kubelet[2689]: I0904 17:38:39.713364 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7cf41fcb-bd89-43c0-87cd-534b52be2a33-cni-path\") pod \"cilium-c9z9c\" (UID: \"7cf41fcb-bd89-43c0-87cd-534b52be2a33\") " pod="kube-system/cilium-c9z9c" Sep 4 17:38:39.713407 kubelet[2689]: I0904 17:38:39.713401 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7cf41fcb-bd89-43c0-87cd-534b52be2a33-lib-modules\") pod \"cilium-c9z9c\" (UID: \"7cf41fcb-bd89-43c0-87cd-534b52be2a33\") " pod="kube-system/cilium-c9z9c" Sep 4 17:38:39.713451 kubelet[2689]: I0904 17:38:39.713429 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7cf41fcb-bd89-43c0-87cd-534b52be2a33-cilium-run\") pod \"cilium-c9z9c\" (UID: \"7cf41fcb-bd89-43c0-87cd-534b52be2a33\") " pod="kube-system/cilium-c9z9c" Sep 4 17:38:39.713475 kubelet[2689]: I0904 17:38:39.713463 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7cf41fcb-bd89-43c0-87cd-534b52be2a33-hostproc\") pod \"cilium-c9z9c\" (UID: \"7cf41fcb-bd89-43c0-87cd-534b52be2a33\") " pod="kube-system/cilium-c9z9c" Sep 4 17:38:39.714326 kubelet[2689]: I0904 17:38:39.714286 2689 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="26c0f59f-4184-4fb6-8a4c-cb4f9f354979" path="/var/lib/kubelet/pods/26c0f59f-4184-4fb6-8a4c-cb4f9f354979/volumes" Sep 4 17:38:39.714927 kubelet[2689]: I0904 17:38:39.714904 2689 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7f6d81b0-6e69-4402-ae88-e5a020af4b7c" path="/var/lib/kubelet/pods/7f6d81b0-6e69-4402-ae88-e5a020af4b7c/volumes" Sep 4 17:38:39.759281 sshd[4544]: pam_unix(sshd:session): session closed for user core Sep 4 17:38:39.770006 systemd[1]: Started sshd@27-10.0.0.18:22-10.0.0.1:47638.service - OpenSSH per-connection server daemon (10.0.0.1:47638). Sep 4 17:38:39.770604 systemd[1]: sshd@26-10.0.0.18:22-10.0.0.1:47636.service: Deactivated successfully. Sep 4 17:38:39.773606 systemd-logind[1549]: Session 27 logged out. Waiting for processes to exit. Sep 4 17:38:39.774669 systemd[1]: session-27.scope: Deactivated successfully. Sep 4 17:38:39.775876 systemd-logind[1549]: Removed session 27. Sep 4 17:38:39.803413 sshd[4553]: Accepted publickey for core from 10.0.0.1 port 47638 ssh2: RSA SHA256:0NzOVulgWpYQ7XbqXCDIe/XA4mXr0x7YoOe5x+XZPcU Sep 4 17:38:39.805430 sshd[4553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:38:39.810215 systemd-logind[1549]: New session 28 of user core. Sep 4 17:38:39.816670 systemd[1]: Started session-28.scope - Session 28 of User core. 
Sep 4 17:38:39.970596 kubelet[2689]: E0904 17:38:39.970555 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:39.972482 containerd[1568]: time="2024-09-04T17:38:39.972232877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c9z9c,Uid:7cf41fcb-bd89-43c0-87cd-534b52be2a33,Namespace:kube-system,Attempt:0,}" Sep 4 17:38:40.006257 containerd[1568]: time="2024-09-04T17:38:40.006152217Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:38:40.006257 containerd[1568]: time="2024-09-04T17:38:40.006219024Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:38:40.006257 containerd[1568]: time="2024-09-04T17:38:40.006232680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:38:40.006424 containerd[1568]: time="2024-09-04T17:38:40.006362558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:38:40.041832 containerd[1568]: time="2024-09-04T17:38:40.041787297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c9z9c,Uid:7cf41fcb-bd89-43c0-87cd-534b52be2a33,Namespace:kube-system,Attempt:0,} returns sandbox id \"30ce53304370a53b30f790de211dc019ef0255e32abe5878b06dd812028a2b19\"" Sep 4 17:38:40.042453 kubelet[2689]: E0904 17:38:40.042422 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:40.044607 containerd[1568]: time="2024-09-04T17:38:40.044573472Z" level=info msg="CreateContainer within sandbox \"30ce53304370a53b30f790de211dc019ef0255e32abe5878b06dd812028a2b19\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 4 17:38:40.173368 containerd[1568]: time="2024-09-04T17:38:40.173290565Z" level=info msg="CreateContainer within sandbox \"30ce53304370a53b30f790de211dc019ef0255e32abe5878b06dd812028a2b19\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9a416f73a1b68127459f0344bb83dac4198b478417ed59955302bf57c036b413\"" Sep 4 17:38:40.173996 containerd[1568]: time="2024-09-04T17:38:40.173956967Z" level=info msg="StartContainer for \"9a416f73a1b68127459f0344bb83dac4198b478417ed59955302bf57c036b413\"" Sep 4 17:38:40.231551 containerd[1568]: time="2024-09-04T17:38:40.231406393Z" level=info msg="StartContainer for \"9a416f73a1b68127459f0344bb83dac4198b478417ed59955302bf57c036b413\" returns successfully" Sep 4 17:38:40.278270 containerd[1568]: time="2024-09-04T17:38:40.278179181Z" level=info msg="shim disconnected" id=9a416f73a1b68127459f0344bb83dac4198b478417ed59955302bf57c036b413 namespace=k8s.io Sep 4 17:38:40.278270 containerd[1568]: time="2024-09-04T17:38:40.278244114Z" level=warning msg="cleaning up after shim disconnected" id=9a416f73a1b68127459f0344bb83dac4198b478417ed59955302bf57c036b413 namespace=k8s.io Sep 4 17:38:40.278270 containerd[1568]: time="2024-09-04T17:38:40.278255206Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:38:40.965293 kubelet[2689]: E0904 17:38:40.965246 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:40.967061 containerd[1568]: time="2024-09-04T17:38:40.967002083Z" level=info msg="CreateContainer within sandbox \"30ce53304370a53b30f790de211dc019ef0255e32abe5878b06dd812028a2b19\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 4 17:38:40.981113 containerd[1568]: time="2024-09-04T17:38:40.981055343Z" level=info msg="CreateContainer within sandbox \"30ce53304370a53b30f790de211dc019ef0255e32abe5878b06dd812028a2b19\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f596ca6d618bdb6264e31951a39e8673924427ea80a4e619ef816e5f3c1422c7\"" Sep 4 17:38:40.982134 containerd[1568]: time="2024-09-04T17:38:40.981705223Z" level=info msg="StartContainer for \"f596ca6d618bdb6264e31951a39e8673924427ea80a4e619ef816e5f3c1422c7\"" Sep 4 17:38:41.032255 containerd[1568]: time="2024-09-04T17:38:41.032210858Z" level=info msg="StartContainer for \"f596ca6d618bdb6264e31951a39e8673924427ea80a4e619ef816e5f3c1422c7\" returns successfully" Sep 4 17:38:41.064469 containerd[1568]: time="2024-09-04T17:38:41.064408276Z" level=info msg="shim disconnected" id=f596ca6d618bdb6264e31951a39e8673924427ea80a4e619ef816e5f3c1422c7 namespace=k8s.io Sep 4 17:38:41.064469 containerd[1568]: time="2024-09-04T17:38:41.064458543Z" level=warning msg="cleaning up after shim disconnected" id=f596ca6d618bdb6264e31951a39e8673924427ea80a4e619ef816e5f3c1422c7 namespace=k8s.io Sep 4 17:38:41.064469 containerd[1568]: time="2024-09-04T17:38:41.064466447Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:38:41.821746 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f596ca6d618bdb6264e31951a39e8673924427ea80a4e619ef816e5f3c1422c7-rootfs.mount: Deactivated successfully. 
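The CreateContainer/StartContainer pairs above are the kubelet driving the new cilium-c9z9c pod's init steps (mount-cgroup, apply-sysctl-overwrites) inside sandbox 30ce53304370…: each container is created in the sandbox, started, runs to completion, and its shim and rootfs mount are cleaned up before the next step begins. A minimal containerd-client sketch of that create-and-start step, with a hypothetical image reference since the actual cilium image is not named in the log:

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Hypothetical image reference; adjust to the image actually in use.
	image, err := client.GetImage(ctx, "quay.io/cilium/cilium:v1.14")
	if err != nil {
		log.Fatal(err)
	}

	container, err := client.NewContainer(ctx, "mount-cgroup-demo",
		containerd.WithNewSnapshot("mount-cgroup-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// NewTask + Start correspond to the "StartContainer ... returns successfully"
	// entries; Wait yields the exit status once the init step finishes.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	exited, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	status := <-exited
	code, _, err := status.Result()
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("init container exited with status %d", code)
}
```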
Sep 4 17:38:41.969240 kubelet[2689]: E0904 17:38:41.969195 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:41.972457 containerd[1568]: time="2024-09-04T17:38:41.972377392Z" level=info msg="CreateContainer within sandbox \"30ce53304370a53b30f790de211dc019ef0255e32abe5878b06dd812028a2b19\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 4 17:38:42.013860 containerd[1568]: time="2024-09-04T17:38:42.013804942Z" level=info msg="CreateContainer within sandbox \"30ce53304370a53b30f790de211dc019ef0255e32abe5878b06dd812028a2b19\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e2cc96ef184459a7e4178b2d8b1e0065ac2c7f3306f1f1c870dfe4a176a27992\"" Sep 4 17:38:42.014702 containerd[1568]: time="2024-09-04T17:38:42.014661285Z" level=info msg="StartContainer for \"e2cc96ef184459a7e4178b2d8b1e0065ac2c7f3306f1f1c870dfe4a176a27992\"" Sep 4 17:38:42.077479 containerd[1568]: time="2024-09-04T17:38:42.077158085Z" level=info msg="StartContainer for \"e2cc96ef184459a7e4178b2d8b1e0065ac2c7f3306f1f1c870dfe4a176a27992\" returns successfully" Sep 4 17:38:42.101538 containerd[1568]: time="2024-09-04T17:38:42.101468456Z" level=info msg="shim disconnected" id=e2cc96ef184459a7e4178b2d8b1e0065ac2c7f3306f1f1c870dfe4a176a27992 namespace=k8s.io Sep 4 17:38:42.101538 containerd[1568]: time="2024-09-04T17:38:42.101532318Z" level=warning msg="cleaning up after shim disconnected" id=e2cc96ef184459a7e4178b2d8b1e0065ac2c7f3306f1f1c870dfe4a176a27992 namespace=k8s.io Sep 4 17:38:42.101538 containerd[1568]: time="2024-09-04T17:38:42.101543880Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:38:42.115235 containerd[1568]: time="2024-09-04T17:38:42.115165585Z" level=warning msg="cleanup warnings time=\"2024-09-04T17:38:42Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 4 17:38:42.821932 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2cc96ef184459a7e4178b2d8b1e0065ac2c7f3306f1f1c870dfe4a176a27992-rootfs.mount: Deactivated successfully. Sep 4 17:38:42.973975 kubelet[2689]: E0904 17:38:42.973940 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:42.976229 containerd[1568]: time="2024-09-04T17:38:42.976166712Z" level=info msg="CreateContainer within sandbox \"30ce53304370a53b30f790de211dc019ef0255e32abe5878b06dd812028a2b19\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 4 17:38:42.988862 containerd[1568]: time="2024-09-04T17:38:42.988810754Z" level=info msg="CreateContainer within sandbox \"30ce53304370a53b30f790de211dc019ef0255e32abe5878b06dd812028a2b19\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d2185d75f77de86f5a42c16d9f487abd70312c465dadfec210f9e357b32b1fe9\"" Sep 4 17:38:42.988938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1453559233.mount: Deactivated successfully. 
Sep 4 17:38:42.989612 containerd[1568]: time="2024-09-04T17:38:42.989422721Z" level=info msg="StartContainer for \"d2185d75f77de86f5a42c16d9f487abd70312c465dadfec210f9e357b32b1fe9\"" Sep 4 17:38:43.044576 containerd[1568]: time="2024-09-04T17:38:43.044531536Z" level=info msg="StartContainer for \"d2185d75f77de86f5a42c16d9f487abd70312c465dadfec210f9e357b32b1fe9\" returns successfully" Sep 4 17:38:43.066732 containerd[1568]: time="2024-09-04T17:38:43.066660373Z" level=info msg="shim disconnected" id=d2185d75f77de86f5a42c16d9f487abd70312c465dadfec210f9e357b32b1fe9 namespace=k8s.io Sep 4 17:38:43.066732 containerd[1568]: time="2024-09-04T17:38:43.066717882Z" level=warning msg="cleaning up after shim disconnected" id=d2185d75f77de86f5a42c16d9f487abd70312c465dadfec210f9e357b32b1fe9 namespace=k8s.io Sep 4 17:38:43.066732 containerd[1568]: time="2024-09-04T17:38:43.066727230Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:38:43.713171 kubelet[2689]: E0904 17:38:43.713126 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:43.821897 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d2185d75f77de86f5a42c16d9f487abd70312c465dadfec210f9e357b32b1fe9-rootfs.mount: Deactivated successfully. Sep 4 17:38:43.832741 kubelet[2689]: E0904 17:38:43.832690 2689 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 4 17:38:43.979459 kubelet[2689]: E0904 17:38:43.979324 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:43.981878 containerd[1568]: time="2024-09-04T17:38:43.981826313Z" level=info msg="CreateContainer within sandbox \"30ce53304370a53b30f790de211dc019ef0255e32abe5878b06dd812028a2b19\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 4 17:38:43.997225 containerd[1568]: time="2024-09-04T17:38:43.997170731Z" level=info msg="CreateContainer within sandbox \"30ce53304370a53b30f790de211dc019ef0255e32abe5878b06dd812028a2b19\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fe7851b4d1a9567ed341d6ac8c3b4e698675c68a354e292e9ad35a07b152f982\"" Sep 4 17:38:43.997742 containerd[1568]: time="2024-09-04T17:38:43.997710199Z" level=info msg="StartContainer for \"fe7851b4d1a9567ed341d6ac8c3b4e698675c68a354e292e9ad35a07b152f982\"" Sep 4 17:38:44.069377 containerd[1568]: time="2024-09-04T17:38:44.069335773Z" level=info msg="StartContainer for \"fe7851b4d1a9567ed341d6ac8c3b4e698675c68a354e292e9ad35a07b152f982\" returns successfully" Sep 4 17:38:44.470775 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Sep 4 17:38:44.712713 kubelet[2689]: E0904 17:38:44.712671 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:44.984974 kubelet[2689]: E0904 17:38:44.984943 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:45.344258 kubelet[2689]: I0904 17:38:45.344123 2689 setters.go:552] "Node became not ready" node="localhost" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-09-04T17:38:45Z","lastTransitionTime":"2024-09-04T17:38:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 4 17:38:45.986670 kubelet[2689]: E0904 17:38:45.986613 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:47.620344 systemd-networkd[1233]: lxc_health: Link UP Sep 4 17:38:47.627875 systemd-networkd[1233]: lxc_health: Gained carrier Sep 4 17:38:47.973783 kubelet[2689]: E0904 17:38:47.972132 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:47.987326 kubelet[2689]: I0904 17:38:47.987270 2689 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-c9z9c" podStartSLOduration=8.987226784 podCreationTimestamp="2024-09-04 17:38:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:38:44.996583672 +0000 UTC m=+91.396800345" watchObservedRunningTime="2024-09-04 17:38:47.987226784 +0000 UTC m=+94.387443457" Sep 4 17:38:47.993132 kubelet[2689]: E0904 17:38:47.993085 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:48.340651 kubelet[2689]: E0904 17:38:48.340483 2689 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:46594->127.0.0.1:46391: write tcp 127.0.0.1:46594->127.0.0.1:46391: write: broken pipe Sep 4 17:38:48.814832 systemd-networkd[1233]: lxc_health: Gained IPv6LL Sep 4 17:38:48.995222 kubelet[2689]: E0904 17:38:48.995183 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:50.712246 kubelet[2689]: E0904 17:38:50.712156 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:51.713661 kubelet[2689]: E0904 17:38:51.712959 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:54.638393 sshd[4553]: pam_unix(sshd:session): session closed for user core Sep 4 17:38:54.643138 systemd[1]: sshd@27-10.0.0.18:22-10.0.0.1:47638.service: Deactivated successfully. Sep 4 17:38:54.645693 systemd[1]: session-28.scope: Deactivated successfully. Sep 4 17:38:54.646578 systemd-logind[1549]: Session 28 logged out. Waiting for processes to exit. Sep 4 17:38:54.647592 systemd-logind[1549]: Removed session 28. Sep 4 17:38:54.712485 kubelet[2689]: E0904 17:38:54.712442 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"