Aug 5 22:29:44.963235 kernel: Linux version 6.6.43-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Mon Aug 5 20:36:22 -00 2024
Aug 5 22:29:44.963267 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4763ee6059e6f81f5b007c7bdf42f5dcad676aac40503ddb8a29787eba4ab695
Aug 5 22:29:44.963284 kernel: BIOS-provided physical RAM map:
Aug 5 22:29:44.963294 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Aug 5 22:29:44.963315 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Aug 5 22:29:44.963343 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Aug 5 22:29:44.963364 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdcfff] usable
Aug 5 22:29:44.963375 kernel: BIOS-e820: [mem 0x000000009cfdd000-0x000000009cffffff] reserved
Aug 5 22:29:44.963385 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Aug 5 22:29:44.963398 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Aug 5 22:29:44.963408 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Aug 5 22:29:44.963423 kernel: NX (Execute Disable) protection: active
Aug 5 22:29:44.963433 kernel: APIC: Static calls initialized
Aug 5 22:29:44.963464 kernel: SMBIOS 2.8 present.
Aug 5 22:29:44.963477 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Aug 5 22:29:44.963492 kernel: Hypervisor detected: KVM
Aug 5 22:29:44.963502 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 5 22:29:44.963514 kernel: kvm-clock: using sched offset of 2330653898 cycles
Aug 5 22:29:44.963525 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 5 22:29:44.963536 kernel: tsc: Detected 2794.748 MHz processor
Aug 5 22:29:44.963547 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 5 22:29:44.963560 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 5 22:29:44.963571 kernel: last_pfn = 0x9cfdd max_arch_pfn = 0x400000000
Aug 5 22:29:44.963582 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Aug 5 22:29:44.963597 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 5 22:29:44.963608 kernel: Using GB pages for direct mapping
Aug 5 22:29:44.963619 kernel: ACPI: Early table checksum verification disabled
Aug 5 22:29:44.963632 kernel: ACPI: RSDP 0x00000000000F59C0 000014 (v00 BOCHS )
Aug 5 22:29:44.963644 kernel: ACPI: RSDT 0x000000009CFE1BDD 000034 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 22:29:44.963655 kernel: ACPI: FACP 0x000000009CFE1A79 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 22:29:44.963666 kernel: ACPI: DSDT 0x000000009CFE0040 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 22:29:44.963677 kernel: ACPI: FACS 0x000000009CFE0000 000040
Aug 5 22:29:44.963688 kernel: ACPI: APIC 0x000000009CFE1AED 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 22:29:44.963702 kernel: ACPI: HPET 0x000000009CFE1B7D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 22:29:44.963713 kernel: ACPI: WAET 0x000000009CFE1BB5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 22:29:44.963725 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe1a79-0x9cfe1aec]
Aug 5 22:29:44.963736 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe1a78]
Aug 5 22:29:44.963747 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Aug 5 22:29:44.963758 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe1aed-0x9cfe1b7c]
Aug 5 22:29:44.963769 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe1b7d-0x9cfe1bb4]
Aug 5 22:29:44.963785 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe1bb5-0x9cfe1bdc]
Aug 5 22:29:44.963799 kernel: No NUMA configuration found
Aug 5 22:29:44.963811 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdcfff]
Aug 5 22:29:44.963822 kernel: NODE_DATA(0) allocated [mem 0x9cfd7000-0x9cfdcfff]
Aug 5 22:29:44.963834 kernel: Zone ranges:
Aug 5 22:29:44.963846 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 5 22:29:44.963857 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdcfff]
Aug 5 22:29:44.963872 kernel: Normal empty
Aug 5 22:29:44.963883 kernel: Movable zone start for each node
Aug 5 22:29:44.963895 kernel: Early memory node ranges
Aug 5 22:29:44.963906 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Aug 5 22:29:44.963918 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdcfff]
Aug 5 22:29:44.963929 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdcfff]
Aug 5 22:29:44.963941 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 5 22:29:44.963952 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Aug 5 22:29:44.963964 kernel: On node 0, zone DMA32: 12323 pages in unavailable ranges
Aug 5 22:29:44.963978 kernel: ACPI: PM-Timer IO Port: 0x608
Aug 5 22:29:44.963990 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 5 22:29:44.964001 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 5 22:29:44.964013 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Aug 5 22:29:44.964024 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 5 22:29:44.964036 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 5 22:29:44.964048 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 5 22:29:44.964059 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 5 22:29:44.964071 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 5 22:29:44.964082 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 5 22:29:44.964097 kernel: TSC deadline timer available
Aug 5 22:29:44.964109 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Aug 5 22:29:44.964120 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Aug 5 22:29:44.964132 kernel: kvm-guest: KVM setup pv remote TLB flush
Aug 5 22:29:44.964143 kernel: kvm-guest: setup PV sched yield
Aug 5 22:29:44.964155 kernel: [mem 0x9d000000-0xfeffbfff] available for PCI devices
Aug 5 22:29:44.964166 kernel: Booting paravirtualized kernel on KVM
Aug 5 22:29:44.964178 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 5 22:29:44.964190 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Aug 5 22:29:44.964206 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u524288
Aug 5 22:29:44.964217 kernel: pcpu-alloc: s196904 r8192 d32472 u524288 alloc=1*2097152
Aug 5 22:29:44.964229 kernel: pcpu-alloc: [0] 0 1 2 3
Aug 5 22:29:44.964240 kernel: kvm-guest: PV spinlocks enabled
Aug 5 22:29:44.964252 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 5 22:29:44.964265 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4763ee6059e6f81f5b007c7bdf42f5dcad676aac40503ddb8a29787eba4ab695
Aug 5 22:29:44.964278 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 5 22:29:44.964289 kernel: random: crng init done
Aug 5 22:29:44.964304 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 5 22:29:44.964316 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 5 22:29:44.964327 kernel: Fallback order for Node 0: 0
Aug 5 22:29:44.964349 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632733
Aug 5 22:29:44.964360 kernel: Policy zone: DMA32
Aug 5 22:29:44.964372 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 5 22:29:44.964384 kernel: Memory: 2428452K/2571756K available (12288K kernel code, 2302K rwdata, 22640K rodata, 49372K init, 1972K bss, 143044K reserved, 0K cma-reserved)
Aug 5 22:29:44.964396 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Aug 5 22:29:44.964407 kernel: ftrace: allocating 37659 entries in 148 pages
Aug 5 22:29:44.964423 kernel: ftrace: allocated 148 pages with 3 groups
Aug 5 22:29:44.964435 kernel: Dynamic Preempt: voluntary
Aug 5 22:29:44.964470 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 5 22:29:44.964482 kernel: rcu: RCU event tracing is enabled.
Aug 5 22:29:44.964494 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Aug 5 22:29:44.964506 kernel: Trampoline variant of Tasks RCU enabled.
Aug 5 22:29:44.964518 kernel: Rude variant of Tasks RCU enabled.
Aug 5 22:29:44.964530 kernel: Tracing variant of Tasks RCU enabled.
Aug 5 22:29:44.964541 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 5 22:29:44.964557 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Aug 5 22:29:44.964568 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Aug 5 22:29:44.964580 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 5 22:29:44.964592 kernel: Console: colour VGA+ 80x25
Aug 5 22:29:44.964603 kernel: printk: console [ttyS0] enabled
Aug 5 22:29:44.964614 kernel: ACPI: Core revision 20230628
Aug 5 22:29:44.964626 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Aug 5 22:29:44.964638 kernel: APIC: Switch to symmetric I/O mode setup
Aug 5 22:29:44.964649 kernel: x2apic enabled
Aug 5 22:29:44.964664 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 5 22:29:44.964676 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Aug 5 22:29:44.964688 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Aug 5 22:29:44.964699 kernel: kvm-guest: setup PV IPIs
Aug 5 22:29:44.964711 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Aug 5 22:29:44.964722 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Aug 5 22:29:44.964734 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Aug 5 22:29:44.964746 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Aug 5 22:29:44.964772 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Aug 5 22:29:44.964784 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Aug 5 22:29:44.964796 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 5 22:29:44.964808 kernel: Spectre V2 : Mitigation: Retpolines
Aug 5 22:29:44.964824 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Aug 5 22:29:44.964836 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Aug 5 22:29:44.964849 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Aug 5 22:29:44.964860 kernel: RETBleed: Mitigation: untrained return thunk
Aug 5 22:29:44.964873 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 5 22:29:44.964889 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Aug 5 22:29:44.964902 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Aug 5 22:29:44.964914 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Aug 5 22:29:44.964927 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Aug 5 22:29:44.964939 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 5 22:29:44.964952 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 5 22:29:44.964964 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 5 22:29:44.964976 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 5 22:29:44.964992 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Aug 5 22:29:44.965004 kernel: Freeing SMP alternatives memory: 32K
Aug 5 22:29:44.965016 kernel: pid_max: default: 32768 minimum: 301
Aug 5 22:29:44.965028 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Aug 5 22:29:44.965041 kernel: SELinux: Initializing.
Aug 5 22:29:44.965053 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 5 22:29:44.965065 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 5 22:29:44.965078 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Aug 5 22:29:44.965090 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Aug 5 22:29:44.965105 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Aug 5 22:29:44.965118 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Aug 5 22:29:44.965130 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Aug 5 22:29:44.965142 kernel: ... version: 0
Aug 5 22:29:44.965154 kernel: ... bit width: 48
Aug 5 22:29:44.965167 kernel: ... generic registers: 6
Aug 5 22:29:44.965179 kernel: ... value mask: 0000ffffffffffff
Aug 5 22:29:44.965191 kernel: ... max period: 00007fffffffffff
Aug 5 22:29:44.965203 kernel: ... fixed-purpose events: 0
Aug 5 22:29:44.965218 kernel: ... event mask: 000000000000003f
Aug 5 22:29:44.965231 kernel: signal: max sigframe size: 1776
Aug 5 22:29:44.965242 kernel: rcu: Hierarchical SRCU implementation.
Aug 5 22:29:44.965255 kernel: rcu: Max phase no-delay instances is 400.
Aug 5 22:29:44.965267 kernel: smp: Bringing up secondary CPUs ...
Aug 5 22:29:44.965279 kernel: smpboot: x86: Booting SMP configuration:
Aug 5 22:29:44.965291 kernel: .... node #0, CPUs: #1 #2 #3
Aug 5 22:29:44.965304 kernel: smp: Brought up 1 node, 4 CPUs
Aug 5 22:29:44.965316 kernel: smpboot: Max logical packages: 1
Aug 5 22:29:44.965340 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Aug 5 22:29:44.965353 kernel: devtmpfs: initialized
Aug 5 22:29:44.965365 kernel: x86/mm: Memory block size: 128MB
Aug 5 22:29:44.965378 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 5 22:29:44.965390 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Aug 5 22:29:44.965402 kernel: pinctrl core: initialized pinctrl subsystem
Aug 5 22:29:44.965415 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 5 22:29:44.965427 kernel: audit: initializing netlink subsys (disabled)
Aug 5 22:29:44.965454 kernel: audit: type=2000 audit(1722896983.488:1): state=initialized audit_enabled=0 res=1
Aug 5 22:29:44.965471 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 5 22:29:44.965483 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 5 22:29:44.965495 kernel: cpuidle: using governor menu
Aug 5 22:29:44.965508 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 5 22:29:44.965520 kernel: dca service started, version 1.12.1
Aug 5 22:29:44.965532 kernel: PCI: Using configuration type 1 for base access
Aug 5 22:29:44.965544 kernel: PCI: Using configuration type 1 for extended access
Aug 5 22:29:44.965557 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 5 22:29:44.965569 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 5 22:29:44.965585 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Aug 5 22:29:44.965597 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 5 22:29:44.965609 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 5 22:29:44.965622 kernel: ACPI: Added _OSI(Module Device)
Aug 5 22:29:44.965633 kernel: ACPI: Added _OSI(Processor Device)
Aug 5 22:29:44.965647 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Aug 5 22:29:44.965661 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 5 22:29:44.965674 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 5 22:29:44.965688 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Aug 5 22:29:44.965706 kernel: ACPI: Interpreter enabled
Aug 5 22:29:44.965719 kernel: ACPI: PM: (supports S0 S3 S5)
Aug 5 22:29:44.965731 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 5 22:29:44.965743 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 5 22:29:44.965756 kernel: PCI: Using E820 reservations for host bridge windows
Aug 5 22:29:44.965768 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Aug 5 22:29:44.965780 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 5 22:29:44.966026 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 5 22:29:44.966052 kernel: acpiphp: Slot [3] registered
Aug 5 22:29:44.966065 kernel: acpiphp: Slot [4] registered
Aug 5 22:29:44.966077 kernel: acpiphp: Slot [5] registered
Aug 5 22:29:44.966090 kernel: acpiphp: Slot [6] registered
Aug 5 22:29:44.966102 kernel: acpiphp: Slot [7] registered
Aug 5 22:29:44.966114 kernel: acpiphp: Slot [8] registered
Aug 5 22:29:44.966126 kernel: acpiphp: Slot [9] registered
Aug 5 22:29:44.966138 kernel: acpiphp: Slot [10] registered
Aug 5 22:29:44.966150 kernel: acpiphp: Slot [11] registered
Aug 5 22:29:44.966163 kernel: acpiphp: Slot [12] registered
Aug 5 22:29:44.966179 kernel: acpiphp: Slot [13] registered
Aug 5 22:29:44.966190 kernel: acpiphp: Slot [14] registered
Aug 5 22:29:44.966203 kernel: acpiphp: Slot [15] registered
Aug 5 22:29:44.966215 kernel: acpiphp: Slot [16] registered
Aug 5 22:29:44.966227 kernel: acpiphp: Slot [17] registered
Aug 5 22:29:44.966239 kernel: acpiphp: Slot [18] registered
Aug 5 22:29:44.966251 kernel: acpiphp: Slot [19] registered
Aug 5 22:29:44.966263 kernel: acpiphp: Slot [20] registered
Aug 5 22:29:44.966275 kernel: acpiphp: Slot [21] registered
Aug 5 22:29:44.966291 kernel: acpiphp: Slot [22] registered
Aug 5 22:29:44.966303 kernel: acpiphp: Slot [23] registered
Aug 5 22:29:44.966315 kernel: acpiphp: Slot [24] registered
Aug 5 22:29:44.966327 kernel: acpiphp: Slot [25] registered
Aug 5 22:29:44.966350 kernel: acpiphp: Slot [26] registered
Aug 5 22:29:44.966362 kernel: acpiphp: Slot [27] registered
Aug 5 22:29:44.966374 kernel: acpiphp: Slot [28] registered
Aug 5 22:29:44.966386 kernel: acpiphp: Slot [29] registered
Aug 5 22:29:44.966399 kernel: acpiphp: Slot [30] registered
Aug 5 22:29:44.966411 kernel: acpiphp: Slot [31] registered
Aug 5 22:29:44.966427 kernel: PCI host bridge to bus 0000:00
Aug 5 22:29:44.966635 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 5 22:29:44.966801 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 5 22:29:44.966959 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 5 22:29:44.967117 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window]
Aug 5 22:29:44.967273 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Aug 5 22:29:44.967456 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 5 22:29:44.967661 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Aug 5 22:29:44.967860 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Aug 5 22:29:44.968048 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Aug 5 22:29:44.968221 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf]
Aug 5 22:29:44.968403 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Aug 5 22:29:44.968596 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Aug 5 22:29:44.968785 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Aug 5 22:29:44.968958 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Aug 5 22:29:44.969143 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Aug 5 22:29:44.969319 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Aug 5 22:29:44.969525 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Aug 5 22:29:44.969716 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000
Aug 5 22:29:44.969895 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Aug 5 22:29:44.970067 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Aug 5 22:29:44.970242 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Aug 5 22:29:44.970429 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 5 22:29:44.970643 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00
Aug 5 22:29:44.970819 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc09f]
Aug 5 22:29:44.970993 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Aug 5 22:29:44.971176 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Aug 5 22:29:44.971375 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Aug 5 22:29:44.971577 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Aug 5 22:29:44.971792 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Aug 5 22:29:44.971967 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Aug 5 22:29:44.972155 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Aug 5 22:29:44.972341 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0a0-0xc0bf]
Aug 5 22:29:44.972560 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Aug 5 22:29:44.972794 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Aug 5 22:29:44.972974 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Aug 5 22:29:44.972992 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 5 22:29:44.973004 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 5 22:29:44.973017 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 5 22:29:44.973029 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 5 22:29:44.973041 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Aug 5 22:29:44.973053 kernel: iommu: Default domain type: Translated
Aug 5 22:29:44.973071 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 5 22:29:44.973083 kernel: PCI: Using ACPI for IRQ routing
Aug 5 22:29:44.973096 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 5 22:29:44.973108 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Aug 5 22:29:44.973120 kernel: e820: reserve RAM buffer [mem 0x9cfdd000-0x9fffffff]
Aug 5 22:29:44.973294 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Aug 5 22:29:44.973540 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Aug 5 22:29:44.973715 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 5 22:29:44.973738 kernel: vgaarb: loaded
Aug 5 22:29:44.973751 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Aug 5 22:29:44.973764 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Aug 5 22:29:44.973776 kernel: clocksource: Switched to clocksource kvm-clock
Aug 5 22:29:44.973789 kernel: VFS: Disk quotas dquot_6.6.0
Aug 5 22:29:44.973801 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 5 22:29:44.973813 kernel: pnp: PnP ACPI init
Aug 5 22:29:44.974005 kernel: pnp 00:02: [dma 2]
Aug 5 22:29:44.974028 kernel: pnp: PnP ACPI: found 6 devices
Aug 5 22:29:44.974041 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 5 22:29:44.974053 kernel: NET: Registered PF_INET protocol family
Aug 5 22:29:44.974066 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 5 22:29:44.974079 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 5 22:29:44.974091 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 5 22:29:44.974104 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 5 22:29:44.974116 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 5 22:29:44.974129 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 5 22:29:44.974145 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 5 22:29:44.974157 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 5 22:29:44.974169 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 5 22:29:44.974182 kernel: NET: Registered PF_XDP protocol family
Aug 5 22:29:44.974353 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 5 22:29:44.974560 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 5 22:29:44.974729 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 5 22:29:44.974888 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window]
Aug 5 22:29:44.975053 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Aug 5 22:29:44.975230 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Aug 5 22:29:44.975419 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Aug 5 22:29:44.975452 kernel: PCI: CLS 0 bytes, default 64
Aug 5 22:29:44.975465 kernel: Initialise system trusted keyrings
Aug 5 22:29:44.975495 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 5 22:29:44.975507 kernel: Key type asymmetric registered
Aug 5 22:29:44.975520 kernel: Asymmetric key parser 'x509' registered
Aug 5 22:29:44.975532 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Aug 5 22:29:44.975549 kernel: io scheduler mq-deadline registered
Aug 5 22:29:44.975567 kernel: io scheduler kyber registered
Aug 5 22:29:44.975580 kernel: io scheduler bfq registered
Aug 5 22:29:44.975593 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 5 22:29:44.975605 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Aug 5 22:29:44.975618 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Aug 5 22:29:44.975630 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Aug 5 22:29:44.975644 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 5 22:29:44.975658 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 5 22:29:44.975674 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 5 22:29:44.975686 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 5 22:29:44.975699 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 5 22:29:44.975711 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 5 22:29:44.975895 kernel: rtc_cmos 00:05: RTC can wake from S4
Aug 5 22:29:44.976059 kernel: rtc_cmos 00:05: registered as rtc0
Aug 5 22:29:44.976222 kernel: rtc_cmos 00:05: setting system clock to 2024-08-05T22:29:44 UTC (1722896984)
Aug 5 22:29:44.976401 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Aug 5 22:29:44.976424 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Aug 5 22:29:44.976549 kernel: NET: Registered PF_INET6 protocol family
Aug 5 22:29:44.976563 kernel: Segment Routing with IPv6
Aug 5 22:29:44.976575 kernel: In-situ OAM (IOAM) with IPv6
Aug 5 22:29:44.976588 kernel: NET: Registered PF_PACKET protocol family
Aug 5 22:29:44.976600 kernel: Key type dns_resolver registered
Aug 5 22:29:44.976612 kernel: IPI shorthand broadcast: enabled
Aug 5 22:29:44.976625 kernel: sched_clock: Marking stable (779002309, 126258231)->(921972247, -16711707)
Aug 5 22:29:44.976637 kernel: registered taskstats version 1
Aug 5 22:29:44.976654 kernel: Loading compiled-in X.509 certificates
Aug 5 22:29:44.976666 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.43-flatcar: d8f193b4a33a492a73da7ce4522bbc835ec39532'
Aug 5 22:29:44.976679 kernel: Key type .fscrypt registered
Aug 5 22:29:44.976690 kernel: Key type fscrypt-provisioning registered
Aug 5 22:29:44.976703 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 5 22:29:44.976715 kernel: ima: Allocated hash algorithm: sha1
Aug 5 22:29:44.976728 kernel: ima: No architecture policies found
Aug 5 22:29:44.976739 kernel: clk: Disabling unused clocks
Aug 5 22:29:44.976755 kernel: Freeing unused kernel image (initmem) memory: 49372K
Aug 5 22:29:44.976768 kernel: Write protecting the kernel read-only data: 36864k
Aug 5 22:29:44.976780 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K
Aug 5 22:29:44.976792 kernel: Run /init as init process
Aug 5 22:29:44.976804 kernel: with arguments:
Aug 5 22:29:44.976816 kernel: /init
Aug 5 22:29:44.976828 kernel: with environment:
Aug 5 22:29:44.976840 kernel: HOME=/
Aug 5 22:29:44.976878 kernel: TERM=linux
Aug 5 22:29:44.976893 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 5 22:29:44.976912 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 5 22:29:44.976928 systemd[1]: Detected virtualization kvm.
Aug 5 22:29:44.976942 systemd[1]: Detected architecture x86-64.
Aug 5 22:29:44.976955 systemd[1]: Running in initrd.
Aug 5 22:29:44.976968 systemd[1]: No hostname configured, using default hostname.
Aug 5 22:29:44.976981 systemd[1]: Hostname set to .
Aug 5 22:29:44.976999 systemd[1]: Initializing machine ID from VM UUID.
Aug 5 22:29:44.977012 systemd[1]: Queued start job for default target initrd.target.
Aug 5 22:29:44.977025 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 5 22:29:44.977039 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 5 22:29:44.977054 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 5 22:29:44.977067 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 5 22:29:44.977081 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 5 22:29:44.977095 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 5 22:29:44.977115 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 5 22:29:44.977129 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 5 22:29:44.977143 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 5 22:29:44.977157 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 5 22:29:44.977170 systemd[1]: Reached target paths.target - Path Units.
Aug 5 22:29:44.977183 systemd[1]: Reached target slices.target - Slice Units.
Aug 5 22:29:44.977197 systemd[1]: Reached target swap.target - Swaps.
Aug 5 22:29:44.977214 systemd[1]: Reached target timers.target - Timer Units.
Aug 5 22:29:44.977227 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 5 22:29:44.977240 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 5 22:29:44.977254 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 5 22:29:44.977268 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 5 22:29:44.977282 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 5 22:29:44.977295 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 5 22:29:44.977309 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 5 22:29:44.977322 systemd[1]: Reached target sockets.target - Socket Units.
Aug 5 22:29:44.977351 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 5 22:29:44.977364 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 5 22:29:44.977378 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 5 22:29:44.977392 systemd[1]: Starting systemd-fsck-usr.service...
Aug 5 22:29:44.977405 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 5 22:29:44.977422 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 5 22:29:44.977449 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 22:29:44.977463 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 5 22:29:44.977478 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 5 22:29:44.977491 systemd[1]: Finished systemd-fsck-usr.service.
Aug 5 22:29:44.977506 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 5 22:29:44.977549 systemd-journald[193]: Collecting audit messages is disabled.
Aug 5 22:29:44.977583 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 5 22:29:44.977600 systemd-journald[193]: Journal started
Aug 5 22:29:44.977627 systemd-journald[193]: Runtime Journal (/run/log/journal/e4d58b8c1d6349b0a42a4052f090367d) is 6.0M, max 48.4M, 42.3M free.
Aug 5 22:29:44.961039 systemd-modules-load[194]: Inserted module 'overlay'
Aug 5 22:29:44.999789 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 5 22:29:44.999809 kernel: Bridge firewalling registered
Aug 5 22:29:44.994202 systemd-modules-load[194]: Inserted module 'br_netfilter'
Aug 5 22:29:45.001460 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 5 22:29:45.001647 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 5 22:29:45.011645 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 5 22:29:45.013569 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 5 22:29:45.015680 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Aug 5 22:29:45.019496 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:29:45.023574 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 5 22:29:45.031955 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 5 22:29:45.035537 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 5 22:29:45.037290 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Aug 5 22:29:45.039982 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 5 22:29:45.043832 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 22:29:45.058016 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 5 22:29:45.075289 dracut-cmdline[229]: dracut-dracut-053
Aug 5 22:29:45.078404 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4763ee6059e6f81f5b007c7bdf42f5dcad676aac40503ddb8a29787eba4ab695
Aug 5 22:29:45.090617 systemd-resolved[227]: Positive Trust Anchors:
Aug 5 22:29:45.090632 systemd-resolved[227]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 5 22:29:45.090672 systemd-resolved[227]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Aug 5 22:29:45.093950 systemd-resolved[227]: Defaulting to hostname 'linux'.
Aug 5 22:29:45.095226 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 5 22:29:45.101714 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 5 22:29:45.170494 kernel: SCSI subsystem initialized
Aug 5 22:29:45.182489 kernel: Loading iSCSI transport class v2.0-870.
Aug 5 22:29:45.196476 kernel: iscsi: registered transport (tcp)
Aug 5 22:29:45.222633 kernel: iscsi: registered transport (qla4xxx)
Aug 5 22:29:45.222732 kernel: QLogic iSCSI HBA Driver
Aug 5 22:29:45.282598 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 5 22:29:45.299632 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 5 22:29:45.334691 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 5 22:29:45.334750 kernel: device-mapper: uevent: version 1.0.3
Aug 5 22:29:45.336117 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 5 22:29:45.383474 kernel: raid6: avx2x4 gen() 29875 MB/s
Aug 5 22:29:45.400466 kernel: raid6: avx2x2 gen() 30774 MB/s
Aug 5 22:29:45.417569 kernel: raid6: avx2x1 gen() 25628 MB/s
Aug 5 22:29:45.417613 kernel: raid6: using algorithm avx2x2 gen() 30774 MB/s
Aug 5 22:29:45.435581 kernel: raid6: .... xor() 19448 MB/s, rmw enabled
Aug 5 22:29:45.435624 kernel: raid6: using avx2x2 recovery algorithm
Aug 5 22:29:45.460472 kernel: xor: automatically using best checksumming function avx
Aug 5 22:29:45.648482 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 5 22:29:45.663879 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 5 22:29:45.673687 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 5 22:29:45.689535 systemd-udevd[412]: Using default interface naming scheme 'v255'.
Aug 5 22:29:45.695260 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 5 22:29:45.707633 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 5 22:29:45.722458 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation
Aug 5 22:29:45.759381 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 5 22:29:45.769778 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 5 22:29:45.845427 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 5 22:29:45.853613 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 5 22:29:45.881651 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Aug 5 22:29:45.908351 kernel: cryptd: max_cpu_qlen set to 1000
Aug 5 22:29:45.908369 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Aug 5 22:29:45.908531 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 5 22:29:45.908543 kernel: GPT:9289727 != 19775487
Aug 5 22:29:45.908554 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 5 22:29:45.908570 kernel: GPT:9289727 != 19775487
Aug 5 22:29:45.908580 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 5 22:29:45.908590 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 5 22:29:45.908601 kernel: AVX2 version of gcm_enc/dec engaged.
Aug 5 22:29:45.883727 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 5 22:29:45.924506 kernel: AES CTR mode by8 optimization enabled
Aug 5 22:29:45.887070 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 5 22:29:45.892455 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 5 22:29:45.894022 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 5 22:29:45.905584 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 5 22:29:45.929066 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 5 22:29:45.929191 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 22:29:45.939901 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 5 22:29:45.943591 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 5 22:29:45.943739 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:29:45.949813 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 22:29:45.961463 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (462)
Aug 5 22:29:45.965632 kernel: libata version 3.00 loaded.
Aug 5 22:29:45.965667 kernel: BTRFS: device fsid 24d7efdf-5582-42d2-aafd-43221656b08f devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (472)
Aug 5 22:29:45.968689 kernel: ata_piix 0000:00:01.1: version 2.13
Aug 5 22:29:45.976040 kernel: scsi host0: ata_piix
Aug 5 22:29:45.976267 kernel: scsi host1: ata_piix
Aug 5 22:29:45.976507 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14
Aug 5 22:29:45.976525 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15
Aug 5 22:29:45.976804 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 22:29:45.980527 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 5 22:29:46.002753 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Aug 5 22:29:46.041070 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:29:46.050035 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Aug 5 22:29:46.057530 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Aug 5 22:29:46.059106 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Aug 5 22:29:46.070109 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 5 22:29:46.080736 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 5 22:29:46.084679 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 5 22:29:46.109812 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 22:29:46.131473 kernel: ata2: found unknown device (class 0)
Aug 5 22:29:46.133486 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Aug 5 22:29:46.135507 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Aug 5 22:29:46.185002 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Aug 5 22:29:46.197594 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Aug 5 22:29:46.197618 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Aug 5 22:29:46.201621 disk-uuid[550]: Primary Header is updated.
Aug 5 22:29:46.201621 disk-uuid[550]: Secondary Entries is updated.
Aug 5 22:29:46.201621 disk-uuid[550]: Secondary Header is updated.
Aug 5 22:29:46.206478 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 5 22:29:46.212487 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 5 22:29:47.223404 disk-uuid[562]: The operation has completed successfully.
Aug 5 22:29:47.225059 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 5 22:29:47.681070 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 5 22:29:47.681201 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 5 22:29:47.692788 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 5 22:29:47.699697 sh[578]: Success
Aug 5 22:29:47.714484 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Aug 5 22:29:47.751860 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 5 22:29:47.762591 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 5 22:29:47.768587 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 5 22:29:47.779564 kernel: BTRFS info (device dm-0): first mount of filesystem 24d7efdf-5582-42d2-aafd-43221656b08f
Aug 5 22:29:47.779616 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Aug 5 22:29:47.779627 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 5 22:29:47.780768 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 5 22:29:47.782478 kernel: BTRFS info (device dm-0): using free space tree
Aug 5 22:29:47.787071 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 5 22:29:47.787841 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 5 22:29:47.795700 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 5 22:29:47.797797 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 5 22:29:47.912994 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 5 22:29:47.917685 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 5 22:29:48.125808 kernel: BTRFS info (device vda6): first mount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b
Aug 5 22:29:48.125914 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 5 22:29:48.125931 kernel: BTRFS info (device vda6): using free space tree
Aug 5 22:29:48.130530 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 5 22:29:48.142867 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 5 22:29:48.145070 kernel: BTRFS info (device vda6): last unmount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b
Aug 5 22:29:48.152924 systemd-networkd[736]: lo: Link UP
Aug 5 22:29:48.152935 systemd-networkd[736]: lo: Gained carrier
Aug 5 22:29:48.154597 systemd-networkd[736]: Enumeration completed
Aug 5 22:29:48.154988 systemd-networkd[736]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 22:29:48.154992 systemd-networkd[736]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 5 22:29:48.155339 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 5 22:29:48.155779 systemd-networkd[736]: eth0: Link UP
Aug 5 22:29:48.155784 systemd-networkd[736]: eth0: Gained carrier
Aug 5 22:29:48.155791 systemd-networkd[736]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 22:29:48.158285 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 5 22:29:48.163289 systemd[1]: Reached target network.target - Network.
Aug 5 22:29:48.167522 systemd-networkd[736]: eth0: DHCPv4 address 10.0.0.102/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 5 22:29:48.172632 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 5 22:29:48.297522 ignition[760]: Ignition 2.19.0
Aug 5 22:29:48.297535 ignition[760]: Stage: fetch-offline
Aug 5 22:29:48.297589 ignition[760]: no configs at "/usr/lib/ignition/base.d"
Aug 5 22:29:48.297600 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 22:29:48.297701 ignition[760]: parsed url from cmdline: ""
Aug 5 22:29:48.297705 ignition[760]: no config URL provided
Aug 5 22:29:48.297711 ignition[760]: reading system config file "/usr/lib/ignition/user.ign"
Aug 5 22:29:48.297723 ignition[760]: no config at "/usr/lib/ignition/user.ign"
Aug 5 22:29:48.297756 ignition[760]: op(1): [started] loading QEMU firmware config module
Aug 5 22:29:48.297762 ignition[760]: op(1): executing: "modprobe" "qemu_fw_cfg"
Aug 5 22:29:48.318168 ignition[760]: op(1): [finished] loading QEMU firmware config module
Aug 5 22:29:48.359188 ignition[760]: parsing config with SHA512: 369df445f40d03a60227f326cc8a69b414cd381f94173fd5aaef3cf8971156da07661d7393ab43d81bbd350dc16823ab234b689691a56cef66404fed42a968a0
Aug 5 22:29:48.363049 unknown[760]: fetched base config from "system"
Aug 5 22:29:48.363616 unknown[760]: fetched user config from "qemu"
Aug 5 22:29:48.364595 ignition[760]: fetch-offline: fetch-offline passed
Aug 5 22:29:48.364684 ignition[760]: Ignition finished successfully
Aug 5 22:29:48.366688 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 5 22:29:48.369120 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Aug 5 22:29:48.376693 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 5 22:29:48.402468 ignition[773]: Ignition 2.19.0
Aug 5 22:29:48.402480 ignition[773]: Stage: kargs
Aug 5 22:29:48.402681 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Aug 5 22:29:48.402694 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 22:29:48.403836 ignition[773]: kargs: kargs passed
Aug 5 22:29:48.403912 ignition[773]: Ignition finished successfully
Aug 5 22:29:48.407186 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 5 22:29:48.424701 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 5 22:29:48.441584 ignition[782]: Ignition 2.19.0
Aug 5 22:29:48.441600 ignition[782]: Stage: disks
Aug 5 22:29:48.441847 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Aug 5 22:29:48.441862 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 22:29:48.443064 ignition[782]: disks: disks passed
Aug 5 22:29:48.445081 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 5 22:29:48.443126 ignition[782]: Ignition finished successfully
Aug 5 22:29:48.447216 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 5 22:29:48.449507 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 5 22:29:48.451035 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 5 22:29:48.452249 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 5 22:29:48.454188 systemd[1]: Reached target basic.target - Basic System.
Aug 5 22:29:48.484748 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 5 22:29:48.509363 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Aug 5 22:29:48.518908 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 5 22:29:48.527574 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 5 22:29:48.653492 kernel: EXT4-fs (vda9): mounted filesystem b6919f21-4a66-43c1-b816-e6fe5d1b75ef r/w with ordered data mode. Quota mode: none.
Aug 5 22:29:48.654289 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 5 22:29:48.655803 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 5 22:29:48.676578 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 5 22:29:48.678599 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 5 22:29:48.680056 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 5 22:29:48.686577 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (801)
Aug 5 22:29:48.680119 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 5 22:29:48.680153 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 5 22:29:48.695537 kernel: BTRFS info (device vda6): first mount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b
Aug 5 22:29:48.695569 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 5 22:29:48.695585 kernel: BTRFS info (device vda6): using free space tree
Aug 5 22:29:48.695610 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 5 22:29:48.690688 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 5 22:29:48.696732 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 5 22:29:48.708654 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 5 22:29:48.755120 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory
Aug 5 22:29:48.759966 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory
Aug 5 22:29:48.764319 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory
Aug 5 22:29:48.768321 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 5 22:29:48.868149 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 5 22:29:48.875662 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 5 22:29:48.878407 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 5 22:29:48.886111 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 5 22:29:48.888291 kernel: BTRFS info (device vda6): last unmount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b
Aug 5 22:29:48.913495 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 5 22:29:48.924289 ignition[914]: INFO : Ignition 2.19.0
Aug 5 22:29:48.924289 ignition[914]: INFO : Stage: mount
Aug 5 22:29:48.926250 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 5 22:29:48.926250 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 22:29:48.926250 ignition[914]: INFO : mount: mount passed
Aug 5 22:29:48.926250 ignition[914]: INFO : Ignition finished successfully
Aug 5 22:29:48.932537 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 5 22:29:48.940633 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 5 22:29:48.950197 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 5 22:29:48.967159 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (928) Aug 5 22:29:48.967243 kernel: BTRFS info (device vda6): first mount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b Aug 5 22:29:48.967273 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 5 22:29:48.968218 kernel: BTRFS info (device vda6): using free space tree Aug 5 22:29:48.973493 kernel: BTRFS info (device vda6): auto enabling async discard Aug 5 22:29:48.975619 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 5 22:29:49.034376 ignition[945]: INFO : Ignition 2.19.0 Aug 5 22:29:49.034376 ignition[945]: INFO : Stage: files Aug 5 22:29:49.036721 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 5 22:29:49.036721 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 5 22:29:49.036721 ignition[945]: DEBUG : files: compiled without relabeling support, skipping Aug 5 22:29:49.036721 ignition[945]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 5 22:29:49.036721 ignition[945]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 5 22:29:49.044953 ignition[945]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 5 22:29:49.044953 ignition[945]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 5 22:29:49.044953 ignition[945]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 5 22:29:49.044953 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 5 22:29:49.044953 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Aug 5 22:29:49.039124 unknown[945]: wrote ssh authorized keys file for user: core Aug 5 22:29:49.078077 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 5 22:29:49.162020 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 5 22:29:49.162020 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 5 22:29:49.166309 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Aug 5 22:29:49.653272 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 5 22:29:49.832183 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 5 22:29:49.832183 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Aug 5 22:29:49.836297 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Aug 5 22:29:49.836297 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 5 22:29:49.836297 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 5 22:29:49.836297 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] 
writing file "/sysroot/home/core/nfs-pod.yaml" Aug 5 22:29:49.836297 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 5 22:29:49.836297 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 5 22:29:49.836297 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 5 22:29:49.836297 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 5 22:29:49.836297 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 5 22:29:49.836297 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Aug 5 22:29:49.836297 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Aug 5 22:29:49.836297 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Aug 5 22:29:49.836297 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Aug 5 22:29:50.112177 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Aug 5 22:29:50.138666 systemd-networkd[736]: eth0: Gained IPv6LL Aug 5 22:29:50.865833 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Aug 5 22:29:50.865833 ignition[945]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Aug 5 22:29:50.870244 ignition[945]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 5 22:29:50.872758 ignition[945]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 5 22:29:50.872758 ignition[945]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Aug 5 22:29:50.872758 ignition[945]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Aug 5 22:29:50.877979 ignition[945]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 5 22:29:50.880354 ignition[945]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 5 22:29:50.880354 ignition[945]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Aug 5 22:29:50.880354 ignition[945]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Aug 5 22:29:50.942805 ignition[945]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Aug 5 22:29:50.949473 ignition[945]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Aug 5 22:29:50.951811 
ignition[945]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Aug 5 22:29:50.951811 ignition[945]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Aug 5 22:29:50.951811 ignition[945]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Aug 5 22:29:50.951811 ignition[945]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 5 22:29:50.951811 ignition[945]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 5 22:29:50.951811 ignition[945]: INFO : files: files passed Aug 5 22:29:50.951811 ignition[945]: INFO : Ignition finished successfully Aug 5 22:29:50.967670 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 5 22:29:50.980756 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 5 22:29:50.984879 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 5 22:29:50.991229 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 5 22:29:50.991383 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 5 22:29:51.002730 initrd-setup-root-after-ignition[973]: grep: /sysroot/oem/oem-release: No such file or directory Aug 5 22:29:51.007546 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 5 22:29:51.007546 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 5 22:29:51.012293 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 5 22:29:51.010823 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 5 22:29:51.012537 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 5 22:29:51.036684 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 5 22:29:51.068984 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 5 22:29:51.069198 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 5 22:29:51.071845 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 5 22:29:51.073970 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 5 22:29:51.076175 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 5 22:29:51.086628 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 5 22:29:51.101919 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 5 22:29:51.110653 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 5 22:29:51.121624 systemd[1]: Stopped target network.target - Network. Aug 5 22:29:51.122929 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 5 22:29:51.125454 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 5 22:29:51.128347 systemd[1]: Stopped target timers.target - Timer Units. Aug 5 22:29:51.131005 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 5 22:29:51.131203 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
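The files stage above follows a fixed pattern for each remote asset: "GET <url>: attempt #1", then "GET result: OK", then "[finished] writing file" under /sysroot. Ignition itself is a Go binary driven by its JSON config; the sketch below is only a rough Python equivalent of that fetch-with-retry-and-write loop, with the URL and destination taken from the log and everything else an illustrative assumption:

    import os
    import time
    import urllib.request

    def fetch_into_sysroot(url: str, dest: str, sysroot: str = "/sysroot", attempts: int = 3) -> str:
        """Download url and place it under the sysroot prefix, retrying a few times,
        loosely mirroring the 'GET ... attempt #N' lines above (illustrative only)."""
        target = os.path.join(sysroot, dest.lstrip("/"))
        os.makedirs(os.path.dirname(target), exist_ok=True)
        for attempt in range(1, attempts + 1):
            try:
                print(f"GET {url}: attempt #{attempt}")
                with urllib.request.urlopen(url) as resp, open(target, "wb") as out:
                    out.write(resp.read())
                print(f"[finished] writing file {target}")
                return target
            except OSError as exc:          # URLError is a subclass of OSError
                print(f"attempt #{attempt} failed: {exc}")
                time.sleep(2 * attempt)
        raise RuntimeError(f"could not fetch {url} after {attempts} attempts")

    # e.g. the first asset written above:
    # fetch_into_sysroot("https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz",
    #                    "/opt/helm-v3.13.2-linux-amd64.tar.gz")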
Aug 5 22:29:51.135382 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 5 22:29:51.137636 systemd[1]: Stopped target basic.target - Basic System. Aug 5 22:29:51.139893 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 5 22:29:51.146523 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 5 22:29:51.149264 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 5 22:29:51.152066 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 5 22:29:51.154672 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 5 22:29:51.157430 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 5 22:29:51.160245 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 5 22:29:51.162759 systemd[1]: Stopped target swap.target - Swaps. Aug 5 22:29:51.164966 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 5 22:29:51.165130 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 5 22:29:51.167806 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 5 22:29:51.169775 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 5 22:29:51.172515 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 5 22:29:51.172664 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 5 22:29:51.175364 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 5 22:29:51.175555 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 5 22:29:51.178454 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 5 22:29:51.178585 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 5 22:29:51.180856 systemd[1]: Stopped target paths.target - Path Units. Aug 5 22:29:51.183574 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 5 22:29:51.187518 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 5 22:29:51.190539 systemd[1]: Stopped target slices.target - Slice Units. Aug 5 22:29:51.192672 systemd[1]: Stopped target sockets.target - Socket Units. Aug 5 22:29:51.195223 systemd[1]: iscsid.socket: Deactivated successfully. Aug 5 22:29:51.195367 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 5 22:29:51.198255 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 5 22:29:51.198369 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 5 22:29:51.200659 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 5 22:29:51.200813 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 5 22:29:51.203268 systemd[1]: ignition-files.service: Deactivated successfully. Aug 5 22:29:51.203426 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 5 22:29:51.218718 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 5 22:29:51.219915 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 5 22:29:51.220079 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 5 22:29:51.223826 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 5 22:29:51.225324 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
Aug 5 22:29:51.228014 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 5 22:29:51.230236 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 5 22:29:51.230471 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 5 22:29:51.235352 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 5 22:29:51.241234 ignition[999]: INFO : Ignition 2.19.0 Aug 5 22:29:51.241234 ignition[999]: INFO : Stage: umount Aug 5 22:29:51.241234 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 5 22:29:51.241234 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 5 22:29:51.241234 ignition[999]: INFO : umount: umount passed Aug 5 22:29:51.241234 ignition[999]: INFO : Ignition finished successfully Aug 5 22:29:51.235559 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 5 22:29:51.240551 systemd-networkd[736]: eth0: DHCPv6 lease lost Aug 5 22:29:51.243238 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 5 22:29:51.243357 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 5 22:29:51.245931 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 5 22:29:51.246042 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 5 22:29:51.251423 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 5 22:29:51.251572 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 5 22:29:51.253819 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 5 22:29:51.253901 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 5 22:29:51.255011 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 5 22:29:51.255074 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 5 22:29:51.256985 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 5 22:29:51.257047 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 5 22:29:51.259699 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 5 22:29:51.259824 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 5 22:29:51.262882 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 5 22:29:51.262975 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 5 22:29:51.276047 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 5 22:29:51.277924 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 5 22:29:51.278016 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 5 22:29:51.280383 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 5 22:29:51.283474 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 5 22:29:51.283619 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 5 22:29:51.294223 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 5 22:29:51.310298 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 5 22:29:51.310558 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 5 22:29:51.315243 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 5 22:29:51.315414 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 5 22:29:51.318489 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Aug 5 22:29:51.318587 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 5 22:29:51.319958 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 5 22:29:51.320010 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 5 22:29:51.322350 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 5 22:29:51.322425 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 5 22:29:51.341152 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 5 22:29:51.341253 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 5 22:29:51.343617 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 5 22:29:51.343674 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 22:29:51.359648 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 5 22:29:51.362235 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 5 22:29:51.362309 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 5 22:29:51.363685 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 5 22:29:51.363746 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 5 22:29:51.366242 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 5 22:29:51.366295 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 5 22:29:51.368981 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 5 22:29:51.369081 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Aug 5 22:29:51.371515 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 5 22:29:51.371585 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:29:51.374740 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 5 22:29:51.374893 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 5 22:29:51.719045 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 5 22:29:51.719280 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 5 22:29:51.722100 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 5 22:29:51.723630 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 5 22:29:51.723705 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 5 22:29:51.737862 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 5 22:29:51.747554 systemd[1]: Switching root. Aug 5 22:29:51.779217 systemd-journald[193]: Journal stopped Aug 5 22:29:53.367448 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). 
Aug 5 22:29:53.367515 kernel: SELinux: policy capability network_peer_controls=1 Aug 5 22:29:53.367531 kernel: SELinux: policy capability open_perms=1 Aug 5 22:29:53.367545 kernel: SELinux: policy capability extended_socket_class=1 Aug 5 22:29:53.367558 kernel: SELinux: policy capability always_check_network=0 Aug 5 22:29:53.367580 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 5 22:29:53.367595 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 5 22:29:53.367608 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 5 22:29:53.367622 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 5 22:29:53.367640 kernel: audit: type=1403 audit(1722896992.467:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 5 22:29:53.367655 systemd[1]: Successfully loaded SELinux policy in 46.622ms. Aug 5 22:29:53.367681 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.449ms. Aug 5 22:29:53.367697 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 5 22:29:53.367711 systemd[1]: Detected virtualization kvm. Aug 5 22:29:53.367729 systemd[1]: Detected architecture x86-64. Aug 5 22:29:53.367743 systemd[1]: Detected first boot. Aug 5 22:29:53.367758 systemd[1]: Initializing machine ID from VM UUID. Aug 5 22:29:53.367773 zram_generator::config[1044]: No configuration found. Aug 5 22:29:53.367793 systemd[1]: Populated /etc with preset unit settings. Aug 5 22:29:53.367812 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 5 22:29:53.367837 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 5 22:29:53.367852 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 5 22:29:53.367870 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 5 22:29:53.367885 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 5 22:29:53.367900 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 5 22:29:53.367915 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 5 22:29:53.367931 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 5 22:29:53.367945 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 5 22:29:53.367960 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 5 22:29:53.367975 systemd[1]: Created slice user.slice - User and Session Slice. Aug 5 22:29:53.367989 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 5 22:29:53.368007 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 5 22:29:53.368022 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 5 22:29:53.368037 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 5 22:29:53.368052 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 5 22:29:53.368067 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
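The long "systemd 255 running in system mode (…)" entry above packs the build-time feature flags into one string, '+' for compiled in and '-' for left out. A quick way to turn it into something queryable (the string below is copied verbatim from that entry):

    FEATURES = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
                "+OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
                "+LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 "
                "+BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT")

    enabled  = {tok[1:] for tok in FEATURES.split() if tok.startswith("+")}
    disabled = {tok[1:] for tok in FEATURES.split() if tok.startswith("-")}

    print("SELINUX" in enabled)   # True, consistent with the policy load above
    print(sorted(disabled))       # ['ACL', 'APPARMOR', 'BPF_FRAMEWORK', 'FIDO2', ...]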
Aug 5 22:29:53.368081 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 5 22:29:53.368096 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 5 22:29:53.368122 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 5 22:29:53.368137 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 5 22:29:53.368155 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 5 22:29:53.368178 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 5 22:29:53.368193 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 5 22:29:53.368208 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 5 22:29:53.368223 systemd[1]: Reached target slices.target - Slice Units. Aug 5 22:29:53.368238 systemd[1]: Reached target swap.target - Swaps. Aug 5 22:29:53.368252 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 5 22:29:53.368269 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 5 22:29:53.368284 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 5 22:29:53.368299 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 5 22:29:53.368313 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 5 22:29:53.368328 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 5 22:29:53.368343 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 5 22:29:53.368358 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 5 22:29:53.368372 systemd[1]: Mounting media.mount - External Media Directory... Aug 5 22:29:53.368387 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 5 22:29:53.368404 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 5 22:29:53.368418 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 5 22:29:53.368436 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 5 22:29:53.369594 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 5 22:29:53.369612 systemd[1]: Reached target machines.target - Containers. Aug 5 22:29:53.369630 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 5 22:29:53.369647 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 5 22:29:53.369665 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 5 22:29:53.369692 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 5 22:29:53.369714 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 5 22:29:53.369731 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 5 22:29:53.369749 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 5 22:29:53.369766 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 5 22:29:53.369783 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Aug 5 22:29:53.369801 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 5 22:29:53.369818 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 5 22:29:53.369835 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 5 22:29:53.369855 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 5 22:29:53.369871 systemd[1]: Stopped systemd-fsck-usr.service. Aug 5 22:29:53.369889 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 5 22:29:53.369905 kernel: fuse: init (API version 7.39) Aug 5 22:29:53.369921 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 5 22:29:53.369938 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 5 22:29:53.369955 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 5 22:29:53.369971 kernel: loop: module loaded Aug 5 22:29:53.369988 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 5 22:29:53.370008 systemd[1]: verity-setup.service: Deactivated successfully. Aug 5 22:29:53.370025 systemd[1]: Stopped verity-setup.service. Aug 5 22:29:53.370043 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 5 22:29:53.370096 systemd-journald[1117]: Collecting audit messages is disabled. Aug 5 22:29:53.370138 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 5 22:29:53.370156 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 5 22:29:53.370173 systemd-journald[1117]: Journal started Aug 5 22:29:53.370207 systemd-journald[1117]: Runtime Journal (/run/log/journal/e4d58b8c1d6349b0a42a4052f090367d) is 6.0M, max 48.4M, 42.3M free. Aug 5 22:29:53.110550 systemd[1]: Queued start job for default target multi-user.target. Aug 5 22:29:53.131157 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Aug 5 22:29:53.131813 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 5 22:29:53.375585 systemd[1]: Started systemd-journald.service - Journal Service. Aug 5 22:29:53.376698 systemd[1]: Mounted media.mount - External Media Directory. Aug 5 22:29:53.378489 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 5 22:29:53.380230 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 5 22:29:53.382032 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 5 22:29:53.384130 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 5 22:29:53.386478 kernel: ACPI: bus type drm_connector registered Aug 5 22:29:53.387277 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 5 22:29:53.389992 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 5 22:29:53.390261 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 5 22:29:53.392397 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 5 22:29:53.392662 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 5 22:29:53.394859 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 5 22:29:53.395121 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Aug 5 22:29:53.397265 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 5 22:29:53.397531 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 5 22:29:53.399994 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 5 22:29:53.400256 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 5 22:29:53.402409 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 5 22:29:53.402679 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 5 22:29:53.404701 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 5 22:29:53.406921 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 5 22:29:53.409181 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 5 22:29:53.466710 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 5 22:29:53.480582 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 5 22:29:53.483272 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 5 22:29:53.484546 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 5 22:29:53.484587 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 5 22:29:53.486963 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Aug 5 22:29:53.489764 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 5 22:29:53.492350 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 5 22:29:53.493674 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 5 22:29:53.495656 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 5 22:29:53.498729 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 5 22:29:53.500256 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 5 22:29:53.503222 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 5 22:29:53.504709 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 5 22:29:53.508662 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 5 22:29:53.513758 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 5 22:29:53.517572 systemd-journald[1117]: Time spent on flushing to /var/log/journal/e4d58b8c1d6349b0a42a4052f090367d is 23.925ms for 948 entries. Aug 5 22:29:53.517572 systemd-journald[1117]: System Journal (/var/log/journal/e4d58b8c1d6349b0a42a4052f090367d) is 8.0M, max 195.6M, 187.6M free. Aug 5 22:29:53.582088 systemd-journald[1117]: Received client request to flush runtime journal. Aug 5 22:29:53.582146 kernel: loop0: detected capacity change from 0 to 210664 Aug 5 22:29:53.582160 kernel: block loop0: the capability attribute has been deprecated. Aug 5 22:29:53.523837 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 5 22:29:53.528950 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Aug 5 22:29:53.539604 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 5 22:29:53.541025 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 5 22:29:53.543004 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 5 22:29:53.544609 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 5 22:29:53.550139 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 5 22:29:53.560983 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Aug 5 22:29:53.562809 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Aug 5 22:29:53.588420 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 5 22:29:53.597779 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 5 22:29:53.601675 udevadm[1166]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Aug 5 22:29:53.612387 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 5 22:29:53.613189 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Aug 5 22:29:53.616657 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 5 22:29:53.625614 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 5 22:29:53.635686 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 5 22:29:53.646472 kernel: loop1: detected capacity change from 0 to 139760 Aug 5 22:29:53.669617 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. Aug 5 22:29:53.669643 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. Aug 5 22:29:53.678274 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 5 22:29:53.734532 kernel: loop2: detected capacity change from 0 to 80568 Aug 5 22:29:53.800502 kernel: loop3: detected capacity change from 0 to 210664 Aug 5 22:29:53.809474 kernel: loop4: detected capacity change from 0 to 139760 Aug 5 22:29:53.819484 kernel: loop5: detected capacity change from 0 to 80568 Aug 5 22:29:53.845455 (sd-merge)[1183]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Aug 5 22:29:53.846237 (sd-merge)[1183]: Merged extensions into '/usr'. Aug 5 22:29:53.852046 systemd[1]: Reloading requested from client PID 1157 ('systemd-sysext') (unit systemd-sysext.service)... Aug 5 22:29:53.852073 systemd[1]: Reloading... Aug 5 22:29:53.931461 zram_generator::config[1208]: No configuration found. Aug 5 22:29:54.144197 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 22:29:54.213070 ldconfig[1152]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 5 22:29:54.219152 systemd[1]: Reloading finished in 366 ms. Aug 5 22:29:54.266170 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 5 22:29:54.268645 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 5 22:29:54.295217 systemd[1]: Starting ensure-sysext.service... Aug 5 22:29:54.299538 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... 
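The sd-merge lines above report systemd-sysext merging the 'containerd-flatcar', 'docker-flatcar', and 'kubernetes' extensions into /usr; the kubernetes image is the one Ignition linked at /etc/extensions/kubernetes.raw earlier. Below is a small sketch for listing which extension images are present before such a merge; the directory list is an assumption rather than the authoritative systemd-sysext search path:

    from pathlib import Path

    # Directories commonly holding *.raw sysext images or extension trees.
    # Treat this list as an assumption, not an exhaustive search path.
    CANDIDATE_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def list_sysext_images():
        found = []
        for d in CANDIDATE_DIRS:
            p = Path(d)
            if not p.is_dir():
                continue
            for entry in sorted(p.iterdir()):
                # /etc/extensions/kubernetes.raw was created above as a symlink into
                # /opt/extensions; resolve() shows where such links point.
                target = str(entry.resolve()) if entry.is_symlink() else ""
                found.append((str(entry), target))
        return found

    for path, target in list_sysext_images():
        print(path, "->", target or "(regular file or directory)")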
Aug 5 22:29:54.313333 systemd[1]: Reloading requested from client PID 1244 ('systemctl') (unit ensure-sysext.service)... Aug 5 22:29:54.314620 systemd[1]: Reloading... Aug 5 22:29:54.344068 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 5 22:29:54.344652 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 5 22:29:54.346086 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 5 22:29:54.347069 systemd-tmpfiles[1245]: ACLs are not supported, ignoring. Aug 5 22:29:54.347255 systemd-tmpfiles[1245]: ACLs are not supported, ignoring. Aug 5 22:29:54.355919 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot. Aug 5 22:29:54.356096 systemd-tmpfiles[1245]: Skipping /boot Aug 5 22:29:54.369814 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot. Aug 5 22:29:54.370006 systemd-tmpfiles[1245]: Skipping /boot Aug 5 22:29:54.446498 zram_generator::config[1270]: No configuration found. Aug 5 22:29:54.639424 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 22:29:54.715863 systemd[1]: Reloading finished in 400 ms. Aug 5 22:29:54.743877 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 5 22:29:54.760397 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Aug 5 22:29:54.784932 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 5 22:29:54.790347 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 5 22:29:54.795557 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 5 22:29:54.806171 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 5 22:29:54.814328 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 5 22:29:54.822769 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 5 22:29:54.840758 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 5 22:29:54.840993 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 5 22:29:54.846805 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 5 22:29:54.852292 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 5 22:29:54.857127 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 5 22:29:54.865914 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 5 22:29:54.888168 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 5 22:29:54.892423 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 5 22:29:54.894153 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 5 22:29:54.898835 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Aug 5 22:29:54.899352 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 5 22:29:54.901924 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 5 22:29:54.902168 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 5 22:29:54.907254 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 5 22:29:54.907857 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 5 22:29:54.910725 augenrules[1334]: No rules Aug 5 22:29:54.912122 systemd-udevd[1315]: Using default interface naming scheme 'v255'. Aug 5 22:29:54.916044 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 5 22:29:54.936834 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 5 22:29:54.937276 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 5 22:29:54.960507 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 5 22:29:54.985045 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 5 22:29:54.992473 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 5 22:29:54.994050 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 5 22:29:54.999573 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 5 22:29:55.000953 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 5 22:29:55.002315 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 5 22:29:55.005124 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 5 22:29:55.008262 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 5 22:29:55.023426 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 5 22:29:55.026726 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 5 22:29:55.026974 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 5 22:29:55.029193 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 5 22:29:55.029411 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 5 22:29:55.032041 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 5 22:29:55.032287 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 5 22:29:55.046887 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 5 22:29:55.063799 systemd[1]: Finished ensure-sysext.service. Aug 5 22:29:55.070225 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 5 22:29:55.070427 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 5 22:29:55.080482 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 5 22:29:55.087853 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 5 22:29:55.091665 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 5 22:29:55.095825 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Aug 5 22:29:55.097338 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 5 22:29:55.100999 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 5 22:29:55.119772 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 5 22:29:55.121344 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 5 22:29:55.121388 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 5 22:29:55.122281 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 5 22:29:55.122541 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 5 22:29:55.127744 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 5 22:29:55.128013 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 5 22:29:55.138473 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1354) Aug 5 22:29:55.141925 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 5 22:29:55.142224 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 5 22:29:55.144429 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 5 22:29:55.144754 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 5 22:29:55.162936 systemd-resolved[1314]: Positive Trust Anchors: Aug 5 22:29:55.179411 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1370) Aug 5 22:29:55.162966 systemd-resolved[1314]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 5 22:29:55.163013 systemd-resolved[1314]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Aug 5 22:29:55.188851 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 5 22:29:55.188945 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 5 22:29:55.189044 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 5 22:29:55.206209 systemd-resolved[1314]: Defaulting to hostname 'linux'. Aug 5 22:29:55.210152 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 5 22:29:55.213708 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 5 22:29:55.263269 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 5 22:29:55.276894 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Aug 5 22:29:55.294590 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Aug 5 22:29:55.315512 systemd-networkd[1384]: lo: Link UP Aug 5 22:29:55.315539 systemd-networkd[1384]: lo: Gained carrier Aug 5 22:29:55.317483 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 5 22:29:55.318765 systemd-networkd[1384]: Enumeration completed Aug 5 22:29:55.320634 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 22:29:55.320652 systemd-networkd[1384]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 5 22:29:55.321907 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 5 22:29:55.322290 systemd-networkd[1384]: eth0: Link UP Aug 5 22:29:55.322307 systemd-networkd[1384]: eth0: Gained carrier Aug 5 22:29:55.322328 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 22:29:55.330365 kernel: ACPI: button: Power Button [PWRF] Aug 5 22:29:55.325193 systemd[1]: Reached target network.target - Network. Aug 5 22:29:55.337816 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 5 22:29:55.340520 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 5 22:29:55.342169 systemd[1]: Reached target time-set.target - System Time Set. Aug 5 22:29:55.356476 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Aug 5 22:29:55.368514 systemd-networkd[1384]: eth0: DHCPv4 address 10.0.0.102/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 5 22:29:55.372980 systemd-timesyncd[1385]: Network configuration changed, trying to establish connection. Aug 5 22:29:55.374690 systemd-timesyncd[1385]: Contacted time server 10.0.0.1:123 (10.0.0.1). Aug 5 22:29:55.374768 systemd-timesyncd[1385]: Initial clock synchronization to Mon 2024-08-05 22:29:55.162954 UTC. Aug 5 22:29:55.412481 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Aug 5 22:29:55.432582 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 22:29:55.452102 kernel: mousedev: PS/2 mouse device common for all mice Aug 5 22:29:55.607886 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:29:55.752893 kernel: kvm_amd: TSC scaling supported Aug 5 22:29:55.753017 kernel: kvm_amd: Nested Virtualization enabled Aug 5 22:29:55.753115 kernel: kvm_amd: Nested Paging enabled Aug 5 22:29:55.753487 kernel: kvm_amd: LBR virtualization supported Aug 5 22:29:55.758949 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Aug 5 22:29:55.759006 kernel: kvm_amd: Virtual GIF supported Aug 5 22:29:55.914770 kernel: EDAC MC: Ver: 3.0.0 Aug 5 22:29:55.963341 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Aug 5 22:29:55.988560 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Aug 5 22:29:56.008008 lvm[1415]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 5 22:29:56.046642 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Aug 5 22:29:56.048816 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
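The DHCPv4 lease reported above (10.0.0.102/16 with gateway 10.0.0.1, which also answers NTP on port 123) can be unpacked with the standard ipaddress module to see what that /16 implies:

    import ipaddress

    iface = ipaddress.ip_interface("10.0.0.102/16")   # lease reported by systemd-networkd
    gateway = ipaddress.ip_address("10.0.0.1")        # also the NTP server timesyncd contacted

    print(iface.network)                    # 10.0.0.0/16
    print(iface.network.netmask)            # 255.255.0.0
    print(iface.network.broadcast_address)  # 10.0.255.255
    print(gateway in iface.network)         # True: gateway and time server are on-link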
Aug 5 22:29:56.050542 systemd[1]: Reached target sysinit.target - System Initialization. Aug 5 22:29:56.052576 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 5 22:29:56.059168 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 5 22:29:56.060979 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 5 22:29:56.062960 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 5 22:29:56.067932 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 5 22:29:56.070702 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 5 22:29:56.071786 systemd[1]: Reached target paths.target - Path Units. Aug 5 22:29:56.073790 systemd[1]: Reached target timers.target - Timer Units. Aug 5 22:29:56.082014 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 5 22:29:56.086130 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 5 22:29:56.097189 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 5 22:29:56.102301 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Aug 5 22:29:56.104788 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 5 22:29:56.108330 systemd[1]: Reached target sockets.target - Socket Units. Aug 5 22:29:56.108502 systemd[1]: Reached target basic.target - Basic System. Aug 5 22:29:56.119231 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 5 22:29:56.119281 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 5 22:29:56.126325 systemd[1]: Starting containerd.service - containerd container runtime... Aug 5 22:29:56.133595 lvm[1419]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 5 22:29:56.133798 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 5 22:29:56.139657 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 5 22:29:56.143717 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 5 22:29:56.146398 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 5 22:29:56.151179 jq[1422]: false Aug 5 22:29:56.151821 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 5 22:29:56.160311 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 5 22:29:56.164854 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 5 22:29:56.170175 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 5 22:29:56.188689 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 5 22:29:56.194716 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 5 22:29:56.195508 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Aug 5 22:29:56.204652 extend-filesystems[1423]: Found loop3 Aug 5 22:29:56.204652 extend-filesystems[1423]: Found loop4 Aug 5 22:29:56.204652 extend-filesystems[1423]: Found loop5 Aug 5 22:29:56.204652 extend-filesystems[1423]: Found sr0 Aug 5 22:29:56.204652 extend-filesystems[1423]: Found vda Aug 5 22:29:56.204652 extend-filesystems[1423]: Found vda1 Aug 5 22:29:56.204652 extend-filesystems[1423]: Found vda2 Aug 5 22:29:56.204652 extend-filesystems[1423]: Found vda3 Aug 5 22:29:56.204652 extend-filesystems[1423]: Found usr Aug 5 22:29:56.204652 extend-filesystems[1423]: Found vda4 Aug 5 22:29:56.204652 extend-filesystems[1423]: Found vda6 Aug 5 22:29:56.204652 extend-filesystems[1423]: Found vda7 Aug 5 22:29:56.204652 extend-filesystems[1423]: Found vda9 Aug 5 22:29:56.204652 extend-filesystems[1423]: Checking size of /dev/vda9 Aug 5 22:29:56.257728 extend-filesystems[1423]: Resized partition /dev/vda9 Aug 5 22:29:56.205824 systemd[1]: Starting update-engine.service - Update Engine... Aug 5 22:29:56.235619 dbus-daemon[1421]: [system] SELinux support is enabled Aug 5 22:29:56.265691 extend-filesystems[1445]: resize2fs 1.47.0 (5-Feb-2023) Aug 5 22:29:56.280516 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Aug 5 22:29:56.280557 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1375) Aug 5 22:29:56.280577 update_engine[1435]: I0805 22:29:56.258181 1435 main.cc:92] Flatcar Update Engine starting Aug 5 22:29:56.280577 update_engine[1435]: I0805 22:29:56.265694 1435 update_check_scheduler.cc:74] Next update check in 6m18s Aug 5 22:29:56.211209 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 5 22:29:56.281136 jq[1439]: true Aug 5 22:29:56.217070 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Aug 5 22:29:56.235023 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 5 22:29:56.235616 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 5 22:29:56.235849 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 5 22:29:56.247372 systemd[1]: motdgen.service: Deactivated successfully. Aug 5 22:29:56.247674 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 5 22:29:56.278230 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 5 22:29:56.278528 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 5 22:29:56.322847 jq[1447]: true Aug 5 22:29:56.341184 systemd-logind[1431]: Watching system buttons on /dev/input/event1 (Power Button) Aug 5 22:29:56.341210 systemd-logind[1431]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 5 22:29:56.341502 (ntainerd)[1448]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 5 22:29:56.342783 systemd-logind[1431]: New seat seat0. Aug 5 22:29:56.352075 tar[1446]: linux-amd64/helm Aug 5 22:29:56.367803 systemd[1]: Started systemd-logind.service - User Login Management. Aug 5 22:29:56.383008 systemd[1]: Started update-engine.service - Update Engine. Aug 5 22:29:56.387118 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
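The extend-filesystems/resize2fs entries above grow the root filesystem on /dev/vda9 from 553472 to 1864699 blocks; with the 4 KiB block size resize2fs reports for this filesystem, the back-of-the-envelope conversion is:

    BLOCK_SIZE = 4096                        # "(4k)" blocks per the resize2fs output
    old_blocks, new_blocks = 553_472, 1_864_699

    def to_gib(blocks: int) -> float:
        return blocks * BLOCK_SIZE / 2**30

    print(f"before: {to_gib(old_blocks):.2f} GiB")              # ~2.11 GiB
    print(f"after:  {to_gib(new_blocks):.2f} GiB")              # ~7.11 GiB
    print(f"grown:  {to_gib(new_blocks - old_blocks):.2f} GiB") # ~5.00 GiB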
Aug 5 22:29:56.387544 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 5 22:29:56.390821 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 5 22:29:56.390990 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 5 22:29:56.407103 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 5 22:29:56.481258 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Aug 5 22:29:56.507854 locksmithd[1474]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 5 22:29:56.519575 sshd_keygen[1440]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 5 22:29:56.520807 extend-filesystems[1445]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 5 22:29:56.520807 extend-filesystems[1445]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 5 22:29:56.520807 extend-filesystems[1445]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Aug 5 22:29:56.533632 extend-filesystems[1423]: Resized filesystem in /dev/vda9 Aug 5 22:29:56.522927 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 5 22:29:56.523209 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 5 22:29:56.541822 bash[1475]: Updated "/home/core/.ssh/authorized_keys" Aug 5 22:29:56.543073 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 5 22:29:56.549600 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Aug 5 22:29:56.575062 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 5 22:29:56.594690 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 5 22:29:56.604507 systemd[1]: issuegen.service: Deactivated successfully. Aug 5 22:29:56.604924 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 5 22:29:56.610970 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 5 22:29:56.630295 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 5 22:29:56.655555 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 5 22:29:56.658723 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 5 22:29:56.660359 systemd[1]: Reached target getty.target - Login Prompts. Aug 5 22:29:56.748854 containerd[1448]: time="2024-08-05T22:29:56.747179226Z" level=info msg="starting containerd" revision=cd7148ac666309abf41fd4a49a8a5895b905e7f3 version=v1.7.18 Aug 5 22:29:56.797518 containerd[1448]: time="2024-08-05T22:29:56.797318720Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Aug 5 22:29:56.797518 containerd[1448]: time="2024-08-05T22:29:56.797406231Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 5 22:29:56.799638 containerd[1448]: time="2024-08-05T22:29:56.799562635Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.43-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 5 22:29:56.799638 containerd[1448]: time="2024-08-05T22:29:56.799615345Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Aug 5 22:29:56.799940 containerd[1448]: time="2024-08-05T22:29:56.799902974Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 5 22:29:56.799940 containerd[1448]: time="2024-08-05T22:29:56.799925924Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 5 22:29:56.800060 containerd[1448]: time="2024-08-05T22:29:56.800031139Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 5 22:29:56.800134 containerd[1448]: time="2024-08-05T22:29:56.800104409Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 5 22:29:56.800134 containerd[1448]: time="2024-08-05T22:29:56.800123975Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 5 22:29:56.800247 containerd[1448]: time="2024-08-05T22:29:56.800218206Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 5 22:29:56.803077 containerd[1448]: time="2024-08-05T22:29:56.800508995Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 5 22:29:56.803077 containerd[1448]: time="2024-08-05T22:29:56.800534629Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Aug 5 22:29:56.803077 containerd[1448]: time="2024-08-05T22:29:56.800546323Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 5 22:29:56.803077 containerd[1448]: time="2024-08-05T22:29:56.800675892Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 5 22:29:56.803077 containerd[1448]: time="2024-08-05T22:29:56.800689616Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 5 22:29:56.803077 containerd[1448]: time="2024-08-05T22:29:56.800757863Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Aug 5 22:29:56.803077 containerd[1448]: time="2024-08-05T22:29:56.800769178Z" level=info msg="metadata content store policy set" policy=shared Aug 5 22:29:56.959301 containerd[1448]: time="2024-08-05T22:29:56.958897602Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 5 22:29:56.959301 containerd[1448]: time="2024-08-05T22:29:56.959008688Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 5 22:29:56.959301 containerd[1448]: time="2024-08-05T22:29:56.959029180Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 5 22:29:56.959301 containerd[1448]: time="2024-08-05T22:29:56.959073004Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Aug 5 22:29:56.959301 containerd[1448]: time="2024-08-05T22:29:56.959091069Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 5 22:29:56.959301 containerd[1448]: time="2024-08-05T22:29:56.959110264Z" level=info msg="NRI interface is disabled by configuration." Aug 5 22:29:56.959301 containerd[1448]: time="2024-08-05T22:29:56.959125265Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 5 22:29:56.959664 containerd[1448]: time="2024-08-05T22:29:56.959370026Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Aug 5 22:29:56.959664 containerd[1448]: time="2024-08-05T22:29:56.959393357Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 5 22:29:56.959664 containerd[1448]: time="2024-08-05T22:29:56.959412943Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 5 22:29:56.959664 containerd[1448]: time="2024-08-05T22:29:56.959502824Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Aug 5 22:29:56.959664 containerd[1448]: time="2024-08-05T22:29:56.959524019Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 5 22:29:56.959664 containerd[1448]: time="2024-08-05T22:29:56.959613129Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 5 22:29:56.959664 containerd[1448]: time="2024-08-05T22:29:56.959631788Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 5 22:29:56.959664 containerd[1448]: time="2024-08-05T22:29:56.959647219Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 5 22:29:56.959664 containerd[1448]: time="2024-08-05T22:29:56.959664269Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 5 22:29:56.959897 containerd[1448]: time="2024-08-05T22:29:56.959682284Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 5 22:29:56.959897 containerd[1448]: time="2024-08-05T22:29:56.959700211Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 5 22:29:56.959897 containerd[1448]: time="2024-08-05T22:29:56.959717280Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 5 22:29:56.959897 containerd[1448]: time="2024-08-05T22:29:56.959860826Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 5 22:29:56.961969 containerd[1448]: time="2024-08-05T22:29:56.960329271Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 5 22:29:56.961969 containerd[1448]: time="2024-08-05T22:29:56.960408628Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 5 22:29:56.961969 containerd[1448]: time="2024-08-05T22:29:56.960459592Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Aug 5 22:29:56.961969 containerd[1448]: time="2024-08-05T22:29:56.960501318Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 5 22:29:56.961969 containerd[1448]: time="2024-08-05T22:29:56.960586742Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 5 22:29:56.961969 containerd[1448]: time="2024-08-05T22:29:56.960607644Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 5 22:29:56.961969 containerd[1448]: time="2024-08-05T22:29:56.960640212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 5 22:29:56.961969 containerd[1448]: time="2024-08-05T22:29:56.960658168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 5 22:29:56.961969 containerd[1448]: time="2024-08-05T22:29:56.960677111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 5 22:29:56.961969 containerd[1448]: time="2024-08-05T22:29:56.960697886Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 5 22:29:56.961969 containerd[1448]: time="2024-08-05T22:29:56.960717082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 5 22:29:56.961969 containerd[1448]: time="2024-08-05T22:29:56.960735926Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 5 22:29:56.961969 containerd[1448]: time="2024-08-05T22:29:56.960757267Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 5 22:29:56.961969 containerd[1448]: time="2024-08-05T22:29:56.961049286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Aug 5 22:29:56.962505 containerd[1448]: time="2024-08-05T22:29:56.961073943Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 5 22:29:56.962505 containerd[1448]: time="2024-08-05T22:29:56.961091109Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 5 22:29:56.962505 containerd[1448]: time="2024-08-05T22:29:56.961108852Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 5 22:29:56.962505 containerd[1448]: time="2024-08-05T22:29:56.961128038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 5 22:29:56.962505 containerd[1448]: time="2024-08-05T22:29:56.961146637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 5 22:29:56.962505 containerd[1448]: time="2024-08-05T22:29:56.961164888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 5 22:29:56.962505 containerd[1448]: time="2024-08-05T22:29:56.961180484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Aug 5 22:29:56.962757 containerd[1448]: time="2024-08-05T22:29:56.961553380Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 5 22:29:56.962757 containerd[1448]: time="2024-08-05T22:29:56.961630308Z" level=info msg="Connect containerd service" Aug 5 22:29:56.962757 containerd[1448]: time="2024-08-05T22:29:56.961670991Z" level=info msg="using legacy CRI server" Aug 5 22:29:56.962757 containerd[1448]: time="2024-08-05T22:29:56.961681487Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 5 22:29:56.962757 containerd[1448]: time="2024-08-05T22:29:56.961795878Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 5 22:29:56.962757 containerd[1448]: time="2024-08-05T22:29:56.962644594Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 5 22:29:56.962757 
containerd[1448]: time="2024-08-05T22:29:56.962690241Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 5 22:29:56.962757 containerd[1448]: time="2024-08-05T22:29:56.962713612Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 5 22:29:56.962757 containerd[1448]: time="2024-08-05T22:29:56.962728681Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 5 22:29:56.962757 containerd[1448]: time="2024-08-05T22:29:56.962745702Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 5 22:29:56.963194 containerd[1448]: time="2024-08-05T22:29:56.962992676Z" level=info msg="Start subscribing containerd event" Aug 5 22:29:56.963194 containerd[1448]: time="2024-08-05T22:29:56.963050653Z" level=info msg="Start recovering state" Aug 5 22:29:56.963194 containerd[1448]: time="2024-08-05T22:29:56.963132956Z" level=info msg="Start event monitor" Aug 5 22:29:56.963194 containerd[1448]: time="2024-08-05T22:29:56.963155438Z" level=info msg="Start snapshots syncer" Aug 5 22:29:56.963194 containerd[1448]: time="2024-08-05T22:29:56.963167231Z" level=info msg="Start cni network conf syncer for default" Aug 5 22:29:56.963194 containerd[1448]: time="2024-08-05T22:29:56.963180524Z" level=info msg="Start streaming server" Aug 5 22:29:56.965767 containerd[1448]: time="2024-08-05T22:29:56.965711113Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 5 22:29:56.967082 containerd[1448]: time="2024-08-05T22:29:56.967046200Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 5 22:29:56.970839 containerd[1448]: time="2024-08-05T22:29:56.970797532Z" level=info msg="containerd successfully booted in 0.228714s" Aug 5 22:29:56.970945 systemd[1]: Started containerd.service - containerd container runtime. Aug 5 22:29:56.987642 systemd-networkd[1384]: eth0: Gained IPv6LL Aug 5 22:29:56.994597 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 5 22:29:57.009604 systemd[1]: Reached target network-online.target - Network is Online. Aug 5 22:29:57.046351 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Aug 5 22:29:57.051515 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:29:57.077945 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 5 22:29:57.123425 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 5 22:29:57.152536 systemd[1]: coreos-metadata.service: Deactivated successfully. Aug 5 22:29:57.153158 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Aug 5 22:29:57.157323 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 5 22:29:57.349978 tar[1446]: linux-amd64/LICENSE Aug 5 22:29:57.350089 tar[1446]: linux-amd64/README.md Aug 5 22:29:57.367143 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 5 22:29:58.532764 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:29:58.535059 systemd[1]: Reached target multi-user.target - Multi-User System. 
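containerd reports above that it booted and is serving on /run/containerd/containerd.sock. A minimal sketch, assuming the official Go client (github.com/containerd/containerd), of connecting to that socket and confirming the daemon version, roughly the health check implied by the "containerd successfully booted" entry:

```go
// Connect to the containerd socket the log says is being served and print
// the daemon version. Assumes the official Go client is available.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer client.Close()

	v, err := client.Version(context.Background())
	if err != nil {
		log.Fatalf("version: %v", err)
	}
	// The boot log above reports containerd v1.7.18.
	fmt.Printf("containerd %s (revision %s)\n", v.Version, v.Revision)
}
```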
Aug 5 22:29:58.539276 systemd[1]: Startup finished in 931ms (kernel) + 7.746s (initrd) + 6.116s (userspace) = 14.794s. Aug 5 22:29:58.561187 (kubelet)[1534]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 22:29:59.426721 kubelet[1534]: E0805 22:29:59.425086 1534 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 22:29:59.434854 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 22:29:59.435145 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 5 22:29:59.435726 systemd[1]: kubelet.service: Consumed 1.403s CPU time. Aug 5 22:30:01.149628 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 5 22:30:01.179889 systemd[1]: Started sshd@0-10.0.0.102:22-10.0.0.1:50194.service - OpenSSH per-connection server daemon (10.0.0.1:50194). Aug 5 22:30:01.364422 sshd[1548]: Accepted publickey for core from 10.0.0.1 port 50194 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:30:01.370533 sshd[1548]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:30:01.406832 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 5 22:30:01.430767 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 5 22:30:01.436825 systemd-logind[1431]: New session 1 of user core. Aug 5 22:30:01.455999 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 5 22:30:01.470159 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 5 22:30:01.477719 (systemd)[1552]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:30:01.669700 systemd[1552]: Queued start job for default target default.target. Aug 5 22:30:01.685567 systemd[1552]: Created slice app.slice - User Application Slice. Aug 5 22:30:01.685609 systemd[1552]: Reached target paths.target - Paths. Aug 5 22:30:01.685629 systemd[1552]: Reached target timers.target - Timers. Aug 5 22:30:01.691027 systemd[1552]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 5 22:30:01.716411 systemd[1552]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 5 22:30:01.716641 systemd[1552]: Reached target sockets.target - Sockets. Aug 5 22:30:01.716667 systemd[1552]: Reached target basic.target - Basic System. Aug 5 22:30:01.716731 systemd[1552]: Reached target default.target - Main User Target. Aug 5 22:30:01.716777 systemd[1552]: Startup finished in 222ms. Aug 5 22:30:01.717694 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 5 22:30:01.728987 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 5 22:30:01.814404 systemd[1]: Started sshd@1-10.0.0.102:22-10.0.0.1:50202.service - OpenSSH per-connection server daemon (10.0.0.1:50202). Aug 5 22:30:01.873922 sshd[1563]: Accepted publickey for core from 10.0.0.1 port 50202 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:30:01.875087 sshd[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:30:01.895543 systemd-logind[1431]: New session 2 of user core. 
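The kubelet failure above ("open /var/lib/kubelet/config.yaml: no such file or directory") is the normal state of a node that has booted but has not yet been joined to a cluster; that file is typically written later, for example by kubeadm. A trivial sketch of the same pre-flight condition the error describes:

```go
// Sketch of the condition behind the kubelet error above: its config file
// simply does not exist yet on a freshly booted, un-joined node.
package main

import (
	"fmt"
	"os"
)

func main() {
	const path = "/var/lib/kubelet/config.yaml" // path from the log

	if _, err := os.Stat(path); os.IsNotExist(err) {
		fmt.Printf("%s is missing; kubelet will keep exiting until it is written (e.g. by kubeadm)\n", path)
		return
	} else if err != nil {
		fmt.Printf("stat %s: %v\n", path, err)
		return
	}
	fmt.Printf("%s exists; kubelet should be able to load its config\n", path)
}
```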
Aug 5 22:30:01.902924 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 5 22:30:01.968492 sshd[1563]: pam_unix(sshd:session): session closed for user core Aug 5 22:30:01.982807 systemd[1]: sshd@1-10.0.0.102:22-10.0.0.1:50202.service: Deactivated successfully. Aug 5 22:30:01.986612 systemd[1]: session-2.scope: Deactivated successfully. Aug 5 22:30:01.989056 systemd-logind[1431]: Session 2 logged out. Waiting for processes to exit. Aug 5 22:30:02.001127 systemd[1]: Started sshd@2-10.0.0.102:22-10.0.0.1:50212.service - OpenSSH per-connection server daemon (10.0.0.1:50212). Aug 5 22:30:02.003259 systemd-logind[1431]: Removed session 2. Aug 5 22:30:02.040558 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 50212 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:30:02.042421 sshd[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:30:02.048385 systemd-logind[1431]: New session 3 of user core. Aug 5 22:30:02.066754 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 5 22:30:02.120806 sshd[1570]: pam_unix(sshd:session): session closed for user core Aug 5 22:30:02.135628 systemd[1]: sshd@2-10.0.0.102:22-10.0.0.1:50212.service: Deactivated successfully. Aug 5 22:30:02.137864 systemd[1]: session-3.scope: Deactivated successfully. Aug 5 22:30:02.139599 systemd-logind[1431]: Session 3 logged out. Waiting for processes to exit. Aug 5 22:30:02.154911 systemd[1]: Started sshd@3-10.0.0.102:22-10.0.0.1:50224.service - OpenSSH per-connection server daemon (10.0.0.1:50224). Aug 5 22:30:02.156257 systemd-logind[1431]: Removed session 3. Aug 5 22:30:02.192273 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 50224 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:30:02.194224 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:30:02.198635 systemd-logind[1431]: New session 4 of user core. Aug 5 22:30:02.213698 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 5 22:30:02.271724 sshd[1577]: pam_unix(sshd:session): session closed for user core Aug 5 22:30:02.284348 systemd[1]: sshd@3-10.0.0.102:22-10.0.0.1:50224.service: Deactivated successfully. Aug 5 22:30:02.286137 systemd[1]: session-4.scope: Deactivated successfully. Aug 5 22:30:02.287799 systemd-logind[1431]: Session 4 logged out. Waiting for processes to exit. Aug 5 22:30:02.289232 systemd[1]: Started sshd@4-10.0.0.102:22-10.0.0.1:50234.service - OpenSSH per-connection server daemon (10.0.0.1:50234). Aug 5 22:30:02.289958 systemd-logind[1431]: Removed session 4. Aug 5 22:30:02.345762 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 50234 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:30:02.347409 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:30:02.352379 systemd-logind[1431]: New session 5 of user core. Aug 5 22:30:02.362566 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 5 22:30:02.460781 sudo[1587]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 5 22:30:02.461179 sudo[1587]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 22:30:02.483951 sudo[1587]: pam_unix(sudo:session): session closed for user root Aug 5 22:30:02.487297 sshd[1584]: pam_unix(sshd:session): session closed for user core Aug 5 22:30:02.500042 systemd[1]: sshd@4-10.0.0.102:22-10.0.0.1:50234.service: Deactivated successfully. 
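Every "Accepted publickey" entry in this stretch identifies the same client key by its SHA256 fingerprint (SHA256:ptvp…). That fingerprint is just a base64-encoded SHA-256 of the key blob; the sketch below, assuming golang.org/x/crypto/ssh and the authorized_keys path mentioned earlier in the log, recomputes the value sshd prints.

```go
// Recompute the "SHA256:..." fingerprint sshd logs for an accepted key.
package main

import (
	"bytes"
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	data, err := os.ReadFile("/home/core/.ssh/authorized_keys") // path from the log
	if err != nil {
		log.Fatal(err)
	}
	for len(bytes.TrimSpace(data)) > 0 {
		pub, comment, _, rest, err := ssh.ParseAuthorizedKey(data)
		if err != nil {
			log.Fatal(err)
		}
		// FingerprintSHA256 yields the same "SHA256:..." string that the
		// "Accepted publickey" entries above log for the client key.
		fmt.Printf("%s %s %s\n", pub.Type(), ssh.FingerprintSHA256(pub), comment)
		data = rest
	}
}
```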
Aug 5 22:30:02.501979 systemd[1]: session-5.scope: Deactivated successfully. Aug 5 22:30:02.503500 systemd-logind[1431]: Session 5 logged out. Waiting for processes to exit. Aug 5 22:30:02.505038 systemd[1]: Started sshd@5-10.0.0.102:22-10.0.0.1:50242.service - OpenSSH per-connection server daemon (10.0.0.1:50242). Aug 5 22:30:02.505804 systemd-logind[1431]: Removed session 5. Aug 5 22:30:02.548741 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 50242 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:30:02.550479 sshd[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:30:02.554794 systemd-logind[1431]: New session 6 of user core. Aug 5 22:30:02.567666 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 5 22:30:02.624320 sudo[1596]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 5 22:30:02.624678 sudo[1596]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 22:30:02.629463 sudo[1596]: pam_unix(sudo:session): session closed for user root Aug 5 22:30:02.637154 sudo[1595]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 5 22:30:02.637558 sudo[1595]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 22:30:02.657752 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Aug 5 22:30:02.659526 auditctl[1599]: No rules Aug 5 22:30:02.659931 systemd[1]: audit-rules.service: Deactivated successfully. Aug 5 22:30:02.660145 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Aug 5 22:30:02.663055 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 5 22:30:02.695371 augenrules[1617]: No rules Aug 5 22:30:02.697259 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 5 22:30:02.698866 sudo[1595]: pam_unix(sudo:session): session closed for user root Aug 5 22:30:02.701125 sshd[1592]: pam_unix(sshd:session): session closed for user core Aug 5 22:30:02.713464 systemd[1]: sshd@5-10.0.0.102:22-10.0.0.1:50242.service: Deactivated successfully. Aug 5 22:30:02.715734 systemd[1]: session-6.scope: Deactivated successfully. Aug 5 22:30:02.717989 systemd-logind[1431]: Session 6 logged out. Waiting for processes to exit. Aug 5 22:30:02.723883 systemd[1]: Started sshd@6-10.0.0.102:22-10.0.0.1:50246.service - OpenSSH per-connection server daemon (10.0.0.1:50246). Aug 5 22:30:02.724927 systemd-logind[1431]: Removed session 6. Aug 5 22:30:02.759976 sshd[1625]: Accepted publickey for core from 10.0.0.1 port 50246 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:30:02.761830 sshd[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:30:02.766825 systemd-logind[1431]: New session 7 of user core. Aug 5 22:30:02.773682 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 5 22:30:02.829306 sudo[1628]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 5 22:30:02.829705 sudo[1628]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 22:30:02.957796 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Aug 5 22:30:02.957883 (dockerd)[1639]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 5 22:30:03.211978 dockerd[1639]: time="2024-08-05T22:30:03.211805827Z" level=info msg="Starting up" Aug 5 22:30:03.785465 dockerd[1639]: time="2024-08-05T22:30:03.785380463Z" level=info msg="Loading containers: start." Aug 5 22:30:03.921467 kernel: Initializing XFRM netlink socket Aug 5 22:30:04.011645 systemd-networkd[1384]: docker0: Link UP Aug 5 22:30:04.040004 dockerd[1639]: time="2024-08-05T22:30:04.039844034Z" level=info msg="Loading containers: done." Aug 5 22:30:05.293100 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck856630244-merged.mount: Deactivated successfully. Aug 5 22:30:05.295831 dockerd[1639]: time="2024-08-05T22:30:05.295786887Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 5 22:30:05.296187 dockerd[1639]: time="2024-08-05T22:30:05.296040831Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Aug 5 22:30:05.296187 dockerd[1639]: time="2024-08-05T22:30:05.296176500Z" level=info msg="Daemon has completed initialization" Aug 5 22:30:05.536602 dockerd[1639]: time="2024-08-05T22:30:05.536500931Z" level=info msg="API listen on /run/docker.sock" Aug 5 22:30:05.536780 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 5 22:30:06.682376 containerd[1448]: time="2024-08-05T22:30:06.682041049Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.3\"" Aug 5 22:30:08.581110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3673304638.mount: Deactivated successfully. Aug 5 22:30:09.903164 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 5 22:30:09.913890 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:30:10.429290 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:30:10.461003 (kubelet)[1841]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 22:30:10.554709 kubelet[1841]: E0805 22:30:10.554637 1841 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 22:30:10.564834 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 22:30:10.565125 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
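dockerd reports above that its API is listening on /run/docker.sock. The engine speaks plain HTTP over that unix socket, so a stdlib-only sketch can confirm what the log shows (daemon up, version 24.0.9, overlay2 storage driver) without pulling in the Docker SDK:

```go
// Query the Docker Engine API over the unix socket the log says it listens
// on; GET /version returns JSON including the daemon version ("24.0.9" above).
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"log"
	"net"
	"net/http"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			// Ignore the host in the URL and dial the unix socket instead.
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				var d net.Dialer
				return d.DialContext(ctx, "unix", "/run/docker.sock")
			},
		},
	}

	resp, err := client.Get("http://docker/version") // host part is arbitrary
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var v struct {
		Version       string
		ApiVersion    string
		MinAPIVersion string
	}
	if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("docker %s (API %s, min %s)\n", v.Version, v.ApiVersion, v.MinAPIVersion)
}
```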
Aug 5 22:30:11.053557 containerd[1448]: time="2024-08-05T22:30:11.053432154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:30:11.054727 containerd[1448]: time="2024-08-05T22:30:11.054666465Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.3: active requests=0, bytes read=32773238" Aug 5 22:30:11.056259 containerd[1448]: time="2024-08-05T22:30:11.056205878Z" level=info msg="ImageCreate event name:\"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:30:11.060139 containerd[1448]: time="2024-08-05T22:30:11.060083218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:30:11.061802 containerd[1448]: time="2024-08-05T22:30:11.061745801Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.3\" with image id \"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c\", size \"32770038\" in 4.379644986s" Aug 5 22:30:11.061802 containerd[1448]: time="2024-08-05T22:30:11.061794937Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.3\" returns image reference \"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d\"" Aug 5 22:30:11.120655 containerd[1448]: time="2024-08-05T22:30:11.120583602Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.3\"" Aug 5 22:30:13.919214 containerd[1448]: time="2024-08-05T22:30:13.919114592Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:30:13.920272 containerd[1448]: time="2024-08-05T22:30:13.920206284Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.3: active requests=0, bytes read=29589535" Aug 5 22:30:13.923549 containerd[1448]: time="2024-08-05T22:30:13.923493957Z" level=info msg="ImageCreate event name:\"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:30:13.927603 containerd[1448]: time="2024-08-05T22:30:13.927451914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:30:13.928885 containerd[1448]: time="2024-08-05T22:30:13.928820819Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.3\" with image id \"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7\", size \"31139481\" in 2.80817458s" Aug 5 22:30:13.928885 containerd[1448]: time="2024-08-05T22:30:13.928870695Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.3\" returns image reference \"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e\"" Aug 5 22:30:14.000157 containerd[1448]: 
time="2024-08-05T22:30:14.000032280Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.3\"" Aug 5 22:30:15.522547 containerd[1448]: time="2024-08-05T22:30:15.522430968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:30:15.523322 containerd[1448]: time="2024-08-05T22:30:15.523285932Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.3: active requests=0, bytes read=17779544" Aug 5 22:30:15.524870 containerd[1448]: time="2024-08-05T22:30:15.524813240Z" level=info msg="ImageCreate event name:\"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:30:15.527971 containerd[1448]: time="2024-08-05T22:30:15.527927423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:30:15.531504 containerd[1448]: time="2024-08-05T22:30:15.530295399Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.3\" with image id \"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4\", size \"19329508\" in 1.530212226s" Aug 5 22:30:15.531504 containerd[1448]: time="2024-08-05T22:30:15.530352936Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.3\" returns image reference \"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2\"" Aug 5 22:30:15.555851 containerd[1448]: time="2024-08-05T22:30:15.555783300Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.3\"" Aug 5 22:30:17.118135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount654396172.mount: Deactivated successfully. 
Aug 5 22:30:18.132513 containerd[1448]: time="2024-08-05T22:30:18.132430092Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:30:18.133348 containerd[1448]: time="2024-08-05T22:30:18.133306819Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.3: active requests=0, bytes read=29036435" Aug 5 22:30:18.134661 containerd[1448]: time="2024-08-05T22:30:18.134613549Z" level=info msg="ImageCreate event name:\"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:30:18.137077 containerd[1448]: time="2024-08-05T22:30:18.137032415Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:30:18.137793 containerd[1448]: time="2024-08-05T22:30:18.137735399Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.3\" with image id \"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1\", repo tag \"registry.k8s.io/kube-proxy:v1.30.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65\", size \"29035454\" in 2.581897783s" Aug 5 22:30:18.137830 containerd[1448]: time="2024-08-05T22:30:18.137791175Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.3\" returns image reference \"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1\"" Aug 5 22:30:18.171244 containerd[1448]: time="2024-08-05T22:30:18.171187780Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Aug 5 22:30:19.275201 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2180675036.mount: Deactivated successfully. Aug 5 22:30:20.589299 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 5 22:30:20.598743 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:30:20.820086 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:30:20.826232 (kubelet)[1944]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 22:30:21.370540 kubelet[1944]: E0805 22:30:21.370478 1944 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 22:30:21.374842 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 22:30:21.375045 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Aug 5 22:30:21.377795 containerd[1448]: time="2024-08-05T22:30:21.377723678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:30:21.378720 containerd[1448]: time="2024-08-05T22:30:21.378663589Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Aug 5 22:30:21.386383 containerd[1448]: time="2024-08-05T22:30:21.383870770Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:30:21.388545 containerd[1448]: time="2024-08-05T22:30:21.388485054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:30:21.389780 containerd[1448]: time="2024-08-05T22:30:21.389731993Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 3.218501097s" Aug 5 22:30:21.389834 containerd[1448]: time="2024-08-05T22:30:21.389781339Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Aug 5 22:30:21.413648 containerd[1448]: time="2024-08-05T22:30:21.413591695Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Aug 5 22:30:22.133633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount231000229.mount: Deactivated successfully. 
Aug 5 22:30:22.152926 containerd[1448]: time="2024-08-05T22:30:22.152849493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:30:22.154120 containerd[1448]: time="2024-08-05T22:30:22.154074074Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Aug 5 22:30:22.161603 containerd[1448]: time="2024-08-05T22:30:22.159278303Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:30:22.163839 containerd[1448]: time="2024-08-05T22:30:22.163781352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:30:22.164843 containerd[1448]: time="2024-08-05T22:30:22.164804511Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 751.170684ms" Aug 5 22:30:22.164843 containerd[1448]: time="2024-08-05T22:30:22.164840418Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Aug 5 22:30:22.199254 containerd[1448]: time="2024-08-05T22:30:22.199131653Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Aug 5 22:30:22.993733 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3209857342.mount: Deactivated successfully. Aug 5 22:30:28.391769 containerd[1448]: time="2024-08-05T22:30:28.391697911Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:30:28.427850 containerd[1448]: time="2024-08-05T22:30:28.427799629Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Aug 5 22:30:28.471456 containerd[1448]: time="2024-08-05T22:30:28.471384398Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:30:28.518827 containerd[1448]: time="2024-08-05T22:30:28.518777103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:30:28.520285 containerd[1448]: time="2024-08-05T22:30:28.520254788Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 6.321068911s" Aug 5 22:30:28.520354 containerd[1448]: time="2024-08-05T22:30:28.520290126Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Aug 5 22:30:30.381731 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
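The etcd pull above is by far the largest: about 57.2 MB in roughly 6.32 s, around 9 MB/s, in line with the earlier pulls (kube-apiserver ~7.5 MB/s, kube-controller-manager ~11 MB/s). A trivial back-of-the-envelope check using only the sizes and durations containerd reports:

```go
// Back-of-the-envelope pull throughput from the sizes and durations in the log.
package main

import "fmt"

func main() {
	pulls := []struct {
		ref     string
		bytes   float64 // image size from the log
		seconds float64 // pull duration from the log
	}{
		{"registry.k8s.io/kube-apiserver:v1.30.3", 32770038, 4.379644986},
		{"registry.k8s.io/kube-controller-manager:v1.30.3", 31139481, 2.80817458},
		{"registry.k8s.io/etcd:3.5.12-0", 57236178, 6.321068911},
	}
	for _, p := range pulls {
		fmt.Printf("%-50s %5.1f MB/s\n", p.ref, p.bytes/p.seconds/1e6)
	}
}
```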
Aug 5 22:30:30.396703 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:30:30.415107 systemd[1]: Reloading requested from client PID 2093 ('systemctl') (unit session-7.scope)... Aug 5 22:30:30.415132 systemd[1]: Reloading... Aug 5 22:30:30.499484 zram_generator::config[2130]: No configuration found. Aug 5 22:30:31.279564 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 22:30:31.357799 systemd[1]: Reloading finished in 942 ms. Aug 5 22:30:31.417748 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 5 22:30:31.417870 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 5 22:30:31.418254 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:30:31.420378 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:30:31.582396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:30:31.587990 (kubelet)[2179]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 5 22:30:31.647421 kubelet[2179]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 5 22:30:31.647421 kubelet[2179]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 5 22:30:31.647421 kubelet[2179]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 5 22:30:31.647850 kubelet[2179]: I0805 22:30:31.647476 2179 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 5 22:30:32.058046 kubelet[2179]: I0805 22:30:32.057995 2179 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Aug 5 22:30:32.058046 kubelet[2179]: I0805 22:30:32.058031 2179 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 5 22:30:32.058347 kubelet[2179]: I0805 22:30:32.058275 2179 server.go:927] "Client rotation is on, will bootstrap in background" Aug 5 22:30:32.079476 kubelet[2179]: I0805 22:30:32.079403 2179 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 5 22:30:32.094078 kubelet[2179]: E0805 22:30:32.094025 2179 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.102:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.102:6443: connect: connection refused Aug 5 22:30:32.128900 kubelet[2179]: I0805 22:30:32.128851 2179 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 5 22:30:32.129114 kubelet[2179]: I0805 22:30:32.129074 2179 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 5 22:30:32.130758 kubelet[2179]: I0805 22:30:32.129101 2179 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Aug 5 22:30:32.131604 kubelet[2179]: I0805 22:30:32.131575 2179 topology_manager.go:138] "Creating topology manager with none policy" Aug 5 22:30:32.131728 kubelet[2179]: I0805 22:30:32.131710 2179 container_manager_linux.go:301] "Creating device plugin manager" Aug 5 22:30:32.132152 kubelet[2179]: I0805 22:30:32.132134 2179 state_mem.go:36] "Initialized new in-memory state store" Aug 5 22:30:32.133615 kubelet[2179]: I0805 22:30:32.133594 2179 kubelet.go:400] "Attempting to sync node with API server" Aug 5 22:30:32.133615 kubelet[2179]: I0805 22:30:32.133613 2179 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 5 22:30:32.133688 kubelet[2179]: I0805 22:30:32.133635 2179 kubelet.go:312] "Adding apiserver pod source" Aug 5 22:30:32.133688 kubelet[2179]: I0805 22:30:32.133653 2179 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 5 22:30:32.134200 kubelet[2179]: W0805 22:30:32.134146 2179 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.102:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused Aug 5 22:30:32.134259 kubelet[2179]: E0805 22:30:32.134204 2179 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.102:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused Aug 5 22:30:32.137710 kubelet[2179]: W0805 22:30:32.137681 2179 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.0.0.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused Aug 5 22:30:32.137760 kubelet[2179]: E0805 22:30:32.137714 2179 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused Aug 5 22:30:32.142263 kubelet[2179]: I0805 22:30:32.142245 2179 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Aug 5 22:30:32.144081 kubelet[2179]: I0805 22:30:32.144059 2179 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 5 22:30:32.144126 kubelet[2179]: W0805 22:30:32.144118 2179 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 5 22:30:32.144814 kubelet[2179]: I0805 22:30:32.144795 2179 server.go:1264] "Started kubelet" Aug 5 22:30:32.146109 kubelet[2179]: I0805 22:30:32.146087 2179 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 5 22:30:32.150978 kubelet[2179]: E0805 22:30:32.150856 2179 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.102:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.102:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17e8f5b8f1905078 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-08-05 22:30:32.144769144 +0000 UTC m=+0.551701533,LastTimestamp:2024-08-05 22:30:32.144769144 +0000 UTC m=+0.551701533,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 5 22:30:32.151129 kubelet[2179]: I0805 22:30:32.150974 2179 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 5 22:30:32.151908 kubelet[2179]: I0805 22:30:32.151880 2179 server.go:455] "Adding debug handlers to kubelet server" Aug 5 22:30:32.152693 kubelet[2179]: I0805 22:30:32.152645 2179 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 5 22:30:32.152838 kubelet[2179]: I0805 22:30:32.152801 2179 volume_manager.go:291] "Starting Kubelet Volume Manager" Aug 5 22:30:32.152924 kubelet[2179]: I0805 22:30:32.152900 2179 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 5 22:30:32.158469 kubelet[2179]: E0805 22:30:32.157536 2179 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.102:6443: connect: connection refused" interval="200ms" Aug 5 22:30:32.158469 kubelet[2179]: I0805 22:30:32.157650 2179 reconciler.go:26] "Reconciler: start to sync state" Aug 5 22:30:32.158469 kubelet[2179]: I0805 22:30:32.157686 2179 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Aug 5 22:30:32.158469 kubelet[2179]: W0805 22:30:32.158139 2179 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused Aug 5 22:30:32.158469 kubelet[2179]: E0805 22:30:32.158187 2179 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused Aug 5 22:30:32.159070 kubelet[2179]: I0805 22:30:32.159046 2179 factory.go:221] Registration of the systemd container factory successfully Aug 5 22:30:32.159149 kubelet[2179]: I0805 22:30:32.159131 2179 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 5 22:30:32.160973 kubelet[2179]: I0805 22:30:32.160954 2179 factory.go:221] Registration of the containerd container factory successfully Aug 5 22:30:32.161233 kubelet[2179]: E0805 22:30:32.161209 2179 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 5 22:30:32.175245 kubelet[2179]: I0805 22:30:32.175209 2179 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 5 22:30:32.175245 kubelet[2179]: I0805 22:30:32.175232 2179 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 5 22:30:32.175245 kubelet[2179]: I0805 22:30:32.175250 2179 state_mem.go:36] "Initialized new in-memory state store" Aug 5 22:30:32.254825 kubelet[2179]: I0805 22:30:32.254776 2179 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Aug 5 22:30:32.255140 kubelet[2179]: E0805 22:30:32.255111 2179 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.102:6443/api/v1/nodes\": dial tcp 10.0.0.102:6443: connect: connection refused" node="localhost" Aug 5 22:30:32.359113 kubelet[2179]: E0805 22:30:32.359057 2179 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.102:6443: connect: connection refused" interval="400ms" Aug 5 22:30:32.457285 kubelet[2179]: I0805 22:30:32.457243 2179 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Aug 5 22:30:32.457720 kubelet[2179]: E0805 22:30:32.457680 2179 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.102:6443/api/v1/nodes\": dial tcp 10.0.0.102:6443: connect: connection refused" node="localhost" Aug 5 22:30:32.760107 kubelet[2179]: E0805 22:30:32.759970 2179 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.102:6443: connect: connection refused" interval="800ms" Aug 5 22:30:32.859769 kubelet[2179]: I0805 22:30:32.859698 2179 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Aug 5 22:30:32.860173 kubelet[2179]: E0805 22:30:32.860133 2179 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.102:6443/api/v1/nodes\": dial tcp 10.0.0.102:6443: connect: connection refused" node="localhost" Aug 5 22:30:33.193781 kubelet[2179]: W0805 22:30:33.193702 2179 
reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.102:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused Aug 5 22:30:33.193781 kubelet[2179]: E0805 22:30:33.193767 2179 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.102:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused Aug 5 22:30:33.207277 kubelet[2179]: W0805 22:30:33.207204 2179 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused Aug 5 22:30:33.207277 kubelet[2179]: E0805 22:30:33.207270 2179 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused Aug 5 22:30:33.252279 kubelet[2179]: I0805 22:30:33.252223 2179 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 5 22:30:33.253536 kubelet[2179]: I0805 22:30:33.253470 2179 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 5 22:30:33.253605 kubelet[2179]: I0805 22:30:33.253552 2179 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 5 22:30:33.253605 kubelet[2179]: I0805 22:30:33.253574 2179 kubelet.go:2337] "Starting kubelet main sync loop" Aug 5 22:30:33.253666 kubelet[2179]: E0805 22:30:33.253628 2179 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 5 22:30:33.254945 kubelet[2179]: W0805 22:30:33.254048 2179 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused Aug 5 22:30:33.254945 kubelet[2179]: E0805 22:30:33.254084 2179 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused Aug 5 22:30:33.275178 kubelet[2179]: I0805 22:30:33.275121 2179 policy_none.go:49] "None policy: Start" Aug 5 22:30:33.275870 kubelet[2179]: I0805 22:30:33.275826 2179 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 5 22:30:33.275870 kubelet[2179]: I0805 22:30:33.275867 2179 state_mem.go:35] "Initializing new in-memory state store" Aug 5 22:30:33.340771 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 5 22:30:33.354762 kubelet[2179]: E0805 22:30:33.354718 2179 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 5 22:30:33.362112 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 5 22:30:33.365172 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
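The connection-refused failures above (the client-go reflectors, the node-lease controller, node registration) are the kubelet retrying against https://10.0.0.102:6443 before the kube-apiserver static pod is running; the lease controller's retry interval doubles from 200ms to 400ms and 800ms in the entries above. A minimal Go sketch of an equivalent wait loop, assuming the same endpoint and an arbitrary overall timeout (an illustration of the pattern, not the kubelet's own retry code):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForAPIServer dials the endpoint until a TCP connection succeeds,
// doubling the wait between attempts, roughly mirroring the retry
// behaviour visible in the reflector and lease-controller errors above.
func waitForAPIServer(addr string, maxWait time.Duration) error {
	delay := 200 * time.Millisecond
	deadline := time.Now().Add(maxWait)
	for {
		conn, err := net.DialTimeout("tcp", addr, time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("apiserver %s not reachable: %w", addr, err)
		}
		time.Sleep(delay)
		if delay < 5*time.Second {
			delay *= 2
		}
	}
}

func main() {
	if err := waitForAPIServer("10.0.0.102:6443", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver endpoint is accepting connections")
}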
Aug 5 22:30:33.379638 kubelet[2179]: I0805 22:30:33.379563 2179 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 5 22:30:33.379977 kubelet[2179]: I0805 22:30:33.379833 2179 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 5 22:30:33.379977 kubelet[2179]: I0805 22:30:33.379963 2179 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 5 22:30:33.380920 kubelet[2179]: E0805 22:30:33.380897 2179 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Aug 5 22:30:33.555980 kubelet[2179]: I0805 22:30:33.555805 2179 topology_manager.go:215] "Topology Admit Handler" podUID="c7e80ade5741b95540d8bed18be0d5a2" podNamespace="kube-system" podName="kube-apiserver-localhost" Aug 5 22:30:33.557095 kubelet[2179]: I0805 22:30:33.557060 2179 topology_manager.go:215] "Topology Admit Handler" podUID="471a108742c0b3658d07e3bda7ae5d17" podNamespace="kube-system" podName="kube-controller-manager-localhost" Aug 5 22:30:33.558346 kubelet[2179]: I0805 22:30:33.558323 2179 topology_manager.go:215] "Topology Admit Handler" podUID="3b0306f30b5bc847ed1d56b34a56bbaf" podNamespace="kube-system" podName="kube-scheduler-localhost" Aug 5 22:30:33.560538 kubelet[2179]: E0805 22:30:33.560507 2179 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.102:6443: connect: connection refused" interval="1.6s" Aug 5 22:30:33.563911 systemd[1]: Created slice kubepods-burstable-podc7e80ade5741b95540d8bed18be0d5a2.slice - libcontainer container kubepods-burstable-podc7e80ade5741b95540d8bed18be0d5a2.slice. 
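The three "Topology Admit Handler" entries are the control-plane static pods picked up from the manifest directory; for each one systemd creates a slice named from the pod's QoS class and UID, as in kubepods-burstable-podc7e80ade5741b95540d8bed18be0d5a2.slice above. A small sketch of that naming rule as it appears in this log (the kubelet's real cgroup naming lives in its container-manager code; this only mirrors the observable pattern):

package main

import (
	"fmt"
	"strings"
)

// sliceNameFor reproduces the systemd slice names seen in the log:
// kubepods-<qos>-pod<uid>.slice, with dashes in the pod UID mapped to
// underscores because '-' acts as a hierarchy separator in slice unit names.
func sliceNameFor(qos, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(sliceNameFor("burstable", "c7e80ade5741b95540d8bed18be0d5a2"))
	// kubepods-burstable-podc7e80ade5741b95540d8bed18be0d5a2.slice
	fmt.Println(sliceNameFor("besteffort", "1deb0319-f80c-41b9-b9fe-06308fcfba04"))
	// kubepods-besteffort-pod1deb0319_f80c_41b9_b9fe_06308fcfba04.slice
}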
Aug 5 22:30:33.566602 kubelet[2179]: I0805 22:30:33.566579 2179 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/471a108742c0b3658d07e3bda7ae5d17-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"471a108742c0b3658d07e3bda7ae5d17\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 22:30:33.566681 kubelet[2179]: I0805 22:30:33.566611 2179 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/471a108742c0b3658d07e3bda7ae5d17-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"471a108742c0b3658d07e3bda7ae5d17\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 22:30:33.566681 kubelet[2179]: I0805 22:30:33.566631 2179 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/471a108742c0b3658d07e3bda7ae5d17-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"471a108742c0b3658d07e3bda7ae5d17\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 22:30:33.566681 kubelet[2179]: I0805 22:30:33.566655 2179 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c7e80ade5741b95540d8bed18be0d5a2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c7e80ade5741b95540d8bed18be0d5a2\") " pod="kube-system/kube-apiserver-localhost" Aug 5 22:30:33.566681 kubelet[2179]: I0805 22:30:33.566676 2179 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c7e80ade5741b95540d8bed18be0d5a2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c7e80ade5741b95540d8bed18be0d5a2\") " pod="kube-system/kube-apiserver-localhost" Aug 5 22:30:33.566801 kubelet[2179]: I0805 22:30:33.566696 2179 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c7e80ade5741b95540d8bed18be0d5a2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c7e80ade5741b95540d8bed18be0d5a2\") " pod="kube-system/kube-apiserver-localhost" Aug 5 22:30:33.566801 kubelet[2179]: I0805 22:30:33.566718 2179 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/471a108742c0b3658d07e3bda7ae5d17-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"471a108742c0b3658d07e3bda7ae5d17\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 22:30:33.566801 kubelet[2179]: I0805 22:30:33.566736 2179 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/471a108742c0b3658d07e3bda7ae5d17-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"471a108742c0b3658d07e3bda7ae5d17\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 22:30:33.576037 systemd[1]: Created slice kubepods-burstable-pod471a108742c0b3658d07e3bda7ae5d17.slice - libcontainer container kubepods-burstable-pod471a108742c0b3658d07e3bda7ae5d17.slice. 
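The reconciler entries above verify host-path volumes (certificates and kubeconfigs) for the apiserver and controller-manager pods, and earlier the kubelet noted it recreated the missing Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/. A sketch of that check-and-recreate step, with the directory mode chosen purely for illustration:

package main

import (
	"fmt"
	"os"
)

// The kubelet logged that the Flexvolume plugin directory was missing and
// recreated it; this does the equivalent check-and-create by hand.
const flexVolumeDir = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/"

func main() {
	if _, err := os.Stat(flexVolumeDir); os.IsNotExist(err) {
		if err := os.MkdirAll(flexVolumeDir, 0o755); err != nil {
			fmt.Println("recreate failed:", err)
			return
		}
		fmt.Println("recreated", flexVolumeDir)
		return
	}
	fmt.Println(flexVolumeDir, "already present")
}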
Aug 5 22:30:33.592401 systemd[1]: Created slice kubepods-burstable-pod3b0306f30b5bc847ed1d56b34a56bbaf.slice - libcontainer container kubepods-burstable-pod3b0306f30b5bc847ed1d56b34a56bbaf.slice. Aug 5 22:30:33.635621 kubelet[2179]: W0805 22:30:33.635536 2179 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused Aug 5 22:30:33.635621 kubelet[2179]: E0805 22:30:33.635617 2179 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused Aug 5 22:30:33.662281 kubelet[2179]: I0805 22:30:33.662238 2179 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Aug 5 22:30:33.662731 kubelet[2179]: E0805 22:30:33.662680 2179 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.102:6443/api/v1/nodes\": dial tcp 10.0.0.102:6443: connect: connection refused" node="localhost" Aug 5 22:30:33.667993 kubelet[2179]: I0805 22:30:33.667911 2179 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3b0306f30b5bc847ed1d56b34a56bbaf-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"3b0306f30b5bc847ed1d56b34a56bbaf\") " pod="kube-system/kube-scheduler-localhost" Aug 5 22:30:33.874224 kubelet[2179]: E0805 22:30:33.874182 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:30:33.874981 containerd[1448]: time="2024-08-05T22:30:33.874929367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c7e80ade5741b95540d8bed18be0d5a2,Namespace:kube-system,Attempt:0,}" Aug 5 22:30:33.888232 kubelet[2179]: E0805 22:30:33.888184 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:30:33.888720 containerd[1448]: time="2024-08-05T22:30:33.888661792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:471a108742c0b3658d07e3bda7ae5d17,Namespace:kube-system,Attempt:0,}" Aug 5 22:30:33.895945 kubelet[2179]: E0805 22:30:33.895918 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:30:33.896322 containerd[1448]: time="2024-08-05T22:30:33.896287461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:3b0306f30b5bc847ed1d56b34a56bbaf,Namespace:kube-system,Attempt:0,}" Aug 5 22:30:34.134825 kubelet[2179]: E0805 22:30:34.134721 2179 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.102:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.102:6443: connect: connection refused Aug 5 22:30:34.532284 kubelet[2179]: W0805 22:30:34.532132 2179 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to 
list *v1.RuntimeClass: Get "https://10.0.0.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused Aug 5 22:30:34.532284 kubelet[2179]: E0805 22:30:34.532188 2179 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused Aug 5 22:30:34.883955 kubelet[2179]: W0805 22:30:34.883803 2179 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.102:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused Aug 5 22:30:34.883955 kubelet[2179]: E0805 22:30:34.883882 2179 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.102:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused Aug 5 22:30:35.162283 kubelet[2179]: E0805 22:30:35.162042 2179 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.102:6443: connect: connection refused" interval="3.2s" Aug 5 22:30:35.272495 kubelet[2179]: I0805 22:30:35.271821 2179 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Aug 5 22:30:35.272495 kubelet[2179]: E0805 22:30:35.272243 2179 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.102:6443/api/v1/nodes\": dial tcp 10.0.0.102:6443: connect: connection refused" node="localhost" Aug 5 22:30:35.561501 kubelet[2179]: W0805 22:30:35.561186 2179 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused Aug 5 22:30:35.561501 kubelet[2179]: E0805 22:30:35.561238 2179 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused Aug 5 22:30:35.796113 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1285735516.mount: Deactivated successfully. 
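The recurring dns.go "Nameserver limits exceeded" errors mean the node's resolv.conf lists more nameservers than the resolver limit of three, so the kubelet applies only 1.1.1.1, 1.0.0.1 and 8.8.8.8 and drops the rest. A sketch that reproduces that check against /etc/resolv.conf:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// The resolver (and therefore the kubelet's pod DNS setup) honours at most
// three nameserver entries; extra ones are dropped, which is what the
// recurring dns.go "Nameserver limits exceeded" warnings report.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Println("applied:", strings.Join(servers[:maxNameservers], " "))
		fmt.Println("omitted:", strings.Join(servers[maxNameservers:], " "))
	} else {
		fmt.Println("applied:", strings.Join(servers, " "))
	}
}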
Aug 5 22:30:35.829251 containerd[1448]: time="2024-08-05T22:30:35.828677761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:30:35.839113 containerd[1448]: time="2024-08-05T22:30:35.839012138Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:30:35.840783 containerd[1448]: time="2024-08-05T22:30:35.840675890Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Aug 5 22:30:35.841487 containerd[1448]: time="2024-08-05T22:30:35.841405710Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:30:35.844512 containerd[1448]: time="2024-08-05T22:30:35.844266779Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 5 22:30:35.856586 containerd[1448]: time="2024-08-05T22:30:35.856375992Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:30:35.858280 containerd[1448]: time="2024-08-05T22:30:35.858109858Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 5 22:30:35.868720 containerd[1448]: time="2024-08-05T22:30:35.865825571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:30:35.868720 containerd[1448]: time="2024-08-05T22:30:35.867252960Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.992204092s" Aug 5 22:30:35.873062 containerd[1448]: time="2024-08-05T22:30:35.872990888Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.976629785s" Aug 5 22:30:35.878544 containerd[1448]: time="2024-08-05T22:30:35.878248636Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.989475579s" Aug 5 22:30:36.106098 kubelet[2179]: W0805 22:30:36.098608 2179 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused Aug 5 22:30:36.106098 
kubelet[2179]: E0805 22:30:36.098672 2179 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.102:6443: connect: connection refused Aug 5 22:30:36.138061 containerd[1448]: time="2024-08-05T22:30:36.135040043Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:30:36.138061 containerd[1448]: time="2024-08-05T22:30:36.135143070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:30:36.138061 containerd[1448]: time="2024-08-05T22:30:36.135190812Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:30:36.138061 containerd[1448]: time="2024-08-05T22:30:36.135220079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:30:36.142856 containerd[1448]: time="2024-08-05T22:30:36.140171447Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:30:36.142856 containerd[1448]: time="2024-08-05T22:30:36.141718140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:30:36.142856 containerd[1448]: time="2024-08-05T22:30:36.141776713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:30:36.142856 containerd[1448]: time="2024-08-05T22:30:36.141799547Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:30:36.142856 containerd[1448]: time="2024-08-05T22:30:36.141815547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:30:36.142856 containerd[1448]: time="2024-08-05T22:30:36.140251781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:30:36.148484 containerd[1448]: time="2024-08-05T22:30:36.144677362Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:30:36.148484 containerd[1448]: time="2024-08-05T22:30:36.144720904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:30:36.224258 systemd[1]: Started cri-containerd-8da68e57f45bbe7ecdd101283c225dd71dc58d843d8c0a5252509d2f88cd64bd.scope - libcontainer container 8da68e57f45bbe7ecdd101283c225dd71dc58d843d8c0a5252509d2f88cd64bd. Aug 5 22:30:36.234734 systemd[1]: Started cri-containerd-9d475cb633886d532797d7c90cb29ef2070c817b5a7e3040546bde5180928e7f.scope - libcontainer container 9d475cb633886d532797d7c90cb29ef2070c817b5a7e3040546bde5180928e7f. Aug 5 22:30:36.243069 systemd[1]: Started cri-containerd-cdb7253b71a6a4407e74e70ec162c0df09178c16af5cca30e39c40415dcadb7f.scope - libcontainer container cdb7253b71a6a4407e74e70ec162c0df09178c16af5cca30e39c40415dcadb7f. 
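The cri-containerd scopes just started were created from the pause:3.8 image whose pull results are logged above. A sketch that extracts the interesting fields from one of those "Pulled image" messages (the sample is copied from the log with its quote-escaping removed):

package main

import (
	"fmt"
	"regexp"
)

// One "Pulled image" message body from the containerd log above; the
// regexp pulls out the fields the pod sandboxes were created from.
const sample = `Pulled image "registry.k8s.io/pause:3.8" with image id "sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517", repo tag "registry.k8s.io/pause:3.8", repo digest "registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d", size "311286" in 1.992204092s`

var pulled = regexp.MustCompile(`Pulled image "([^"]+)" with image id "([^"]+)", repo tag "[^"]+", repo digest "([^"]+)", size "(\d+)" in (\S+)`)

func main() {
	m := pulled.FindStringSubmatch(sample)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Printf("image:    %s\nimage id: %s\ndigest:   %s\nsize:     %s bytes\ntook:     %s\n",
		m[1], m[2], m[3], m[4], m[5])
}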
Aug 5 22:30:36.348667 containerd[1448]: time="2024-08-05T22:30:36.348569130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c7e80ade5741b95540d8bed18be0d5a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"cdb7253b71a6a4407e74e70ec162c0df09178c16af5cca30e39c40415dcadb7f\"" Aug 5 22:30:36.355706 kubelet[2179]: E0805 22:30:36.355654 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:30:36.365071 containerd[1448]: time="2024-08-05T22:30:36.359907287Z" level=info msg="CreateContainer within sandbox \"cdb7253b71a6a4407e74e70ec162c0df09178c16af5cca30e39c40415dcadb7f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 5 22:30:36.365071 containerd[1448]: time="2024-08-05T22:30:36.361752934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:471a108742c0b3658d07e3bda7ae5d17,Namespace:kube-system,Attempt:0,} returns sandbox id \"8da68e57f45bbe7ecdd101283c225dd71dc58d843d8c0a5252509d2f88cd64bd\"" Aug 5 22:30:36.365259 kubelet[2179]: E0805 22:30:36.365049 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:30:36.367966 containerd[1448]: time="2024-08-05T22:30:36.367539613Z" level=info msg="CreateContainer within sandbox \"8da68e57f45bbe7ecdd101283c225dd71dc58d843d8c0a5252509d2f88cd64bd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 5 22:30:36.387033 containerd[1448]: time="2024-08-05T22:30:36.386933658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:3b0306f30b5bc847ed1d56b34a56bbaf,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d475cb633886d532797d7c90cb29ef2070c817b5a7e3040546bde5180928e7f\"" Aug 5 22:30:36.391220 kubelet[2179]: E0805 22:30:36.390755 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:30:36.403921 containerd[1448]: time="2024-08-05T22:30:36.402263335Z" level=info msg="CreateContainer within sandbox \"9d475cb633886d532797d7c90cb29ef2070c817b5a7e3040546bde5180928e7f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 5 22:30:36.429793 containerd[1448]: time="2024-08-05T22:30:36.429694462Z" level=info msg="CreateContainer within sandbox \"8da68e57f45bbe7ecdd101283c225dd71dc58d843d8c0a5252509d2f88cd64bd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4db668472a0a5a88ff1fd914d2f49a93a2717286901be141609cac27ab1fe32c\"" Aug 5 22:30:36.431131 containerd[1448]: time="2024-08-05T22:30:36.430809598Z" level=info msg="StartContainer for \"4db668472a0a5a88ff1fd914d2f49a93a2717286901be141609cac27ab1fe32c\"" Aug 5 22:30:36.497626 containerd[1448]: time="2024-08-05T22:30:36.497573638Z" level=info msg="CreateContainer within sandbox \"cdb7253b71a6a4407e74e70ec162c0df09178c16af5cca30e39c40415dcadb7f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7c1408fa352964cba9016c87ed51a003e13ec09f757b02efdde254c4d34760eb\"" Aug 5 22:30:36.498836 systemd[1]: Started cri-containerd-4db668472a0a5a88ff1fd914d2f49a93a2717286901be141609cac27ab1fe32c.scope - libcontainer container 
4db668472a0a5a88ff1fd914d2f49a93a2717286901be141609cac27ab1fe32c. Aug 5 22:30:36.501341 containerd[1448]: time="2024-08-05T22:30:36.501304669Z" level=info msg="StartContainer for \"7c1408fa352964cba9016c87ed51a003e13ec09f757b02efdde254c4d34760eb\"" Aug 5 22:30:36.552973 containerd[1448]: time="2024-08-05T22:30:36.552880223Z" level=info msg="CreateContainer within sandbox \"9d475cb633886d532797d7c90cb29ef2070c817b5a7e3040546bde5180928e7f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"389bd41dca63aabcabf92a536f6f3ada116b6922ab5117f55d6ef6b4b7760ae6\"" Aug 5 22:30:36.559392 containerd[1448]: time="2024-08-05T22:30:36.559274557Z" level=info msg="StartContainer for \"389bd41dca63aabcabf92a536f6f3ada116b6922ab5117f55d6ef6b4b7760ae6\"" Aug 5 22:30:36.592968 systemd[1]: Started cri-containerd-7c1408fa352964cba9016c87ed51a003e13ec09f757b02efdde254c4d34760eb.scope - libcontainer container 7c1408fa352964cba9016c87ed51a003e13ec09f757b02efdde254c4d34760eb. Aug 5 22:30:36.644065 systemd[1]: Started cri-containerd-389bd41dca63aabcabf92a536f6f3ada116b6922ab5117f55d6ef6b4b7760ae6.scope - libcontainer container 389bd41dca63aabcabf92a536f6f3ada116b6922ab5117f55d6ef6b4b7760ae6. Aug 5 22:30:36.651457 containerd[1448]: time="2024-08-05T22:30:36.651196437Z" level=info msg="StartContainer for \"4db668472a0a5a88ff1fd914d2f49a93a2717286901be141609cac27ab1fe32c\" returns successfully" Aug 5 22:30:36.751044 containerd[1448]: time="2024-08-05T22:30:36.749816513Z" level=info msg="StartContainer for \"7c1408fa352964cba9016c87ed51a003e13ec09f757b02efdde254c4d34760eb\" returns successfully" Aug 5 22:30:36.788043 containerd[1448]: time="2024-08-05T22:30:36.787390477Z" level=info msg="StartContainer for \"389bd41dca63aabcabf92a536f6f3ada116b6922ab5117f55d6ef6b4b7760ae6\" returns successfully" Aug 5 22:30:37.274150 kubelet[2179]: E0805 22:30:37.274084 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:30:37.284516 kubelet[2179]: E0805 22:30:37.284467 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:30:37.290151 kubelet[2179]: E0805 22:30:37.290090 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:30:38.327859 kubelet[2179]: E0805 22:30:38.296234 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:30:38.475031 kubelet[2179]: I0805 22:30:38.474623 2179 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Aug 5 22:30:39.952667 kubelet[2179]: E0805 22:30:39.951064 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:30:40.270041 kubelet[2179]: E0805 22:30:40.264766 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:30:40.554369 kubelet[2179]: E0805 22:30:40.553708 2179 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not 
found" node="localhost" Aug 5 22:30:40.689747 kubelet[2179]: E0805 22:30:40.689509 2179 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.17e8f5b8f1905078 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-08-05 22:30:32.144769144 +0000 UTC m=+0.551701533,LastTimestamp:2024-08-05 22:30:32.144769144 +0000 UTC m=+0.551701533,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 5 22:30:40.749866 kubelet[2179]: I0805 22:30:40.749799 2179 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Aug 5 22:30:40.776256 kubelet[2179]: E0805 22:30:40.770616 2179 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.17e8f5b8f28b02bd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-08-05 22:30:32.161198781 +0000 UTC m=+0.568131151,LastTimestamp:2024-08-05 22:30:32.161198781 +0000 UTC m=+0.568131151,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 5 22:30:40.803168 kubelet[2179]: E0805 22:30:40.803109 2179 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 5 22:30:40.852862 kubelet[2179]: E0805 22:30:40.852527 2179 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.17e8f5b8f359bf1c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-08-05 22:30:32.17474742 +0000 UTC m=+0.581679789,LastTimestamp:2024-08-05 22:30:32.17474742 +0000 UTC m=+0.581679789,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 5 22:30:40.904292 kubelet[2179]: E0805 22:30:40.904213 2179 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 5 22:30:41.007707 kubelet[2179]: E0805 22:30:41.007609 2179 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 5 22:30:41.108541 kubelet[2179]: E0805 22:30:41.108335 2179 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 5 22:30:41.209032 kubelet[2179]: E0805 22:30:41.208967 2179 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 5 22:30:41.310851 kubelet[2179]: E0805 22:30:41.309632 2179 kubelet_node_status.go:462] "Error 
getting the current node from lister" err="node \"localhost\" not found" Aug 5 22:30:41.410646 kubelet[2179]: E0805 22:30:41.410108 2179 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 5 22:30:41.513899 kubelet[2179]: E0805 22:30:41.513687 2179 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 5 22:30:41.629980 kubelet[2179]: E0805 22:30:41.629914 2179 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 5 22:30:41.732561 kubelet[2179]: E0805 22:30:41.730842 2179 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 5 22:30:41.748067 update_engine[1435]: I0805 22:30:41.747951 1435 update_attempter.cc:509] Updating boot flags... Aug 5 22:30:41.840501 kubelet[2179]: E0805 22:30:41.832893 2179 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 5 22:30:42.024000 kubelet[2179]: E0805 22:30:42.020122 2179 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 5 22:30:42.040518 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2461) Aug 5 22:30:42.120923 kubelet[2179]: E0805 22:30:42.120831 2179 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 5 22:30:42.224608 kubelet[2179]: E0805 22:30:42.224536 2179 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 5 22:30:42.233481 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2462) Aug 5 22:30:42.325489 kubelet[2179]: E0805 22:30:42.325294 2179 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 5 22:30:42.426290 kubelet[2179]: E0805 22:30:42.426125 2179 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 5 22:30:43.274615 kubelet[2179]: I0805 22:30:43.274487 2179 apiserver.go:52] "Watching apiserver" Aug 5 22:30:43.360847 kubelet[2179]: I0805 22:30:43.359039 2179 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Aug 5 22:30:44.528226 systemd[1]: Reloading requested from client PID 2469 ('systemctl') (unit session-7.scope)... Aug 5 22:30:44.528250 systemd[1]: Reloading... Aug 5 22:30:44.727686 zram_generator::config[2509]: No configuration found. Aug 5 22:30:45.031571 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 22:30:45.173687 systemd[1]: Reloading finished in 644 ms. Aug 5 22:30:45.246235 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:30:45.267707 systemd[1]: kubelet.service: Deactivated successfully. Aug 5 22:30:45.268318 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:30:45.268513 systemd[1]: kubelet.service: Consumed 1.329s CPU time, 120.7M memory peak, 0B memory swap peak. Aug 5 22:30:45.284191 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:30:45.566503 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
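systemd has now restarted kubelet.service with a new main process (PID 2551 below). A quick way to confirm the restarted kubelet is serving again is its local healthz endpoint; this sketch assumes the default healthz address 127.0.0.1:10248, which is separate from the authenticated API on port 10250 that the log shows it listening on:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// Probe the kubelet's local healthz endpoint after a restart.
func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("http://127.0.0.1:10248/healthz")
	if err != nil {
		fmt.Println("kubelet not (yet) healthy:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("kubelet healthz: %d %s\n", resp.StatusCode, body)
}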
Aug 5 22:30:45.582833 (kubelet)[2551]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 5 22:30:45.673011 kubelet[2551]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 5 22:30:45.673011 kubelet[2551]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 5 22:30:45.673011 kubelet[2551]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 5 22:30:45.673011 kubelet[2551]: I0805 22:30:45.672080 2551 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 5 22:30:45.681895 kubelet[2551]: I0805 22:30:45.681022 2551 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Aug 5 22:30:45.681895 kubelet[2551]: I0805 22:30:45.681058 2551 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 5 22:30:45.681895 kubelet[2551]: I0805 22:30:45.681305 2551 server.go:927] "Client rotation is on, will bootstrap in background" Aug 5 22:30:45.683849 kubelet[2551]: I0805 22:30:45.682655 2551 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 5 22:30:45.684518 kubelet[2551]: I0805 22:30:45.684229 2551 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 5 22:30:45.734012 sudo[2566]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 5 22:30:45.734417 sudo[2566]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Aug 5 22:30:45.815314 kubelet[2551]: I0805 22:30:45.814751 2551 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 5 22:30:45.815314 kubelet[2551]: I0805 22:30:45.815057 2551 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 5 22:30:45.815314 kubelet[2551]: I0805 22:30:45.815093 2551 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Aug 5 22:30:45.815675 kubelet[2551]: I0805 22:30:45.815341 2551 topology_manager.go:138] "Creating topology manager with none policy" Aug 5 22:30:45.815675 kubelet[2551]: I0805 22:30:45.815355 2551 container_manager_linux.go:301] "Creating device plugin manager" Aug 5 22:30:45.815675 kubelet[2551]: I0805 22:30:45.815413 2551 state_mem.go:36] "Initialized new in-memory state store" Aug 5 22:30:45.820804 kubelet[2551]: I0805 22:30:45.819884 2551 kubelet.go:400] "Attempting to sync node with API server" Aug 5 22:30:45.820804 kubelet[2551]: I0805 22:30:45.819924 2551 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 5 22:30:45.820804 kubelet[2551]: I0805 22:30:45.819955 2551 kubelet.go:312] "Adding apiserver pod source" Aug 5 22:30:45.820804 kubelet[2551]: I0805 22:30:45.819980 2551 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 5 22:30:45.826139 kubelet[2551]: I0805 22:30:45.826081 2551 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Aug 5 22:30:45.826905 kubelet[2551]: I0805 22:30:45.826343 2551 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 5 22:30:45.851636 kubelet[2551]: I0805 22:30:45.851591 2551 server.go:1264] "Started kubelet" Aug 5 22:30:45.854920 kubelet[2551]: I0805 22:30:45.854846 2551 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 5 22:30:45.855361 kubelet[2551]: I0805 22:30:45.855333 2551 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 5 22:30:45.855420 
kubelet[2551]: I0805 22:30:45.855389 2551 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 5 22:30:45.858116 kubelet[2551]: I0805 22:30:45.858019 2551 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 5 22:30:45.861201 kubelet[2551]: I0805 22:30:45.861175 2551 server.go:455] "Adding debug handlers to kubelet server" Aug 5 22:30:45.862992 kubelet[2551]: I0805 22:30:45.862964 2551 volume_manager.go:291] "Starting Kubelet Volume Manager" Aug 5 22:30:45.863071 kubelet[2551]: I0805 22:30:45.863058 2551 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Aug 5 22:30:45.863303 kubelet[2551]: I0805 22:30:45.863281 2551 reconciler.go:26] "Reconciler: start to sync state" Aug 5 22:30:45.871415 kubelet[2551]: I0805 22:30:45.871380 2551 factory.go:221] Registration of the systemd container factory successfully Aug 5 22:30:45.872900 kubelet[2551]: I0805 22:30:45.872331 2551 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 5 22:30:45.881479 kubelet[2551]: I0805 22:30:45.878569 2551 factory.go:221] Registration of the containerd container factory successfully Aug 5 22:30:45.886807 kubelet[2551]: E0805 22:30:45.886756 2551 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 5 22:30:45.932825 kubelet[2551]: I0805 22:30:45.932400 2551 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 5 22:30:45.939244 kubelet[2551]: I0805 22:30:45.938754 2551 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 5 22:30:45.939244 kubelet[2551]: I0805 22:30:45.938825 2551 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 5 22:30:45.939244 kubelet[2551]: I0805 22:30:45.938891 2551 kubelet.go:2337] "Starting kubelet main sync loop" Aug 5 22:30:45.939244 kubelet[2551]: E0805 22:30:45.938971 2551 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 5 22:30:45.972592 kubelet[2551]: I0805 22:30:45.969843 2551 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Aug 5 22:30:45.988005 kubelet[2551]: I0805 22:30:45.987678 2551 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 5 22:30:45.988005 kubelet[2551]: I0805 22:30:45.987700 2551 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 5 22:30:45.988005 kubelet[2551]: I0805 22:30:45.987728 2551 state_mem.go:36] "Initialized new in-memory state store" Aug 5 22:30:45.988005 kubelet[2551]: I0805 22:30:45.987911 2551 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 5 22:30:45.988005 kubelet[2551]: I0805 22:30:45.987923 2551 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 5 22:30:45.988005 kubelet[2551]: I0805 22:30:45.987943 2551 policy_none.go:49] "None policy: Start" Aug 5 22:30:45.993150 kubelet[2551]: I0805 22:30:45.988848 2551 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 5 22:30:45.993150 kubelet[2551]: I0805 22:30:45.988871 2551 state_mem.go:35] "Initializing new in-memory state store" Aug 5 22:30:45.993150 kubelet[2551]: I0805 22:30:45.989048 2551 state_mem.go:75] "Updated machine memory state" Aug 5 22:30:46.002344 kubelet[2551]: I0805 22:30:46.001813 2551 kubelet_node_status.go:112] "Node was previously 
registered" node="localhost" Aug 5 22:30:46.002344 kubelet[2551]: I0805 22:30:46.001919 2551 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Aug 5 22:30:46.018318 kubelet[2551]: I0805 22:30:46.014055 2551 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 5 22:30:46.018318 kubelet[2551]: I0805 22:30:46.014310 2551 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 5 22:30:46.018318 kubelet[2551]: I0805 22:30:46.014468 2551 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 5 22:30:46.039780 kubelet[2551]: I0805 22:30:46.039737 2551 topology_manager.go:215] "Topology Admit Handler" podUID="c7e80ade5741b95540d8bed18be0d5a2" podNamespace="kube-system" podName="kube-apiserver-localhost" Aug 5 22:30:46.040081 kubelet[2551]: I0805 22:30:46.040061 2551 topology_manager.go:215] "Topology Admit Handler" podUID="471a108742c0b3658d07e3bda7ae5d17" podNamespace="kube-system" podName="kube-controller-manager-localhost" Aug 5 22:30:46.040556 kubelet[2551]: I0805 22:30:46.040270 2551 topology_manager.go:215] "Topology Admit Handler" podUID="3b0306f30b5bc847ed1d56b34a56bbaf" podNamespace="kube-system" podName="kube-scheduler-localhost" Aug 5 22:30:46.068526 kubelet[2551]: I0805 22:30:46.066349 2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/471a108742c0b3658d07e3bda7ae5d17-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"471a108742c0b3658d07e3bda7ae5d17\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 22:30:46.068526 kubelet[2551]: I0805 22:30:46.066388 2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c7e80ade5741b95540d8bed18be0d5a2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c7e80ade5741b95540d8bed18be0d5a2\") " pod="kube-system/kube-apiserver-localhost" Aug 5 22:30:46.068526 kubelet[2551]: I0805 22:30:46.066417 2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c7e80ade5741b95540d8bed18be0d5a2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c7e80ade5741b95540d8bed18be0d5a2\") " pod="kube-system/kube-apiserver-localhost" Aug 5 22:30:46.068526 kubelet[2551]: I0805 22:30:46.066461 2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/471a108742c0b3658d07e3bda7ae5d17-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"471a108742c0b3658d07e3bda7ae5d17\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 22:30:46.068526 kubelet[2551]: I0805 22:30:46.066484 2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/471a108742c0b3658d07e3bda7ae5d17-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"471a108742c0b3658d07e3bda7ae5d17\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 22:30:46.070110 kubelet[2551]: I0805 22:30:46.066508 2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/471a108742c0b3658d07e3bda7ae5d17-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"471a108742c0b3658d07e3bda7ae5d17\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 22:30:46.070110 kubelet[2551]: I0805 22:30:46.066530 2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c7e80ade5741b95540d8bed18be0d5a2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c7e80ade5741b95540d8bed18be0d5a2\") " pod="kube-system/kube-apiserver-localhost" Aug 5 22:30:46.070110 kubelet[2551]: I0805 22:30:46.066550 2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/471a108742c0b3658d07e3bda7ae5d17-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"471a108742c0b3658d07e3bda7ae5d17\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 22:30:46.070110 kubelet[2551]: I0805 22:30:46.066572 2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3b0306f30b5bc847ed1d56b34a56bbaf-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"3b0306f30b5bc847ed1d56b34a56bbaf\") " pod="kube-system/kube-scheduler-localhost" Aug 5 22:30:46.382610 kubelet[2551]: E0805 22:30:46.380544 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:30:46.386821 kubelet[2551]: E0805 22:30:46.385367 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:30:46.386821 kubelet[2551]: E0805 22:30:46.385927 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:30:46.828694 kubelet[2551]: I0805 22:30:46.825076 2551 apiserver.go:52] "Watching apiserver" Aug 5 22:30:46.958634 kubelet[2551]: I0805 22:30:46.958265 2551 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Aug 5 22:30:46.981244 kubelet[2551]: E0805 22:30:46.980969 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:30:46.982484 kubelet[2551]: E0805 22:30:46.981996 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:30:46.982484 kubelet[2551]: E0805 22:30:46.982411 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:30:47.192994 kubelet[2551]: I0805 22:30:47.192866 2551 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.192842309 podStartE2EDuration="1.192842309s" podCreationTimestamp="2024-08-05 22:30:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:30:47.189969543 +0000 UTC 
m=+1.596431283" watchObservedRunningTime="2024-08-05 22:30:47.192842309 +0000 UTC m=+1.599304049" Aug 5 22:30:47.192994 kubelet[2551]: I0805 22:30:47.193003 2551 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.193000018 podStartE2EDuration="1.193000018s" podCreationTimestamp="2024-08-05 22:30:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:30:47.175075989 +0000 UTC m=+1.581537729" watchObservedRunningTime="2024-08-05 22:30:47.193000018 +0000 UTC m=+1.599461758" Aug 5 22:30:47.226754 kubelet[2551]: I0805 22:30:47.223511 2551 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.223488149 podStartE2EDuration="1.223488149s" podCreationTimestamp="2024-08-05 22:30:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:30:47.221431834 +0000 UTC m=+1.627893584" watchObservedRunningTime="2024-08-05 22:30:47.223488149 +0000 UTC m=+1.629949889" Aug 5 22:30:47.581337 sudo[2566]: pam_unix(sudo:session): session closed for user root Aug 5 22:30:47.984477 kubelet[2551]: E0805 22:30:47.983649 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:30:49.668020 sudo[1628]: pam_unix(sudo:session): session closed for user root Aug 5 22:30:49.670429 sshd[1625]: pam_unix(sshd:session): session closed for user core Aug 5 22:30:49.675308 systemd[1]: sshd@6-10.0.0.102:22-10.0.0.1:50246.service: Deactivated successfully. Aug 5 22:30:49.677713 systemd[1]: session-7.scope: Deactivated successfully. Aug 5 22:30:49.677937 systemd[1]: session-7.scope: Consumed 6.540s CPU time, 143.1M memory peak, 0B memory swap peak. Aug 5 22:30:49.678530 systemd-logind[1431]: Session 7 logged out. Waiting for processes to exit. Aug 5 22:30:49.679373 systemd-logind[1431]: Removed session 7. 
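The container-manager configuration logged at the restart above carries the hard eviction thresholds (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%). The sketch below evaluates thresholds of that shape against made-up sample numbers, purely to show how a quantity-based and a percentage-based signal differ:

package main

import "fmt"

// threshold mirrors the two forms seen in the logged HardEvictionThresholds:
// an absolute quantity (e.g. 100Mi of memory) or a fraction of capacity.
type threshold struct {
	signal   string
	quantity int64   // absolute bytes; 0 if percentage-based
	percent  float64 // fraction of capacity; 0 if quantity-based
}

func breached(t threshold, available, capacity int64) bool {
	if t.quantity > 0 {
		return available < t.quantity
	}
	return float64(available) < t.percent*float64(capacity)
}

func main() {
	thresholds := []threshold{
		{signal: "memory.available", quantity: 100 << 20}, // 100Mi
		{signal: "nodefs.available", percent: 0.10},
		{signal: "nodefs.inodesFree", percent: 0.05},
		{signal: "imagefs.available", percent: 0.15},
		{signal: "imagefs.inodesFree", percent: 0.05},
	}
	// hypothetical node: 8GiB RAM with 600MiB free, 40GiB disk with 3GiB free
	samples := map[string][2]int64{
		"memory.available": {600 << 20, 8 << 30},
		"nodefs.available": {3 << 30, 40 << 30},
	}
	for _, t := range thresholds {
		if s, ok := samples[t.signal]; ok {
			fmt.Printf("%-18s breached=%v\n", t.signal, breached(t, s[0], s[1]))
		}
	}
}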
Aug 5 22:30:50.700795 kubelet[2551]: E0805 22:30:50.700737 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:30:50.988785 kubelet[2551]: E0805 22:30:50.988641 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:30:53.055670 kubelet[2551]: E0805 22:30:53.055631 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:30:53.992416 kubelet[2551]: E0805 22:30:53.992364 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:30:54.358659 kubelet[2551]: E0805 22:30:54.358596 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:30:54.993366 kubelet[2551]: E0805 22:30:54.993303 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:30:54.993571 kubelet[2551]: E0805 22:30:54.993537 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:30:57.939123 kubelet[2551]: I0805 22:30:57.939071 2551 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 5 22:30:57.939701 containerd[1448]: time="2024-08-05T22:30:57.939421374Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 5 22:30:57.940083 kubelet[2551]: I0805 22:30:57.939883 2551 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 5 22:30:58.790356 kubelet[2551]: I0805 22:30:58.790270 2551 topology_manager.go:215] "Topology Admit Handler" podUID="6927b3e0-554a-44a8-a3bf-e0b49da768de" podNamespace="kube-system" podName="cilium-xpwhx" Aug 5 22:30:58.791289 kubelet[2551]: I0805 22:30:58.791245 2551 topology_manager.go:215] "Topology Admit Handler" podUID="1deb0319-f80c-41b9-b9fe-06308fcfba04" podNamespace="kube-system" podName="kube-proxy-grjdw" Aug 5 22:30:58.804151 systemd[1]: Created slice kubepods-burstable-pod6927b3e0_554a_44a8_a3bf_e0b49da768de.slice - libcontainer container kubepods-burstable-pod6927b3e0_554a_44a8_a3bf_e0b49da768de.slice. Aug 5 22:30:58.813420 systemd[1]: Created slice kubepods-besteffort-pod1deb0319_f80c_41b9_b9fe_06308fcfba04.slice - libcontainer container kubepods-besteffort-pod1deb0319_f80c_41b9_b9fe_06308fcfba04.slice. 
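The kubelet has just pushed its pod CIDR, 192.168.0.0/24, to the container runtime, and the first workload pods (cilium-xpwhx, kube-proxy-grjdw) are being admitted. A small sketch of what that range covers, using only the value from the log:

package main

import (
	"fmt"
	"net"
)

// Show what the node's pod CIDR from the log (192.168.0.0/24) covers.
func main() {
	_, cidr, err := net.ParseCIDR("192.168.0.0/24")
	if err != nil {
		panic(err)
	}
	ones, bits := cidr.Mask.Size()
	fmt.Printf("pod CIDR %s: %d host bits (%d addresses)\n",
		cidr, bits-ones, 1<<(bits-ones))
	for _, ip := range []string{"192.168.0.17", "10.244.1.5"} {
		fmt.Printf("%-14s in range: %v\n", ip, cidr.Contains(net.ParseIP(ip)))
	}
}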
Aug 5 22:30:58.927812 kubelet[2551]: I0805 22:30:58.927727 2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6927b3e0-554a-44a8-a3bf-e0b49da768de-cni-path\") pod \"cilium-xpwhx\" (UID: \"6927b3e0-554a-44a8-a3bf-e0b49da768de\") " pod="kube-system/cilium-xpwhx" Aug 5 22:30:58.927812 kubelet[2551]: I0805 22:30:58.927793 2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6927b3e0-554a-44a8-a3bf-e0b49da768de-host-proc-sys-kernel\") pod \"cilium-xpwhx\" (UID: \"6927b3e0-554a-44a8-a3bf-e0b49da768de\") " pod="kube-system/cilium-xpwhx" Aug 5 22:30:58.928022 kubelet[2551]: I0805 22:30:58.927833 2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6927b3e0-554a-44a8-a3bf-e0b49da768de-lib-modules\") pod \"cilium-xpwhx\" (UID: \"6927b3e0-554a-44a8-a3bf-e0b49da768de\") " pod="kube-system/cilium-xpwhx" Aug 5 22:30:58.928022 kubelet[2551]: I0805 22:30:58.927882 2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6927b3e0-554a-44a8-a3bf-e0b49da768de-hostproc\") pod \"cilium-xpwhx\" (UID: \"6927b3e0-554a-44a8-a3bf-e0b49da768de\") " pod="kube-system/cilium-xpwhx" Aug 5 22:30:58.928022 kubelet[2551]: I0805 22:30:58.927903 2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6927b3e0-554a-44a8-a3bf-e0b49da768de-cilium-cgroup\") pod \"cilium-xpwhx\" (UID: \"6927b3e0-554a-44a8-a3bf-e0b49da768de\") " pod="kube-system/cilium-xpwhx" Aug 5 22:30:58.928022 kubelet[2551]: I0805 22:30:58.927976 2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6927b3e0-554a-44a8-a3bf-e0b49da768de-etc-cni-netd\") pod \"cilium-xpwhx\" (UID: \"6927b3e0-554a-44a8-a3bf-e0b49da768de\") " pod="kube-system/cilium-xpwhx" Aug 5 22:30:58.928128 kubelet[2551]: I0805 22:30:58.928042 2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6927b3e0-554a-44a8-a3bf-e0b49da768de-clustermesh-secrets\") pod \"cilium-xpwhx\" (UID: \"6927b3e0-554a-44a8-a3bf-e0b49da768de\") " pod="kube-system/cilium-xpwhx" Aug 5 22:30:58.928128 kubelet[2551]: I0805 22:30:58.928094 2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6927b3e0-554a-44a8-a3bf-e0b49da768de-cilium-run\") pod \"cilium-xpwhx\" (UID: \"6927b3e0-554a-44a8-a3bf-e0b49da768de\") " pod="kube-system/cilium-xpwhx" Aug 5 22:30:58.928128 kubelet[2551]: I0805 22:30:58.928117 2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6927b3e0-554a-44a8-a3bf-e0b49da768de-cilium-config-path\") pod \"cilium-xpwhx\" (UID: \"6927b3e0-554a-44a8-a3bf-e0b49da768de\") " pod="kube-system/cilium-xpwhx" Aug 5 22:30:58.928204 kubelet[2551]: I0805 22:30:58.928145 2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/1deb0319-f80c-41b9-b9fe-06308fcfba04-xtables-lock\") pod \"kube-proxy-grjdw\" (UID: \"1deb0319-f80c-41b9-b9fe-06308fcfba04\") " pod="kube-system/kube-proxy-grjdw" Aug 5 22:30:58.928204 kubelet[2551]: I0805 22:30:58.928167 2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6927b3e0-554a-44a8-a3bf-e0b49da768de-host-proc-sys-net\") pod \"cilium-xpwhx\" (UID: \"6927b3e0-554a-44a8-a3bf-e0b49da768de\") " pod="kube-system/cilium-xpwhx" Aug 5 22:30:58.928204 kubelet[2551]: I0805 22:30:58.928192 2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6927b3e0-554a-44a8-a3bf-e0b49da768de-bpf-maps\") pod \"cilium-xpwhx\" (UID: \"6927b3e0-554a-44a8-a3bf-e0b49da768de\") " pod="kube-system/cilium-xpwhx" Aug 5 22:30:58.928303 kubelet[2551]: I0805 22:30:58.928214 2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1deb0319-f80c-41b9-b9fe-06308fcfba04-kube-proxy\") pod \"kube-proxy-grjdw\" (UID: \"1deb0319-f80c-41b9-b9fe-06308fcfba04\") " pod="kube-system/kube-proxy-grjdw" Aug 5 22:30:58.928303 kubelet[2551]: I0805 22:30:58.928252 2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd622\" (UniqueName: \"kubernetes.io/projected/1deb0319-f80c-41b9-b9fe-06308fcfba04-kube-api-access-pd622\") pod \"kube-proxy-grjdw\" (UID: \"1deb0319-f80c-41b9-b9fe-06308fcfba04\") " pod="kube-system/kube-proxy-grjdw" Aug 5 22:30:58.928303 kubelet[2551]: I0805 22:30:58.928291 2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6927b3e0-554a-44a8-a3bf-e0b49da768de-xtables-lock\") pod \"cilium-xpwhx\" (UID: \"6927b3e0-554a-44a8-a3bf-e0b49da768de\") " pod="kube-system/cilium-xpwhx" Aug 5 22:30:58.928401 kubelet[2551]: I0805 22:30:58.928317 2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6927b3e0-554a-44a8-a3bf-e0b49da768de-hubble-tls\") pod \"cilium-xpwhx\" (UID: \"6927b3e0-554a-44a8-a3bf-e0b49da768de\") " pod="kube-system/cilium-xpwhx" Aug 5 22:30:58.928401 kubelet[2551]: I0805 22:30:58.928353 2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mjsp\" (UniqueName: \"kubernetes.io/projected/6927b3e0-554a-44a8-a3bf-e0b49da768de-kube-api-access-2mjsp\") pod \"cilium-xpwhx\" (UID: \"6927b3e0-554a-44a8-a3bf-e0b49da768de\") " pod="kube-system/cilium-xpwhx" Aug 5 22:30:58.928401 kubelet[2551]: I0805 22:30:58.928375 2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1deb0319-f80c-41b9-b9fe-06308fcfba04-lib-modules\") pod \"kube-proxy-grjdw\" (UID: \"1deb0319-f80c-41b9-b9fe-06308fcfba04\") " pod="kube-system/kube-proxy-grjdw" Aug 5 22:30:59.002187 kubelet[2551]: I0805 22:30:59.001889 2551 topology_manager.go:215] "Topology Admit Handler" podUID="9b41f58c-af9d-4366-a224-8d9da879b256" podNamespace="kube-system" podName="cilium-operator-599987898-2clm7" Aug 5 22:30:59.010493 systemd[1]: Created slice kubepods-besteffort-pod9b41f58c_af9d_4366_a224_8d9da879b256.slice - 
libcontainer container kubepods-besteffort-pod9b41f58c_af9d_4366_a224_8d9da879b256.slice. Aug 5 22:30:59.109087 kubelet[2551]: E0805 22:30:59.109024 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:30:59.109868 containerd[1448]: time="2024-08-05T22:30:59.109795533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xpwhx,Uid:6927b3e0-554a-44a8-a3bf-e0b49da768de,Namespace:kube-system,Attempt:0,}" Aug 5 22:30:59.126887 kubelet[2551]: E0805 22:30:59.126831 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:30:59.127493 containerd[1448]: time="2024-08-05T22:30:59.127408303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-grjdw,Uid:1deb0319-f80c-41b9-b9fe-06308fcfba04,Namespace:kube-system,Attempt:0,}" Aug 5 22:30:59.130226 kubelet[2551]: I0805 22:30:59.130169 2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9b41f58c-af9d-4366-a224-8d9da879b256-cilium-config-path\") pod \"cilium-operator-599987898-2clm7\" (UID: \"9b41f58c-af9d-4366-a224-8d9da879b256\") " pod="kube-system/cilium-operator-599987898-2clm7" Aug 5 22:30:59.130226 kubelet[2551]: I0805 22:30:59.130212 2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf9kq\" (UniqueName: \"kubernetes.io/projected/9b41f58c-af9d-4366-a224-8d9da879b256-kube-api-access-gf9kq\") pod \"cilium-operator-599987898-2clm7\" (UID: \"9b41f58c-af9d-4366-a224-8d9da879b256\") " pod="kube-system/cilium-operator-599987898-2clm7" Aug 5 22:30:59.313109 kubelet[2551]: E0805 22:30:59.313041 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:30:59.313637 containerd[1448]: time="2024-08-05T22:30:59.313592484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-2clm7,Uid:9b41f58c-af9d-4366-a224-8d9da879b256,Namespace:kube-system,Attempt:0,}" Aug 5 22:30:59.481538 containerd[1448]: time="2024-08-05T22:30:59.481131734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:30:59.481538 containerd[1448]: time="2024-08-05T22:30:59.481259105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:30:59.481538 containerd[1448]: time="2024-08-05T22:30:59.481305062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:30:59.481538 containerd[1448]: time="2024-08-05T22:30:59.481326983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:30:59.490125 containerd[1448]: time="2024-08-05T22:30:59.489914184Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:30:59.490125 containerd[1448]: time="2024-08-05T22:30:59.489978876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:30:59.490125 containerd[1448]: time="2024-08-05T22:30:59.490001188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:30:59.490125 containerd[1448]: time="2024-08-05T22:30:59.490017049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:30:59.493241 containerd[1448]: time="2024-08-05T22:30:59.492910196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:30:59.493825 containerd[1448]: time="2024-08-05T22:30:59.493741889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:30:59.493825 containerd[1448]: time="2024-08-05T22:30:59.493799988Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:30:59.493825 containerd[1448]: time="2024-08-05T22:30:59.493818053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:30:59.512594 systemd[1]: Started cri-containerd-036bf759f345d109c5ea7c3f94bcbcdcb2cbe6cdacead45df90a985d543bc3dd.scope - libcontainer container 036bf759f345d109c5ea7c3f94bcbcdcb2cbe6cdacead45df90a985d543bc3dd. Aug 5 22:30:59.517316 systemd[1]: Started cri-containerd-71cef7528254360faca2a6dbac1a1da389f48c11eeda18929cd6a67086e42005.scope - libcontainer container 71cef7528254360faca2a6dbac1a1da389f48c11eeda18929cd6a67086e42005. Aug 5 22:30:59.519825 systemd[1]: Started cri-containerd-930485e69a691c858f51c9bf26e0e75c59c3d8f0a9cf27070275c7bd0d2f8bf2.scope - libcontainer container 930485e69a691c858f51c9bf26e0e75c59c3d8f0a9cf27070275c7bd0d2f8bf2. 
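The reconciler_common entries a few lines above enumerate every volume kubelet attaches for cilium-xpwhx, kube-proxy-grjdw and cilium-operator-599987898-2clm7: hostPath mounts such as cni-path and lib-modules, the clustermesh-secrets secret, the cilium-config-path and kube-proxy configmaps, and the projected kube-api-access tokens. The rough Python sketch below tallies those records per pod from journal text in the escaped-quote format shown here; the regex targets only the fields visible above and is not a general journald parser.

    import re
    from collections import defaultdict

    # Group the "VerifyControllerAttachedVolume started for volume ..." records above
    # by pod, keeping the volume name and its plugin (host-path, secret, configmap,
    # projected). Assumes the escaped-quote formatting seen in this journal.
    VOLUME_RE = re.compile(
        r'volume \\"(?P<volume>[^"\\]+)\\" '
        r'\(UniqueName: \\"(?P<unique>[^"\\]+)\\"\).*?pod="(?P<pod>[^"]+)"'
    )

    def volumes_by_pod(journal_text: str) -> dict[str, list[tuple[str, str]]]:
        grouped: dict[str, list[tuple[str, str]]] = defaultdict(list)
        for m in VOLUME_RE.finditer(journal_text):
            plugin = m.group("unique").split("/")[1]  # kubernetes.io/<plugin>/<id>-<name>
            grouped[m.group("pod")].append((m.group("volume"), plugin))
        return dict(grouped)

    # volumes_by_pod(text)["kube-system/cilium-xpwhx"] would include, e.g.,
    # ("cni-path", "host-path"), ("clustermesh-secrets", "secret"), ("cilium-config-path", "configmap").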
Aug 5 22:30:59.548756 containerd[1448]: time="2024-08-05T22:30:59.548704645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xpwhx,Uid:6927b3e0-554a-44a8-a3bf-e0b49da768de,Namespace:kube-system,Attempt:0,} returns sandbox id \"036bf759f345d109c5ea7c3f94bcbcdcb2cbe6cdacead45df90a985d543bc3dd\"" Aug 5 22:30:59.549621 kubelet[2551]: E0805 22:30:59.549596 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:30:59.551462 containerd[1448]: time="2024-08-05T22:30:59.551404889Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 5 22:30:59.556737 containerd[1448]: time="2024-08-05T22:30:59.556697903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-grjdw,Uid:1deb0319-f80c-41b9-b9fe-06308fcfba04,Namespace:kube-system,Attempt:0,} returns sandbox id \"71cef7528254360faca2a6dbac1a1da389f48c11eeda18929cd6a67086e42005\"" Aug 5 22:30:59.559567 kubelet[2551]: E0805 22:30:59.559431 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:30:59.562191 containerd[1448]: time="2024-08-05T22:30:59.562158935Z" level=info msg="CreateContainer within sandbox \"71cef7528254360faca2a6dbac1a1da389f48c11eeda18929cd6a67086e42005\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 5 22:30:59.574598 containerd[1448]: time="2024-08-05T22:30:59.574548683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-2clm7,Uid:9b41f58c-af9d-4366-a224-8d9da879b256,Namespace:kube-system,Attempt:0,} returns sandbox id \"930485e69a691c858f51c9bf26e0e75c59c3d8f0a9cf27070275c7bd0d2f8bf2\"" Aug 5 22:30:59.575221 kubelet[2551]: E0805 22:30:59.575188 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:30:59.616400 containerd[1448]: time="2024-08-05T22:30:59.616326699Z" level=info msg="CreateContainer within sandbox \"71cef7528254360faca2a6dbac1a1da389f48c11eeda18929cd6a67086e42005\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"335fbe9260407a95da4625b735b61b58a4cbc570176f067a9697972bc40720a0\"" Aug 5 22:30:59.617211 containerd[1448]: time="2024-08-05T22:30:59.617178930Z" level=info msg="StartContainer for \"335fbe9260407a95da4625b735b61b58a4cbc570176f067a9697972bc40720a0\"" Aug 5 22:30:59.653680 systemd[1]: Started cri-containerd-335fbe9260407a95da4625b735b61b58a4cbc570176f067a9697972bc40720a0.scope - libcontainer container 335fbe9260407a95da4625b735b61b58a4cbc570176f067a9697972bc40720a0. Aug 5 22:30:59.807118 containerd[1448]: time="2024-08-05T22:30:59.806967744Z" level=info msg="StartContainer for \"335fbe9260407a95da4625b735b61b58a4cbc570176f067a9697972bc40720a0\" returns successfully" Aug 5 22:31:00.003977 kubelet[2551]: E0805 22:31:00.003946 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:31:05.586193 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3217636705.mount: Deactivated successfully. 
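Each RunPodSandbox call above returns a sandbox id, and that id is what the later CreateContainer and cri-containerd scope lines refer back to. The small sketch below recovers the pod-to-sandbox mapping from containerd messages in the exact format shown here; it is a convenience for reading this journal, not a CRI client.

    import re

    # Extract the pod -> sandbox-id mapping from the containerd
    # "RunPodSandbox ... returns sandbox id" records above.
    SANDBOX_RE = re.compile(
        r'RunPodSandbox for &PodSandboxMetadata\{Name:(?P<pod>[^,]+),Uid:[^,]+,'
        r'Namespace:(?P<ns>[^,]+),Attempt:\d+,\} returns sandbox id \\"(?P<sandbox>[0-9a-f]+)\\"'
    )

    def sandboxes(journal_text: str) -> dict[str, str]:
        return {
            f'{m.group("ns")}/{m.group("pod")}': m.group("sandbox")
            for m in SANDBOX_RE.finditer(journal_text)
        }

    # Expected from the lines above, e.g.:
    #   kube-system/cilium-xpwhx     -> 036bf759f345d1...
    #   kube-system/kube-proxy-grjdw -> 71cef752825436...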
Aug 5 22:31:10.157183 containerd[1448]: time="2024-08-05T22:31:10.157087105Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:31:10.179918 containerd[1448]: time="2024-08-05T22:31:10.179801970Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735315" Aug 5 22:31:10.204373 containerd[1448]: time="2024-08-05T22:31:10.204296301Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:31:10.206833 containerd[1448]: time="2024-08-05T22:31:10.206781078Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.655332708s" Aug 5 22:31:10.206833 containerd[1448]: time="2024-08-05T22:31:10.206826484Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Aug 5 22:31:10.209887 containerd[1448]: time="2024-08-05T22:31:10.209683273Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 5 22:31:10.209887 containerd[1448]: time="2024-08-05T22:31:10.209723709Z" level=info msg="CreateContainer within sandbox \"036bf759f345d109c5ea7c3f94bcbcdcb2cbe6cdacead45df90a985d543bc3dd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 5 22:31:10.239177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount539848634.mount: Deactivated successfully. Aug 5 22:31:10.243667 containerd[1448]: time="2024-08-05T22:31:10.243585520Z" level=info msg="CreateContainer within sandbox \"036bf759f345d109c5ea7c3f94bcbcdcb2cbe6cdacead45df90a985d543bc3dd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"81fe8f4d9f38a4be82735b8053ee8628ebc45b9bd5bd5a180c4722ce594998af\"" Aug 5 22:31:10.244288 containerd[1448]: time="2024-08-05T22:31:10.244242118Z" level=info msg="StartContainer for \"81fe8f4d9f38a4be82735b8053ee8628ebc45b9bd5bd5a180c4722ce594998af\"" Aug 5 22:31:10.282672 systemd[1]: Started cri-containerd-81fe8f4d9f38a4be82735b8053ee8628ebc45b9bd5bd5a180c4722ce594998af.scope - libcontainer container 81fe8f4d9f38a4be82735b8053ee8628ebc45b9bd5bd5a180c4722ce594998af. Aug 5 22:31:10.315338 containerd[1448]: time="2024-08-05T22:31:10.315282928Z" level=info msg="StartContainer for \"81fe8f4d9f38a4be82735b8053ee8628ebc45b9bd5bd5a180c4722ce594998af\" returns successfully" Aug 5 22:31:10.331676 systemd[1]: cri-containerd-81fe8f4d9f38a4be82735b8053ee8628ebc45b9bd5bd5a180c4722ce594998af.scope: Deactivated successfully. 
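The pull that just completed reports 166735315 bytes read over 10.655332708s for the cilium v1.12.5 image. A quick back-of-the-envelope check of the implied transfer rate, using nothing but the logged values:

    # Throughput implied by the cilium image pull logged above.
    bytes_read   = 166_735_315      # "bytes read=166735315"
    pull_seconds = 10.655332708     # "... in 10.655332708s"

    mib_per_s = bytes_read / pull_seconds / (1024 * 1024)
    print(f"~{mib_per_s:.1f} MiB/s for the cilium v1.12.5 pull")  # ~14.9 MiB/s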
Aug 5 22:31:11.044740 kubelet[2551]: E0805 22:31:11.044703 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:31:11.236108 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81fe8f4d9f38a4be82735b8053ee8628ebc45b9bd5bd5a180c4722ce594998af-rootfs.mount: Deactivated successfully. Aug 5 22:31:11.266131 kubelet[2551]: I0805 22:31:11.265832 2551 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-grjdw" podStartSLOduration=13.26581504 podStartE2EDuration="13.26581504s" podCreationTimestamp="2024-08-05 22:30:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:31:00.012966331 +0000 UTC m=+14.419428071" watchObservedRunningTime="2024-08-05 22:31:11.26581504 +0000 UTC m=+25.672276780" Aug 5 22:31:11.707155 containerd[1448]: time="2024-08-05T22:31:11.707078356Z" level=info msg="shim disconnected" id=81fe8f4d9f38a4be82735b8053ee8628ebc45b9bd5bd5a180c4722ce594998af namespace=k8s.io Aug 5 22:31:11.707155 containerd[1448]: time="2024-08-05T22:31:11.707147466Z" level=warning msg="cleaning up after shim disconnected" id=81fe8f4d9f38a4be82735b8053ee8628ebc45b9bd5bd5a180c4722ce594998af namespace=k8s.io Aug 5 22:31:11.707155 containerd[1448]: time="2024-08-05T22:31:11.707158928Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:31:12.047398 kubelet[2551]: E0805 22:31:12.047233 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:31:12.049254 containerd[1448]: time="2024-08-05T22:31:12.049196105Z" level=info msg="CreateContainer within sandbox \"036bf759f345d109c5ea7c3f94bcbcdcb2cbe6cdacead45df90a985d543bc3dd\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 5 22:31:12.102823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount441008303.mount: Deactivated successfully. Aug 5 22:31:12.166877 containerd[1448]: time="2024-08-05T22:31:12.166812182Z" level=info msg="CreateContainer within sandbox \"036bf759f345d109c5ea7c3f94bcbcdcb2cbe6cdacead45df90a985d543bc3dd\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1cd0862494ad5502d25a1bd10410c71ff6adf72e474ff388d2e9527d49984bd6\"" Aug 5 22:31:12.167599 containerd[1448]: time="2024-08-05T22:31:12.167551335Z" level=info msg="StartContainer for \"1cd0862494ad5502d25a1bd10410c71ff6adf72e474ff388d2e9527d49984bd6\"" Aug 5 22:31:12.205582 systemd[1]: Started cri-containerd-1cd0862494ad5502d25a1bd10410c71ff6adf72e474ff388d2e9527d49984bd6.scope - libcontainer container 1cd0862494ad5502d25a1bd10410c71ff6adf72e474ff388d2e9527d49984bd6. Aug 5 22:31:12.278849 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 5 22:31:12.279119 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 5 22:31:12.279196 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Aug 5 22:31:12.284805 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 5 22:31:12.285013 systemd[1]: cri-containerd-1cd0862494ad5502d25a1bd10410c71ff6adf72e474ff388d2e9527d49984bd6.scope: Deactivated successfully. 
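The kube-proxy-grjdw startup record above reports podStartSLOduration=13.26581504s. Reading just the logged fields, that is the watchObservedRunningTime minus the podCreationTimestamp; both pull timestamps are the zero time, so nothing is subtracted for image pulls. The check below recomputes that number and is an observation about these fields, not a description of the kubelet tracker's internals.

    from datetime import datetime, timezone

    # podStartSLOduration for kube-proxy-grjdw, recomputed from the logged timestamps.
    # Values are truncated to microseconds, so this prints 13.265815 against the
    # logged 13.26581504s.
    created  = datetime(2024, 8, 5, 22, 30, 58, 0,      tzinfo=timezone.utc)  # podCreationTimestamp
    observed = datetime(2024, 8, 5, 22, 31, 11, 265815, tzinfo=timezone.utc)  # watchObservedRunningTime
    print((observed - created).total_seconds())  # 13.265815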
Aug 5 22:31:12.289657 containerd[1448]: time="2024-08-05T22:31:12.289594991Z" level=info msg="StartContainer for \"1cd0862494ad5502d25a1bd10410c71ff6adf72e474ff388d2e9527d49984bd6\" returns successfully" Aug 5 22:31:12.307761 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1cd0862494ad5502d25a1bd10410c71ff6adf72e474ff388d2e9527d49984bd6-rootfs.mount: Deactivated successfully. Aug 5 22:31:12.319075 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 5 22:31:12.322912 containerd[1448]: time="2024-08-05T22:31:12.322866754Z" level=info msg="shim disconnected" id=1cd0862494ad5502d25a1bd10410c71ff6adf72e474ff388d2e9527d49984bd6 namespace=k8s.io Aug 5 22:31:12.323027 containerd[1448]: time="2024-08-05T22:31:12.322913081Z" level=warning msg="cleaning up after shim disconnected" id=1cd0862494ad5502d25a1bd10410c71ff6adf72e474ff388d2e9527d49984bd6 namespace=k8s.io Aug 5 22:31:12.323027 containerd[1448]: time="2024-08-05T22:31:12.322921988Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:31:13.051258 kubelet[2551]: E0805 22:31:13.051217 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:31:13.053265 containerd[1448]: time="2024-08-05T22:31:13.053220729Z" level=info msg="CreateContainer within sandbox \"036bf759f345d109c5ea7c3f94bcbcdcb2cbe6cdacead45df90a985d543bc3dd\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 5 22:31:13.676653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2836153069.mount: Deactivated successfully. Aug 5 22:31:14.190817 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2307123674.mount: Deactivated successfully. Aug 5 22:31:15.706822 containerd[1448]: time="2024-08-05T22:31:15.706691613Z" level=info msg="CreateContainer within sandbox \"036bf759f345d109c5ea7c3f94bcbcdcb2cbe6cdacead45df90a985d543bc3dd\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"46eb2b7e5d3a600a784b002288af7855fa7f6b623ead6d01678ef2dedd2216ac\"" Aug 5 22:31:15.708517 containerd[1448]: time="2024-08-05T22:31:15.707727445Z" level=info msg="StartContainer for \"46eb2b7e5d3a600a784b002288af7855fa7f6b623ead6d01678ef2dedd2216ac\"" Aug 5 22:31:15.752647 systemd[1]: Started cri-containerd-46eb2b7e5d3a600a784b002288af7855fa7f6b623ead6d01678ef2dedd2216ac.scope - libcontainer container 46eb2b7e5d3a600a784b002288af7855fa7f6b623ead6d01678ef2dedd2216ac. Aug 5 22:31:15.810613 systemd[1]: cri-containerd-46eb2b7e5d3a600a784b002288af7855fa7f6b623ead6d01678ef2dedd2216ac.scope: Deactivated successfully. Aug 5 22:31:16.175660 containerd[1448]: time="2024-08-05T22:31:16.175518147Z" level=info msg="StartContainer for \"46eb2b7e5d3a600a784b002288af7855fa7f6b623ead6d01678ef2dedd2216ac\" returns successfully" Aug 5 22:31:16.185971 kubelet[2551]: E0805 22:31:16.185738 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:31:16.200299 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46eb2b7e5d3a600a784b002288af7855fa7f6b623ead6d01678ef2dedd2216ac-rootfs.mount: Deactivated successfully. Aug 5 22:31:16.586793 systemd[1]: Started sshd@7-10.0.0.102:22-10.0.0.1:58720.service - OpenSSH per-connection server daemon (10.0.0.1:58720). 
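Every container in this journal is bracketed by the same pair of systemd units: a cri-containerd-<id>.scope for the container itself and a run-containerd-...-<id>-rootfs.mount that is cleaned up once the shim exits, as seen for the mount-cgroup and apply-sysctl-overwrites containers above. The tiny sketch below derives those unit names from a container id; it mirrors the naming visible in this log rather than any documented API.

    # Unit names systemd reports for one CRI container in this journal, derived from
    # the 64-hex container id (an observation of the naming above, not an API).
    def cri_units(container_id: str) -> tuple[str, str]:
        scope = f"cri-containerd-{container_id}.scope"
        rootfs_mount = (
            "run-containerd-io.containerd.runtime.v2.task-k8s.io-"
            f"{container_id}-rootfs.mount"
        )
        return scope, rootfs_mount

    scope, mount = cri_units("1cd0862494ad5502d25a1bd10410c71ff6adf72e474ff388d2e9527d49984bd6")
    print(scope)  # cri-containerd-1cd0862494ad5502....scope
    print(mount)  # run-containerd-io.containerd.runtime.v2.task-k8s.io-1cd0862494ad5502...-rootfs.mount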
Aug 5 22:31:16.654788 sshd[3116]: Accepted publickey for core from 10.0.0.1 port 58720 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:31:16.656736 sshd[3116]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:31:16.661630 systemd-logind[1431]: New session 8 of user core. Aug 5 22:31:16.671580 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 5 22:31:16.791874 containerd[1448]: time="2024-08-05T22:31:16.791790557Z" level=info msg="shim disconnected" id=46eb2b7e5d3a600a784b002288af7855fa7f6b623ead6d01678ef2dedd2216ac namespace=k8s.io Aug 5 22:31:16.791874 containerd[1448]: time="2024-08-05T22:31:16.791854107Z" level=warning msg="cleaning up after shim disconnected" id=46eb2b7e5d3a600a784b002288af7855fa7f6b623ead6d01678ef2dedd2216ac namespace=k8s.io Aug 5 22:31:16.791874 containerd[1448]: time="2024-08-05T22:31:16.791863204Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:31:16.936567 sshd[3116]: pam_unix(sshd:session): session closed for user core Aug 5 22:31:16.944983 systemd[1]: sshd@7-10.0.0.102:22-10.0.0.1:58720.service: Deactivated successfully. Aug 5 22:31:16.950794 systemd[1]: session-8.scope: Deactivated successfully. Aug 5 22:31:16.954657 systemd-logind[1431]: Session 8 logged out. Waiting for processes to exit. Aug 5 22:31:16.957405 systemd-logind[1431]: Removed session 8. Aug 5 22:31:17.190219 kubelet[2551]: E0805 22:31:17.190091 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:31:17.192942 containerd[1448]: time="2024-08-05T22:31:17.192793862Z" level=info msg="CreateContainer within sandbox \"036bf759f345d109c5ea7c3f94bcbcdcb2cbe6cdacead45df90a985d543bc3dd\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 5 22:31:17.248383 containerd[1448]: time="2024-08-05T22:31:17.248320982Z" level=info msg="CreateContainer within sandbox \"036bf759f345d109c5ea7c3f94bcbcdcb2cbe6cdacead45df90a985d543bc3dd\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9929b5ceba829bd9dad0dbb5e2e4517eb9540ddf15cb00a862311f98bec963ab\"" Aug 5 22:31:17.248994 containerd[1448]: time="2024-08-05T22:31:17.248958253Z" level=info msg="StartContainer for \"9929b5ceba829bd9dad0dbb5e2e4517eb9540ddf15cb00a862311f98bec963ab\"" Aug 5 22:31:17.286902 containerd[1448]: time="2024-08-05T22:31:17.286837491Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:31:17.288665 containerd[1448]: time="2024-08-05T22:31:17.288590985Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907209" Aug 5 22:31:17.289772 systemd[1]: Started cri-containerd-9929b5ceba829bd9dad0dbb5e2e4517eb9540ddf15cb00a862311f98bec963ab.scope - libcontainer container 9929b5ceba829bd9dad0dbb5e2e4517eb9540ddf15cb00a862311f98bec963ab. 
Aug 5 22:31:17.290158 containerd[1448]: time="2024-08-05T22:31:17.289763065Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:31:17.291327 containerd[1448]: time="2024-08-05T22:31:17.291268402Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 7.081539023s" Aug 5 22:31:17.291392 containerd[1448]: time="2024-08-05T22:31:17.291322734Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Aug 5 22:31:17.296195 containerd[1448]: time="2024-08-05T22:31:17.296150674Z" level=info msg="CreateContainer within sandbox \"930485e69a691c858f51c9bf26e0e75c59c3d8f0a9cf27070275c7bd0d2f8bf2\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Aug 5 22:31:17.317338 systemd[1]: cri-containerd-9929b5ceba829bd9dad0dbb5e2e4517eb9540ddf15cb00a862311f98bec963ab.scope: Deactivated successfully. Aug 5 22:31:18.214264 containerd[1448]: time="2024-08-05T22:31:18.214210878Z" level=info msg="StartContainer for \"9929b5ceba829bd9dad0dbb5e2e4517eb9540ddf15cb00a862311f98bec963ab\" returns successfully" Aug 5 22:31:18.236767 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9929b5ceba829bd9dad0dbb5e2e4517eb9540ddf15cb00a862311f98bec963ab-rootfs.mount: Deactivated successfully. Aug 5 22:31:18.285877 containerd[1448]: time="2024-08-05T22:31:18.285794884Z" level=info msg="shim disconnected" id=9929b5ceba829bd9dad0dbb5e2e4517eb9540ddf15cb00a862311f98bec963ab namespace=k8s.io Aug 5 22:31:18.285877 containerd[1448]: time="2024-08-05T22:31:18.285863292Z" level=warning msg="cleaning up after shim disconnected" id=9929b5ceba829bd9dad0dbb5e2e4517eb9540ddf15cb00a862311f98bec963ab namespace=k8s.io Aug 5 22:31:18.285877 containerd[1448]: time="2024-08-05T22:31:18.285875986Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:31:18.296138 containerd[1448]: time="2024-08-05T22:31:18.296083785Z" level=info msg="CreateContainer within sandbox \"930485e69a691c858f51c9bf26e0e75c59c3d8f0a9cf27070275c7bd0d2f8bf2\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d441f9c3ee15d58006ead34089acbb9d2920987a86075de2f5b7bd23ff15b404\"" Aug 5 22:31:18.297433 containerd[1448]: time="2024-08-05T22:31:18.297404413Z" level=info msg="StartContainer for \"d441f9c3ee15d58006ead34089acbb9d2920987a86075de2f5b7bd23ff15b404\"" Aug 5 22:31:18.334755 systemd[1]: Started cri-containerd-d441f9c3ee15d58006ead34089acbb9d2920987a86075de2f5b7bd23ff15b404.scope - libcontainer container d441f9c3ee15d58006ead34089acbb9d2920987a86075de2f5b7bd23ff15b404. 
Aug 5 22:31:18.366111 containerd[1448]: time="2024-08-05T22:31:18.365993874Z" level=info msg="StartContainer for \"d441f9c3ee15d58006ead34089acbb9d2920987a86075de2f5b7bd23ff15b404\" returns successfully" Aug 5 22:31:19.220316 kubelet[2551]: E0805 22:31:19.220276 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:31:19.222344 kubelet[2551]: E0805 22:31:19.222316 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:31:19.224384 containerd[1448]: time="2024-08-05T22:31:19.224356189Z" level=info msg="CreateContainer within sandbox \"036bf759f345d109c5ea7c3f94bcbcdcb2cbe6cdacead45df90a985d543bc3dd\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 5 22:31:19.451654 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1498992997.mount: Deactivated successfully. Aug 5 22:31:19.574766 containerd[1448]: time="2024-08-05T22:31:19.574637326Z" level=info msg="CreateContainer within sandbox \"036bf759f345d109c5ea7c3f94bcbcdcb2cbe6cdacead45df90a985d543bc3dd\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6d2426c03810d5a097db39c8516333cd72d1df52d35aac67983cb68721d2e8f7\"" Aug 5 22:31:19.575219 containerd[1448]: time="2024-08-05T22:31:19.575190498Z" level=info msg="StartContainer for \"6d2426c03810d5a097db39c8516333cd72d1df52d35aac67983cb68721d2e8f7\"" Aug 5 22:31:19.615048 kubelet[2551]: I0805 22:31:19.614959 2551 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-2clm7" podStartSLOduration=3.898012165 podStartE2EDuration="21.614937022s" podCreationTimestamp="2024-08-05 22:30:58 +0000 UTC" firstStartedPulling="2024-08-05 22:30:59.575963447 +0000 UTC m=+13.982425188" lastFinishedPulling="2024-08-05 22:31:17.292888305 +0000 UTC m=+31.699350045" observedRunningTime="2024-08-05 22:31:19.349336302 +0000 UTC m=+33.755798052" watchObservedRunningTime="2024-08-05 22:31:19.614937022 +0000 UTC m=+34.021398762" Aug 5 22:31:19.634458 systemd[1]: Started cri-containerd-6d2426c03810d5a097db39c8516333cd72d1df52d35aac67983cb68721d2e8f7.scope - libcontainer container 6d2426c03810d5a097db39c8516333cd72d1df52d35aac67983cb68721d2e8f7. Aug 5 22:31:19.781945 containerd[1448]: time="2024-08-05T22:31:19.781889014Z" level=info msg="StartContainer for \"6d2426c03810d5a097db39c8516333cd72d1df52d35aac67983cb68721d2e8f7\" returns successfully" Aug 5 22:31:19.948077 kubelet[2551]: I0805 22:31:19.947882 2551 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Aug 5 22:31:20.029610 kubelet[2551]: I0805 22:31:20.028753 2551 topology_manager.go:215] "Topology Admit Handler" podUID="f3b3d1d4-4c2b-42dd-bf45-3a7bf4897cce" podNamespace="kube-system" podName="coredns-7db6d8ff4d-cnmxb" Aug 5 22:31:20.029610 kubelet[2551]: I0805 22:31:20.028965 2551 topology_manager.go:215] "Topology Admit Handler" podUID="63d645a1-cfd4-47cd-bc50-648fd1707296" podNamespace="kube-system" podName="coredns-7db6d8ff4d-xmqcs" Aug 5 22:31:20.044057 systemd[1]: Created slice kubepods-burstable-podf3b3d1d4_4c2b_42dd_bf45_3a7bf4897cce.slice - libcontainer container kubepods-burstable-podf3b3d1d4_4c2b_42dd_bf45_3a7bf4897cce.slice. 
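Unlike the kube-proxy case earlier, the cilium-operator-599987898-2clm7 startup record above carries real pull timestamps, and its podStartSLOduration=3.898012165s works out to the 21.614937022s end-to-end duration minus the image-pull window (22:30:59.575963447 to 22:31:17.292888305). Recomputing that from the logged fields, again as an observation about these numbers rather than a claim about the tracker's source:

    from datetime import datetime

    # cilium-operator podStartSLOduration, recomputed as E2E minus the pull window.
    # Timestamps are truncated from nanoseconds to microseconds.
    fmt = "%Y-%m-%d %H:%M:%S.%f"
    first_pulling = datetime.strptime("2024-08-05 22:30:59.575963", fmt)
    last_pulled   = datetime.strptime("2024-08-05 22:31:17.292888", fmt)

    e2e_seconds  = 21.614937022
    pull_seconds = (last_pulled - first_pulling).total_seconds()  # ~17.716925
    print(f"{e2e_seconds - pull_seconds:.6f}s")                   # ~3.898012s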
Aug 5 22:31:20.053307 systemd[1]: Created slice kubepods-burstable-pod63d645a1_cfd4_47cd_bc50_648fd1707296.slice - libcontainer container kubepods-burstable-pod63d645a1_cfd4_47cd_bc50_648fd1707296.slice. Aug 5 22:31:20.095621 kubelet[2551]: I0805 22:31:20.095558 2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7227h\" (UniqueName: \"kubernetes.io/projected/f3b3d1d4-4c2b-42dd-bf45-3a7bf4897cce-kube-api-access-7227h\") pod \"coredns-7db6d8ff4d-cnmxb\" (UID: \"f3b3d1d4-4c2b-42dd-bf45-3a7bf4897cce\") " pod="kube-system/coredns-7db6d8ff4d-cnmxb" Aug 5 22:31:20.095621 kubelet[2551]: I0805 22:31:20.095617 2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63d645a1-cfd4-47cd-bc50-648fd1707296-config-volume\") pod \"coredns-7db6d8ff4d-xmqcs\" (UID: \"63d645a1-cfd4-47cd-bc50-648fd1707296\") " pod="kube-system/coredns-7db6d8ff4d-xmqcs" Aug 5 22:31:20.095821 kubelet[2551]: I0805 22:31:20.095646 2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4z4l\" (UniqueName: \"kubernetes.io/projected/63d645a1-cfd4-47cd-bc50-648fd1707296-kube-api-access-h4z4l\") pod \"coredns-7db6d8ff4d-xmqcs\" (UID: \"63d645a1-cfd4-47cd-bc50-648fd1707296\") " pod="kube-system/coredns-7db6d8ff4d-xmqcs" Aug 5 22:31:20.095821 kubelet[2551]: I0805 22:31:20.095692 2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3b3d1d4-4c2b-42dd-bf45-3a7bf4897cce-config-volume\") pod \"coredns-7db6d8ff4d-cnmxb\" (UID: \"f3b3d1d4-4c2b-42dd-bf45-3a7bf4897cce\") " pod="kube-system/coredns-7db6d8ff4d-cnmxb" Aug 5 22:31:20.227288 kubelet[2551]: E0805 22:31:20.227174 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:31:20.227688 kubelet[2551]: E0805 22:31:20.227572 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:31:20.350264 kubelet[2551]: E0805 22:31:20.350197 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:31:20.357113 kubelet[2551]: E0805 22:31:20.357070 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:31:20.406720 containerd[1448]: time="2024-08-05T22:31:20.406657835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xmqcs,Uid:63d645a1-cfd4-47cd-bc50-648fd1707296,Namespace:kube-system,Attempt:0,}" Aug 5 22:31:20.407962 containerd[1448]: time="2024-08-05T22:31:20.407918180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cnmxb,Uid:f3b3d1d4-4c2b-42dd-bf45-3a7bf4897cce,Namespace:kube-system,Attempt:0,}" Aug 5 22:31:21.228403 kubelet[2551]: E0805 22:31:21.228369 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:31:21.952396 systemd[1]: Started 
sshd@8-10.0.0.102:22-10.0.0.1:47644.service - OpenSSH per-connection server daemon (10.0.0.1:47644). Aug 5 22:31:21.994845 sshd[3387]: Accepted publickey for core from 10.0.0.1 port 47644 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:31:21.996478 sshd[3387]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:31:22.001153 systemd-logind[1431]: New session 9 of user core. Aug 5 22:31:22.012593 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 5 22:31:22.181623 sshd[3387]: pam_unix(sshd:session): session closed for user core Aug 5 22:31:22.186700 systemd[1]: sshd@8-10.0.0.102:22-10.0.0.1:47644.service: Deactivated successfully. Aug 5 22:31:22.189338 systemd[1]: session-9.scope: Deactivated successfully. Aug 5 22:31:22.190981 systemd-logind[1431]: Session 9 logged out. Waiting for processes to exit. Aug 5 22:31:22.192018 systemd-logind[1431]: Removed session 9. Aug 5 22:31:22.206789 systemd-networkd[1384]: cilium_host: Link UP Aug 5 22:31:22.206959 systemd-networkd[1384]: cilium_net: Link UP Aug 5 22:31:22.207169 systemd-networkd[1384]: cilium_net: Gained carrier Aug 5 22:31:22.207384 systemd-networkd[1384]: cilium_host: Gained carrier Aug 5 22:31:22.229939 kubelet[2551]: E0805 22:31:22.229680 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:31:22.335967 systemd-networkd[1384]: cilium_vxlan: Link UP Aug 5 22:31:22.335979 systemd-networkd[1384]: cilium_vxlan: Gained carrier Aug 5 22:31:22.394676 systemd-networkd[1384]: cilium_host: Gained IPv6LL Aug 5 22:31:22.591493 kernel: NET: Registered PF_ALG protocol family Aug 5 22:31:22.938600 systemd-networkd[1384]: cilium_net: Gained IPv6LL Aug 5 22:31:23.345780 systemd-networkd[1384]: lxc_health: Link UP Aug 5 22:31:23.356774 systemd-networkd[1384]: lxc_health: Gained carrier Aug 5 22:31:23.506458 systemd-networkd[1384]: lxcd1ac68d25a56: Link UP Aug 5 22:31:23.517474 kernel: eth0: renamed from tmp70c9e Aug 5 22:31:23.529191 systemd-networkd[1384]: lxcd1ac68d25a56: Gained carrier Aug 5 22:31:23.542466 kernel: eth0: renamed from tmpee512 Aug 5 22:31:23.550210 systemd-networkd[1384]: lxc308e7699d7c2: Link UP Aug 5 22:31:23.554718 systemd-networkd[1384]: lxc308e7699d7c2: Gained carrier Aug 5 22:31:23.834670 systemd-networkd[1384]: cilium_vxlan: Gained IPv6LL Aug 5 22:31:24.922640 systemd-networkd[1384]: lxc_health: Gained IPv6LL Aug 5 22:31:25.113828 kubelet[2551]: E0805 22:31:25.113782 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:31:25.126638 kubelet[2551]: I0805 22:31:25.126086 2551 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xpwhx" podStartSLOduration=16.468987889 podStartE2EDuration="27.126059069s" podCreationTimestamp="2024-08-05 22:30:58 +0000 UTC" firstStartedPulling="2024-08-05 22:30:59.550741184 +0000 UTC m=+13.957202924" lastFinishedPulling="2024-08-05 22:31:10.207812364 +0000 UTC m=+24.614274104" observedRunningTime="2024-08-05 22:31:20.240751426 +0000 UTC m=+34.647213186" watchObservedRunningTime="2024-08-05 22:31:25.126059069 +0000 UTC m=+39.532520809" Aug 5 22:31:25.235944 kubelet[2551]: E0805 22:31:25.235813 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:31:25.434634 systemd-networkd[1384]: lxcd1ac68d25a56: Gained IPv6LL Aug 5 22:31:25.562639 systemd-networkd[1384]: lxc308e7699d7c2: Gained IPv6LL Aug 5 22:31:26.236642 kubelet[2551]: E0805 22:31:26.236591 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:31:27.195533 systemd[1]: Started sshd@9-10.0.0.102:22-10.0.0.1:47658.service - OpenSSH per-connection server daemon (10.0.0.1:47658). Aug 5 22:31:27.237319 sshd[3776]: Accepted publickey for core from 10.0.0.1 port 47658 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:31:27.238987 sshd[3776]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:31:27.246688 systemd-logind[1431]: New session 10 of user core. Aug 5 22:31:27.251668 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 5 22:31:27.638172 containerd[1448]: time="2024-08-05T22:31:27.638068830Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:31:27.638172 containerd[1448]: time="2024-08-05T22:31:27.638128703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:31:27.638802 containerd[1448]: time="2024-08-05T22:31:27.638149151Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:31:27.638802 containerd[1448]: time="2024-08-05T22:31:27.638190389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:31:27.647881 containerd[1448]: time="2024-08-05T22:31:27.647771527Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:31:27.647881 containerd[1448]: time="2024-08-05T22:31:27.647834887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:31:27.648127 containerd[1448]: time="2024-08-05T22:31:27.647866175Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:31:27.648127 containerd[1448]: time="2024-08-05T22:31:27.647886884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:31:27.671500 systemd[1]: run-containerd-runc-k8s.io-70c9e4c091ad22cb5e96dfe42ef975a5ec0f2d02a26ad50882e7c62bf8b044c3-runc.WS4NGB.mount: Deactivated successfully. Aug 5 22:31:27.677532 sshd[3776]: pam_unix(sshd:session): session closed for user core Aug 5 22:31:27.684687 systemd[1]: Started cri-containerd-ee512ef1ddd157cdbe49675f0819ca9ab8000247b19bd58437930e4dc36be54d.scope - libcontainer container ee512ef1ddd157cdbe49675f0819ca9ab8000247b19bd58437930e4dc36be54d. Aug 5 22:31:27.685197 systemd[1]: sshd@9-10.0.0.102:22-10.0.0.1:47658.service: Deactivated successfully. Aug 5 22:31:27.687227 systemd[1]: session-10.scope: Deactivated successfully. Aug 5 22:31:27.692525 systemd-logind[1431]: Session 10 logged out. Waiting for processes to exit. Aug 5 22:31:27.693844 systemd-logind[1431]: Removed session 10. 
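The systemd-networkd lines above show Cilium's datapath interfaces coming up in order: cilium_host and cilium_net, then the cilium_vxlan overlay device, then the per-endpoint lxc_health, lxcd1ac68d25a56 and lxc308e7699d7c2 links, each gaining carrier and an IPv6 link-local address. The sketch below follows those transitions from journal text in the format shown here.

    import re
    from collections import defaultdict

    # Track each link's "Link UP" / "Gained carrier" / "Gained IPv6LL" events from
    # the systemd-networkd records above.
    EVENT_RE = re.compile(
        r"systemd-networkd\[\d+\]: (?P<link>[\w.]+): "
        r"(?P<event>Link UP|Gained carrier|Gained IPv6LL)"
    )

    def link_timeline(journal_text: str) -> dict[str, list[str]]:
        timeline: dict[str, list[str]] = defaultdict(list)
        for m in EVENT_RE.finditer(journal_text):
            timeline[m.group("link")].append(m.group("event"))
        return dict(timeline)

    # Expected for this boot, e.g.:
    #   "cilium_vxlan": ["Link UP", "Gained carrier", "Gained IPv6LL"]
    #   "lxc_health":   ["Link UP", "Gained carrier", "Gained IPv6LL"]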
Aug 5 22:31:27.701629 systemd[1]: Started cri-containerd-70c9e4c091ad22cb5e96dfe42ef975a5ec0f2d02a26ad50882e7c62bf8b044c3.scope - libcontainer container 70c9e4c091ad22cb5e96dfe42ef975a5ec0f2d02a26ad50882e7c62bf8b044c3. Aug 5 22:31:27.706121 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 5 22:31:27.714168 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 5 22:31:27.744831 containerd[1448]: time="2024-08-05T22:31:27.744778209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cnmxb,Uid:f3b3d1d4-4c2b-42dd-bf45-3a7bf4897cce,Namespace:kube-system,Attempt:0,} returns sandbox id \"70c9e4c091ad22cb5e96dfe42ef975a5ec0f2d02a26ad50882e7c62bf8b044c3\"" Aug 5 22:31:27.745312 containerd[1448]: time="2024-08-05T22:31:27.744910288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xmqcs,Uid:63d645a1-cfd4-47cd-bc50-648fd1707296,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee512ef1ddd157cdbe49675f0819ca9ab8000247b19bd58437930e4dc36be54d\"" Aug 5 22:31:27.745761 kubelet[2551]: E0805 22:31:27.745741 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:31:27.747369 kubelet[2551]: E0805 22:31:27.746480 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:31:27.748604 containerd[1448]: time="2024-08-05T22:31:27.748557960Z" level=info msg="CreateContainer within sandbox \"70c9e4c091ad22cb5e96dfe42ef975a5ec0f2d02a26ad50882e7c62bf8b044c3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 5 22:31:27.760519 containerd[1448]: time="2024-08-05T22:31:27.758940528Z" level=info msg="CreateContainer within sandbox \"ee512ef1ddd157cdbe49675f0819ca9ab8000247b19bd58437930e4dc36be54d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 5 22:31:27.781935 containerd[1448]: time="2024-08-05T22:31:27.781882233Z" level=info msg="CreateContainer within sandbox \"70c9e4c091ad22cb5e96dfe42ef975a5ec0f2d02a26ad50882e7c62bf8b044c3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"07def11414c895a4746bfddb15d8d3515cdbd20e4d668c45bf479d88c57b4058\"" Aug 5 22:31:27.783482 containerd[1448]: time="2024-08-05T22:31:27.782410508Z" level=info msg="StartContainer for \"07def11414c895a4746bfddb15d8d3515cdbd20e4d668c45bf479d88c57b4058\"" Aug 5 22:31:27.805279 containerd[1448]: time="2024-08-05T22:31:27.805243178Z" level=info msg="CreateContainer within sandbox \"ee512ef1ddd157cdbe49675f0819ca9ab8000247b19bd58437930e4dc36be54d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bc1e35013a441c37d396fae271d74a98271b381f591fa0292b0a298a9700527b\"" Aug 5 22:31:27.805727 containerd[1448]: time="2024-08-05T22:31:27.805710418Z" level=info msg="StartContainer for \"bc1e35013a441c37d396fae271d74a98271b381f591fa0292b0a298a9700527b\"" Aug 5 22:31:27.809803 systemd[1]: Started cri-containerd-07def11414c895a4746bfddb15d8d3515cdbd20e4d668c45bf479d88c57b4058.scope - libcontainer container 07def11414c895a4746bfddb15d8d3515cdbd20e4d668c45bf479d88c57b4058. 
Aug 5 22:31:27.840666 systemd[1]: Started cri-containerd-bc1e35013a441c37d396fae271d74a98271b381f591fa0292b0a298a9700527b.scope - libcontainer container bc1e35013a441c37d396fae271d74a98271b381f591fa0292b0a298a9700527b. Aug 5 22:31:27.849825 containerd[1448]: time="2024-08-05T22:31:27.849782758Z" level=info msg="StartContainer for \"07def11414c895a4746bfddb15d8d3515cdbd20e4d668c45bf479d88c57b4058\" returns successfully" Aug 5 22:31:27.874930 containerd[1448]: time="2024-08-05T22:31:27.874855969Z" level=info msg="StartContainer for \"bc1e35013a441c37d396fae271d74a98271b381f591fa0292b0a298a9700527b\" returns successfully" Aug 5 22:31:28.242801 kubelet[2551]: E0805 22:31:28.242757 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:31:28.248194 kubelet[2551]: E0805 22:31:28.248147 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:31:28.263861 kubelet[2551]: I0805 22:31:28.263793 2551 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-xmqcs" podStartSLOduration=30.263745672 podStartE2EDuration="30.263745672s" podCreationTimestamp="2024-08-05 22:30:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:31:28.263024766 +0000 UTC m=+42.669486506" watchObservedRunningTime="2024-08-05 22:31:28.263745672 +0000 UTC m=+42.670207413" Aug 5 22:31:29.248641 kubelet[2551]: E0805 22:31:29.248389 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:31:29.249745 kubelet[2551]: E0805 22:31:29.249715 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:31:29.258414 kubelet[2551]: I0805 22:31:29.258163 2551 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-cnmxb" podStartSLOduration=31.258145416 podStartE2EDuration="31.258145416s" podCreationTimestamp="2024-08-05 22:30:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:31:28.277941102 +0000 UTC m=+42.684402842" watchObservedRunningTime="2024-08-05 22:31:29.258145416 +0000 UTC m=+43.664607166" Aug 5 22:31:30.250915 kubelet[2551]: E0805 22:31:30.250870 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:31:30.250915 kubelet[2551]: E0805 22:31:30.250870 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:31:32.690517 systemd[1]: Started sshd@10-10.0.0.102:22-10.0.0.1:60032.service - OpenSSH per-connection server daemon (10.0.0.1:60032). 
Aug 5 22:31:32.733883 sshd[3969]: Accepted publickey for core from 10.0.0.1 port 60032 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:31:32.735733 sshd[3969]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:31:32.740476 systemd-logind[1431]: New session 11 of user core. Aug 5 22:31:32.750803 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 5 22:31:32.973414 sshd[3969]: pam_unix(sshd:session): session closed for user core Aug 5 22:31:32.978144 systemd[1]: sshd@10-10.0.0.102:22-10.0.0.1:60032.service: Deactivated successfully. Aug 5 22:31:32.981020 systemd[1]: session-11.scope: Deactivated successfully. Aug 5 22:31:32.982346 systemd-logind[1431]: Session 11 logged out. Waiting for processes to exit. Aug 5 22:31:32.983430 systemd-logind[1431]: Removed session 11. Aug 5 22:31:37.991022 systemd[1]: Started sshd@11-10.0.0.102:22-10.0.0.1:60048.service - OpenSSH per-connection server daemon (10.0.0.1:60048). Aug 5 22:31:38.028852 sshd[3985]: Accepted publickey for core from 10.0.0.1 port 60048 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:31:38.030516 sshd[3985]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:31:38.035372 systemd-logind[1431]: New session 12 of user core. Aug 5 22:31:38.042614 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 5 22:31:38.172355 sshd[3985]: pam_unix(sshd:session): session closed for user core Aug 5 22:31:38.176280 systemd[1]: sshd@11-10.0.0.102:22-10.0.0.1:60048.service: Deactivated successfully. Aug 5 22:31:38.178897 systemd[1]: session-12.scope: Deactivated successfully. Aug 5 22:31:38.181057 systemd-logind[1431]: Session 12 logged out. Waiting for processes to exit. Aug 5 22:31:38.182290 systemd-logind[1431]: Removed session 12. Aug 5 22:31:43.190996 systemd[1]: Started sshd@12-10.0.0.102:22-10.0.0.1:39900.service - OpenSSH per-connection server daemon (10.0.0.1:39900). Aug 5 22:31:43.230158 sshd[4000]: Accepted publickey for core from 10.0.0.1 port 39900 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:31:43.232118 sshd[4000]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:31:43.236536 systemd-logind[1431]: New session 13 of user core. Aug 5 22:31:43.246768 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 5 22:31:43.364754 sshd[4000]: pam_unix(sshd:session): session closed for user core Aug 5 22:31:43.377282 systemd[1]: sshd@12-10.0.0.102:22-10.0.0.1:39900.service: Deactivated successfully. Aug 5 22:31:43.379098 systemd[1]: session-13.scope: Deactivated successfully. Aug 5 22:31:43.380721 systemd-logind[1431]: Session 13 logged out. Waiting for processes to exit. Aug 5 22:31:43.382052 systemd[1]: Started sshd@13-10.0.0.102:22-10.0.0.1:39912.service - OpenSSH per-connection server daemon (10.0.0.1:39912). Aug 5 22:31:43.382979 systemd-logind[1431]: Removed session 13. Aug 5 22:31:43.416924 sshd[4015]: Accepted publickey for core from 10.0.0.1 port 39912 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:31:43.418512 sshd[4015]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:31:43.422352 systemd-logind[1431]: New session 14 of user core. Aug 5 22:31:43.437561 systemd[1]: Started session-14.scope - Session 14 of User core. 
Aug 5 22:31:43.758818 sshd[4015]: pam_unix(sshd:session): session closed for user core Aug 5 22:31:43.766348 systemd[1]: sshd@13-10.0.0.102:22-10.0.0.1:39912.service: Deactivated successfully. Aug 5 22:31:43.768552 systemd[1]: session-14.scope: Deactivated successfully. Aug 5 22:31:43.770216 systemd-logind[1431]: Session 14 logged out. Waiting for processes to exit. Aug 5 22:31:43.786171 systemd[1]: Started sshd@14-10.0.0.102:22-10.0.0.1:39914.service - OpenSSH per-connection server daemon (10.0.0.1:39914). Aug 5 22:31:43.787751 systemd-logind[1431]: Removed session 14. Aug 5 22:31:43.820102 sshd[4027]: Accepted publickey for core from 10.0.0.1 port 39914 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:31:43.821779 sshd[4027]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:31:43.826477 systemd-logind[1431]: New session 15 of user core. Aug 5 22:31:43.835640 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 5 22:31:44.062168 sshd[4027]: pam_unix(sshd:session): session closed for user core Aug 5 22:31:44.066631 systemd[1]: sshd@14-10.0.0.102:22-10.0.0.1:39914.service: Deactivated successfully. Aug 5 22:31:44.068757 systemd[1]: session-15.scope: Deactivated successfully. Aug 5 22:31:44.069461 systemd-logind[1431]: Session 15 logged out. Waiting for processes to exit. Aug 5 22:31:44.070339 systemd-logind[1431]: Removed session 15. Aug 5 22:31:49.074961 systemd[1]: Started sshd@15-10.0.0.102:22-10.0.0.1:39916.service - OpenSSH per-connection server daemon (10.0.0.1:39916). Aug 5 22:31:49.113465 sshd[4049]: Accepted publickey for core from 10.0.0.1 port 39916 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:31:49.115817 sshd[4049]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:31:49.120659 systemd-logind[1431]: New session 16 of user core. Aug 5 22:31:49.127586 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 5 22:31:49.266014 sshd[4049]: pam_unix(sshd:session): session closed for user core Aug 5 22:31:49.271365 systemd[1]: sshd@15-10.0.0.102:22-10.0.0.1:39916.service: Deactivated successfully. Aug 5 22:31:49.274157 systemd[1]: session-16.scope: Deactivated successfully. Aug 5 22:31:49.274903 systemd-logind[1431]: Session 16 logged out. Waiting for processes to exit. Aug 5 22:31:49.275909 systemd-logind[1431]: Removed session 16. Aug 5 22:31:54.278058 systemd[1]: Started sshd@16-10.0.0.102:22-10.0.0.1:44540.service - OpenSSH per-connection server daemon (10.0.0.1:44540). Aug 5 22:31:54.313495 sshd[4063]: Accepted publickey for core from 10.0.0.1 port 44540 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:31:54.315076 sshd[4063]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:31:54.319113 systemd-logind[1431]: New session 17 of user core. Aug 5 22:31:54.326654 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 5 22:31:54.432600 sshd[4063]: pam_unix(sshd:session): session closed for user core Aug 5 22:31:54.445799 systemd[1]: sshd@16-10.0.0.102:22-10.0.0.1:44540.service: Deactivated successfully. Aug 5 22:31:54.448122 systemd[1]: session-17.scope: Deactivated successfully. Aug 5 22:31:54.449652 systemd-logind[1431]: Session 17 logged out. Waiting for processes to exit. Aug 5 22:31:54.455866 systemd[1]: Started sshd@17-10.0.0.102:22-10.0.0.1:44550.service - OpenSSH per-connection server daemon (10.0.0.1:44550). Aug 5 22:31:54.457426 systemd-logind[1431]: Removed session 17. 
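From here on the journal is dominated by short SSH sessions from 10.0.0.1: each one shows up as an sshd@... per-connection service, a pam_unix session open/close for user core, and a systemd-logind "New session N" / "Removed session N" pair. The rough sketch below pairs those logind lines to measure session lifetimes; it assumes one journal record per line and supplies the year, since the timestamp prefix used here omits it.

    import re
    from datetime import datetime

    # Pair "New session N of user core." with "Removed session N." to get session
    # lifetimes. Assumes one record per line and the "Aug 5 22:31:49.266014"-style
    # timestamp prefix used in this log (the year is supplied by the caller).
    TS_RE  = re.compile(r"^(?P<ts>\w{3} +\d+ \d{2}:\d{2}:\d{2}\.\d+)\b")
    NEW_RE = re.compile(r"New session (?P<id>\d+) of user core")
    DEL_RE = re.compile(r"Removed session (?P<id>\d+)\.")

    def session_durations(lines: list[str], year: int = 2024) -> dict[str, float]:
        opened: dict[str, datetime] = {}
        durations: dict[str, float] = {}
        for line in lines:
            ts_match = TS_RE.match(line)
            if not ts_match:
                continue
            ts = datetime.strptime(f"{year} {ts_match.group('ts')}", "%Y %b %d %H:%M:%S.%f")
            new, done = NEW_RE.search(line), DEL_RE.search(line)
            if new:
                opened[new.group("id")] = ts
            elif done and done.group("id") in opened:
                durations[done.group("id")] = (ts - opened.pop(done.group("id"))).total_seconds()
        return durations

    # e.g. session 16 above opens at 22:31:49.120659 and is removed at 22:31:49.275909 (~0.16 s).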
Aug 5 22:31:54.489093 sshd[4077]: Accepted publickey for core from 10.0.0.1 port 44550 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:31:54.491078 sshd[4077]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:31:54.495464 systemd-logind[1431]: New session 18 of user core. Aug 5 22:31:54.509702 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 5 22:31:54.911294 sshd[4077]: pam_unix(sshd:session): session closed for user core Aug 5 22:31:54.919076 systemd[1]: sshd@17-10.0.0.102:22-10.0.0.1:44550.service: Deactivated successfully. Aug 5 22:31:54.920767 systemd[1]: session-18.scope: Deactivated successfully. Aug 5 22:31:54.922599 systemd-logind[1431]: Session 18 logged out. Waiting for processes to exit. Aug 5 22:31:54.928869 systemd[1]: Started sshd@18-10.0.0.102:22-10.0.0.1:44564.service - OpenSSH per-connection server daemon (10.0.0.1:44564). Aug 5 22:31:54.929989 systemd-logind[1431]: Removed session 18. Aug 5 22:31:54.960574 sshd[4089]: Accepted publickey for core from 10.0.0.1 port 44564 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:31:54.962122 sshd[4089]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:31:54.966139 systemd-logind[1431]: New session 19 of user core. Aug 5 22:31:54.975563 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 5 22:31:56.339431 sshd[4089]: pam_unix(sshd:session): session closed for user core Aug 5 22:31:56.349097 systemd[1]: sshd@18-10.0.0.102:22-10.0.0.1:44564.service: Deactivated successfully. Aug 5 22:31:56.351590 systemd[1]: session-19.scope: Deactivated successfully. Aug 5 22:31:56.354297 systemd-logind[1431]: Session 19 logged out. Waiting for processes to exit. Aug 5 22:31:56.361518 systemd[1]: Started sshd@19-10.0.0.102:22-10.0.0.1:44578.service - OpenSSH per-connection server daemon (10.0.0.1:44578). Aug 5 22:31:56.362929 systemd-logind[1431]: Removed session 19. Aug 5 22:31:56.398665 sshd[4112]: Accepted publickey for core from 10.0.0.1 port 44578 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:31:56.400182 sshd[4112]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:31:56.404372 systemd-logind[1431]: New session 20 of user core. Aug 5 22:31:56.411574 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 5 22:31:56.677796 sshd[4112]: pam_unix(sshd:session): session closed for user core Aug 5 22:31:56.691314 systemd[1]: sshd@19-10.0.0.102:22-10.0.0.1:44578.service: Deactivated successfully. Aug 5 22:31:56.693519 systemd[1]: session-20.scope: Deactivated successfully. Aug 5 22:31:56.702628 systemd-logind[1431]: Session 20 logged out. Waiting for processes to exit. Aug 5 22:31:56.711860 systemd[1]: Started sshd@20-10.0.0.102:22-10.0.0.1:44582.service - OpenSSH per-connection server daemon (10.0.0.1:44582). Aug 5 22:31:56.713115 systemd-logind[1431]: Removed session 20. Aug 5 22:31:56.743426 sshd[4125]: Accepted publickey for core from 10.0.0.1 port 44582 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:31:56.745862 sshd[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:31:56.750643 systemd-logind[1431]: New session 21 of user core. Aug 5 22:31:56.764569 systemd[1]: Started session-21.scope - Session 21 of User core. 
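The journal entries above repeat one lifecycle: sshd accepts the public key for core, pam_unix opens the session, systemd-logind registers it ("New session N of user core."), systemd runs it as session-N.scope, and on disconnect the per-connection sshd@... service and the scope are deactivated before the session is removed. Below is a minimal sketch that pairs the "New session" and "Removed session" lines from a dump like this one to compute per-session lifetimes; the input file name journal.txt and the year 2024 are assumptions, since these timestamps carry no year.

    #!/usr/bin/env python3
    """Pair systemd-logind session open/close events and print session lifetimes."""
    import re
    from datetime import datetime

    OPEN_RE = re.compile(
        r"(\w{3}\s+\d+ \d{2}:\d{2}:\d{2}\.\d+) systemd-logind\[\d+\]: New session (\d+) of user \S+\.")
    CLOSE_RE = re.compile(
        r"(\w{3}\s+\d+ \d{2}:\d{2}:\d{2}\.\d+) systemd-logind\[\d+\]: Removed session (\d+)\.")

    def parse_ts(ts: str, year: int = 2024) -> datetime:
        # Syslog-style timestamps omit the year, so one is assumed here.
        return datetime.strptime(f"{year} {ts}", "%Y %b %d %H:%M:%S.%f")

    def session_lifetimes(text: str) -> dict[str, float]:
        opened = {sid: parse_ts(ts) for ts, sid in OPEN_RE.findall(text)}
        closed = {sid: parse_ts(ts) for ts, sid in CLOSE_RE.findall(text)}
        return {sid: (closed[sid] - opened[sid]).total_seconds()
                for sid in opened if sid in closed}

    if __name__ == "__main__":
        with open("journal.txt") as fh:  # hypothetical dump of a journal like this one
            lifetimes = session_lifetimes(fh.read())
        for sid, secs in sorted(lifetimes.items(), key=lambda kv: int(kv[0])):
            print(f"session {sid}: open for {secs:.1f}s")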
Aug 5 22:31:56.877814 sshd[4125]: pam_unix(sshd:session): session closed for user core Aug 5 22:31:56.881669 systemd[1]: sshd@20-10.0.0.102:22-10.0.0.1:44582.service: Deactivated successfully. Aug 5 22:31:56.883969 systemd[1]: session-21.scope: Deactivated successfully. Aug 5 22:31:56.884756 systemd-logind[1431]: Session 21 logged out. Waiting for processes to exit. Aug 5 22:31:56.885824 systemd-logind[1431]: Removed session 21. Aug 5 22:32:01.898626 systemd[1]: Started sshd@21-10.0.0.102:22-10.0.0.1:40526.service - OpenSSH per-connection server daemon (10.0.0.1:40526). Aug 5 22:32:01.950825 sshd[4142]: Accepted publickey for core from 10.0.0.1 port 40526 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:32:01.953273 sshd[4142]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:32:01.976171 systemd-logind[1431]: New session 22 of user core. Aug 5 22:32:01.995768 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 5 22:32:02.243347 sshd[4142]: pam_unix(sshd:session): session closed for user core Aug 5 22:32:02.251030 systemd[1]: sshd@21-10.0.0.102:22-10.0.0.1:40526.service: Deactivated successfully. Aug 5 22:32:02.253938 systemd[1]: session-22.scope: Deactivated successfully. Aug 5 22:32:02.257664 systemd-logind[1431]: Session 22 logged out. Waiting for processes to exit. Aug 5 22:32:02.262722 systemd-logind[1431]: Removed session 22. Aug 5 22:32:05.942684 kubelet[2551]: E0805 22:32:05.942600 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:32:07.299898 systemd[1]: Started sshd@22-10.0.0.102:22-10.0.0.1:40538.service - OpenSSH per-connection server daemon (10.0.0.1:40538). Aug 5 22:32:07.387406 sshd[4158]: Accepted publickey for core from 10.0.0.1 port 40538 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:32:07.389901 sshd[4158]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:32:07.402610 systemd-logind[1431]: New session 23 of user core. Aug 5 22:32:07.412789 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 5 22:32:07.678105 sshd[4158]: pam_unix(sshd:session): session closed for user core Aug 5 22:32:07.687890 systemd[1]: sshd@22-10.0.0.102:22-10.0.0.1:40538.service: Deactivated successfully. Aug 5 22:32:07.691740 systemd[1]: session-23.scope: Deactivated successfully. Aug 5 22:32:07.695032 systemd-logind[1431]: Session 23 logged out. Waiting for processes to exit. Aug 5 22:32:07.696898 systemd-logind[1431]: Removed session 23. Aug 5 22:32:11.940311 kubelet[2551]: E0805 22:32:11.940197 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:32:12.689873 systemd[1]: Started sshd@23-10.0.0.102:22-10.0.0.1:37468.service - OpenSSH per-connection server daemon (10.0.0.1:37468). Aug 5 22:32:12.731973 sshd[4175]: Accepted publickey for core from 10.0.0.1 port 37468 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:32:12.733917 sshd[4175]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:32:12.738713 systemd-logind[1431]: New session 24 of user core. Aug 5 22:32:12.753735 systemd[1]: Started session-24.scope - Session 24 of User core. 
Aug 5 22:32:12.872141 sshd[4175]: pam_unix(sshd:session): session closed for user core Aug 5 22:32:12.876738 systemd[1]: sshd@23-10.0.0.102:22-10.0.0.1:37468.service: Deactivated successfully. Aug 5 22:32:12.879492 systemd[1]: session-24.scope: Deactivated successfully. Aug 5 22:32:12.881805 systemd-logind[1431]: Session 24 logged out. Waiting for processes to exit. Aug 5 22:32:12.883023 systemd-logind[1431]: Removed session 24. Aug 5 22:32:17.882539 systemd[1]: Started sshd@24-10.0.0.102:22-10.0.0.1:37480.service - OpenSSH per-connection server daemon (10.0.0.1:37480). Aug 5 22:32:17.918009 sshd[4189]: Accepted publickey for core from 10.0.0.1 port 37480 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:32:17.919604 sshd[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:32:17.923433 systemd-logind[1431]: New session 25 of user core. Aug 5 22:32:17.933576 systemd[1]: Started session-25.scope - Session 25 of User core. Aug 5 22:32:17.939929 kubelet[2551]: E0805 22:32:17.939902 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:32:17.940265 kubelet[2551]: E0805 22:32:17.940142 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:32:18.050126 sshd[4189]: pam_unix(sshd:session): session closed for user core Aug 5 22:32:18.054295 systemd[1]: sshd@24-10.0.0.102:22-10.0.0.1:37480.service: Deactivated successfully. Aug 5 22:32:18.056582 systemd[1]: session-25.scope: Deactivated successfully. Aug 5 22:32:18.057149 systemd-logind[1431]: Session 25 logged out. Waiting for processes to exit. Aug 5 22:32:18.057941 systemd-logind[1431]: Removed session 25. Aug 5 22:32:23.063262 systemd[1]: Started sshd@25-10.0.0.102:22-10.0.0.1:42232.service - OpenSSH per-connection server daemon (10.0.0.1:42232). Aug 5 22:32:23.104253 sshd[4203]: Accepted publickey for core from 10.0.0.1 port 42232 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:32:23.105925 sshd[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:32:23.110393 systemd-logind[1431]: New session 26 of user core. Aug 5 22:32:23.119614 systemd[1]: Started session-26.scope - Session 26 of User core. Aug 5 22:32:23.230237 sshd[4203]: pam_unix(sshd:session): session closed for user core Aug 5 22:32:23.248700 systemd[1]: sshd@25-10.0.0.102:22-10.0.0.1:42232.service: Deactivated successfully. Aug 5 22:32:23.250722 systemd[1]: session-26.scope: Deactivated successfully. Aug 5 22:32:23.252425 systemd-logind[1431]: Session 26 logged out. Waiting for processes to exit. Aug 5 22:32:23.260789 systemd[1]: Started sshd@26-10.0.0.102:22-10.0.0.1:42234.service - OpenSSH per-connection server daemon (10.0.0.1:42234). Aug 5 22:32:23.261722 systemd-logind[1431]: Removed session 26. Aug 5 22:32:23.292944 sshd[4217]: Accepted publickey for core from 10.0.0.1 port 42234 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:32:23.294752 sshd[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:32:23.299155 systemd-logind[1431]: New session 27 of user core. Aug 5 22:32:23.308575 systemd[1]: Started session-27.scope - Session 27 of User core. 
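The recurring kubelet warning from dns.go ("Nameserver limits exceeded") means the node's resolv.conf lists more nameservers than the resolver limit of three (glibc MAXNS), so kubelet keeps only the first three for pods and reports the applied line, here 1.1.1.1 1.0.0.1 8.8.8.8. A small sketch of that clamping follows; the fourth entry (8.8.4.4) is invented for the example, since the omitted server never appears in the log.

    #!/usr/bin/env python3
    MAX_NAMESERVERS = 3  # glibc resolver limit (MAXNS)

    RESOLV_CONF = """\
    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    nameserver 8.8.4.4
    """

    def applied_nameservers(resolv_conf: str) -> tuple[list[str], list[str]]:
        """Return (kept, omitted) nameservers, mirroring the clamping kubelet warns about."""
        servers = []
        for raw in resolv_conf.splitlines():
            parts = raw.split()
            if len(parts) >= 2 and parts[0] == "nameserver":
                servers.append(parts[1])
        return servers[:MAX_NAMESERVERS], servers[MAX_NAMESERVERS:]

    if __name__ == "__main__":
        applied, omitted = applied_nameservers(RESOLV_CONF)
        print("the applied nameserver line is:", " ".join(applied))
        print("omitted:", " ".join(omitted) or "none")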
Aug 5 22:32:25.040089 containerd[1448]: time="2024-08-05T22:32:25.039989546Z" level=info msg="StopContainer for \"d441f9c3ee15d58006ead34089acbb9d2920987a86075de2f5b7bd23ff15b404\" with timeout 30 (s)" Aug 5 22:32:25.055764 containerd[1448]: time="2024-08-05T22:32:25.055716475Z" level=info msg="Stop container \"d441f9c3ee15d58006ead34089acbb9d2920987a86075de2f5b7bd23ff15b404\" with signal terminated" Aug 5 22:32:25.077095 systemd[1]: cri-containerd-d441f9c3ee15d58006ead34089acbb9d2920987a86075de2f5b7bd23ff15b404.scope: Deactivated successfully. Aug 5 22:32:25.097520 containerd[1448]: time="2024-08-05T22:32:25.097417341Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 5 22:32:25.104798 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d441f9c3ee15d58006ead34089acbb9d2920987a86075de2f5b7bd23ff15b404-rootfs.mount: Deactivated successfully. Aug 5 22:32:25.108208 containerd[1448]: time="2024-08-05T22:32:25.108174497Z" level=info msg="StopContainer for \"6d2426c03810d5a097db39c8516333cd72d1df52d35aac67983cb68721d2e8f7\" with timeout 2 (s)" Aug 5 22:32:25.108468 containerd[1448]: time="2024-08-05T22:32:25.108427458Z" level=info msg="Stop container \"6d2426c03810d5a097db39c8516333cd72d1df52d35aac67983cb68721d2e8f7\" with signal terminated" Aug 5 22:32:25.114496 containerd[1448]: time="2024-08-05T22:32:25.114296094Z" level=info msg="shim disconnected" id=d441f9c3ee15d58006ead34089acbb9d2920987a86075de2f5b7bd23ff15b404 namespace=k8s.io Aug 5 22:32:25.114496 containerd[1448]: time="2024-08-05T22:32:25.114380014Z" level=warning msg="cleaning up after shim disconnected" id=d441f9c3ee15d58006ead34089acbb9d2920987a86075de2f5b7bd23ff15b404 namespace=k8s.io Aug 5 22:32:25.114496 containerd[1448]: time="2024-08-05T22:32:25.114393599Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:32:25.116050 systemd-networkd[1384]: lxc_health: Link DOWN Aug 5 22:32:25.116493 systemd-networkd[1384]: lxc_health: Lost carrier Aug 5 22:32:25.136058 containerd[1448]: time="2024-08-05T22:32:25.135998754Z" level=info msg="StopContainer for \"d441f9c3ee15d58006ead34089acbb9d2920987a86075de2f5b7bd23ff15b404\" returns successfully" Aug 5 22:32:25.142913 containerd[1448]: time="2024-08-05T22:32:25.142684350Z" level=info msg="StopPodSandbox for \"930485e69a691c858f51c9bf26e0e75c59c3d8f0a9cf27070275c7bd0d2f8bf2\"" Aug 5 22:32:25.142913 containerd[1448]: time="2024-08-05T22:32:25.142758210Z" level=info msg="Container to stop \"d441f9c3ee15d58006ead34089acbb9d2920987a86075de2f5b7bd23ff15b404\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 5 22:32:25.145311 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-930485e69a691c858f51c9bf26e0e75c59c3d8f0a9cf27070275c7bd0d2f8bf2-shm.mount: Deactivated successfully. Aug 5 22:32:25.147257 systemd[1]: cri-containerd-6d2426c03810d5a097db39c8516333cd72d1df52d35aac67983cb68721d2e8f7.scope: Deactivated successfully. Aug 5 22:32:25.147609 systemd[1]: cri-containerd-6d2426c03810d5a097db39c8516333cd72d1df52d35aac67983cb68721d2e8f7.scope: Consumed 7.812s CPU time. Aug 5 22:32:25.161339 systemd[1]: cri-containerd-930485e69a691c858f51c9bf26e0e75c59c3d8f0a9cf27070275c7bd0d2f8bf2.scope: Deactivated successfully. 
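The shutdown above shows the runtime's stop semantics for container d441f9c3...: StopContainer is issued with a 30-second timeout, the stop signal is delivered ("with signal terminated"), and only a container that outlives the grace period would be killed outright; here the scope deactivates and the shim disconnects well within the deadline. A sketch of the same stop-with-timeout pattern, shown for a plain local subprocess rather than through the containerd/CRI API:

    #!/usr/bin/env python3
    import signal
    import subprocess

    def stop_with_timeout(proc: subprocess.Popen, timeout_s: float = 30.0) -> int:
        """Send SIGTERM, wait up to timeout_s, then fall back to SIGKILL."""
        proc.send_signal(signal.SIGTERM)         # graceful stop request
        try:
            return proc.wait(timeout=timeout_s)  # grace period, cf. "with timeout 30 (s)"
        except subprocess.TimeoutExpired:
            proc.kill()                          # escalate only after the deadline passes
            return proc.wait()

    if __name__ == "__main__":
        p = subprocess.Popen(["sleep", "300"])
        print("exit status:", stop_with_timeout(p, timeout_s=2.0))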
Aug 5 22:32:25.173374 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d2426c03810d5a097db39c8516333cd72d1df52d35aac67983cb68721d2e8f7-rootfs.mount: Deactivated successfully. Aug 5 22:32:25.180699 containerd[1448]: time="2024-08-05T22:32:25.180616441Z" level=info msg="shim disconnected" id=6d2426c03810d5a097db39c8516333cd72d1df52d35aac67983cb68721d2e8f7 namespace=k8s.io Aug 5 22:32:25.180699 containerd[1448]: time="2024-08-05T22:32:25.180689951Z" level=warning msg="cleaning up after shim disconnected" id=6d2426c03810d5a097db39c8516333cd72d1df52d35aac67983cb68721d2e8f7 namespace=k8s.io Aug 5 22:32:25.180699 containerd[1448]: time="2024-08-05T22:32:25.180698637Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:32:25.191957 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-930485e69a691c858f51c9bf26e0e75c59c3d8f0a9cf27070275c7bd0d2f8bf2-rootfs.mount: Deactivated successfully. Aug 5 22:32:25.245900 containerd[1448]: time="2024-08-05T22:32:25.245504722Z" level=info msg="shim disconnected" id=930485e69a691c858f51c9bf26e0e75c59c3d8f0a9cf27070275c7bd0d2f8bf2 namespace=k8s.io Aug 5 22:32:25.245900 containerd[1448]: time="2024-08-05T22:32:25.245605112Z" level=warning msg="cleaning up after shim disconnected" id=930485e69a691c858f51c9bf26e0e75c59c3d8f0a9cf27070275c7bd0d2f8bf2 namespace=k8s.io Aug 5 22:32:25.245900 containerd[1448]: time="2024-08-05T22:32:25.245635070Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:32:25.260186 containerd[1448]: time="2024-08-05T22:32:25.260051745Z" level=info msg="StopContainer for \"6d2426c03810d5a097db39c8516333cd72d1df52d35aac67983cb68721d2e8f7\" returns successfully" Aug 5 22:32:25.260673 containerd[1448]: time="2024-08-05T22:32:25.260654236Z" level=info msg="StopPodSandbox for \"036bf759f345d109c5ea7c3f94bcbcdcb2cbe6cdacead45df90a985d543bc3dd\"" Aug 5 22:32:25.261039 containerd[1448]: time="2024-08-05T22:32:25.260852352Z" level=info msg="Container to stop \"81fe8f4d9f38a4be82735b8053ee8628ebc45b9bd5bd5a180c4722ce594998af\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 5 22:32:25.261039 containerd[1448]: time="2024-08-05T22:32:25.260898339Z" level=info msg="Container to stop \"46eb2b7e5d3a600a784b002288af7855fa7f6b623ead6d01678ef2dedd2216ac\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 5 22:32:25.261039 containerd[1448]: time="2024-08-05T22:32:25.260908809Z" level=info msg="Container to stop \"9929b5ceba829bd9dad0dbb5e2e4517eb9540ddf15cb00a862311f98bec963ab\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 5 22:32:25.261039 containerd[1448]: time="2024-08-05T22:32:25.260919740Z" level=info msg="Container to stop \"1cd0862494ad5502d25a1bd10410c71ff6adf72e474ff388d2e9527d49984bd6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 5 22:32:25.261039 containerd[1448]: time="2024-08-05T22:32:25.260930260Z" level=info msg="Container to stop \"6d2426c03810d5a097db39c8516333cd72d1df52d35aac67983cb68721d2e8f7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 5 22:32:25.263047 containerd[1448]: time="2024-08-05T22:32:25.263026244Z" level=info msg="TearDown network for sandbox \"930485e69a691c858f51c9bf26e0e75c59c3d8f0a9cf27070275c7bd0d2f8bf2\" successfully" Aug 5 22:32:25.263047 containerd[1448]: time="2024-08-05T22:32:25.263045992Z" level=info msg="StopPodSandbox for \"930485e69a691c858f51c9bf26e0e75c59c3d8f0a9cf27070275c7bd0d2f8bf2\" returns successfully" Aug 
5 22:32:25.270063 systemd[1]: cri-containerd-036bf759f345d109c5ea7c3f94bcbcdcb2cbe6cdacead45df90a985d543bc3dd.scope: Deactivated successfully. Aug 5 22:32:25.310901 containerd[1448]: time="2024-08-05T22:32:25.310724151Z" level=info msg="shim disconnected" id=036bf759f345d109c5ea7c3f94bcbcdcb2cbe6cdacead45df90a985d543bc3dd namespace=k8s.io Aug 5 22:32:25.310901 containerd[1448]: time="2024-08-05T22:32:25.310808892Z" level=warning msg="cleaning up after shim disconnected" id=036bf759f345d109c5ea7c3f94bcbcdcb2cbe6cdacead45df90a985d543bc3dd namespace=k8s.io Aug 5 22:32:25.310901 containerd[1448]: time="2024-08-05T22:32:25.310820684Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:32:25.333349 containerd[1448]: time="2024-08-05T22:32:25.333273155Z" level=info msg="TearDown network for sandbox \"036bf759f345d109c5ea7c3f94bcbcdcb2cbe6cdacead45df90a985d543bc3dd\" successfully" Aug 5 22:32:25.333349 containerd[1448]: time="2024-08-05T22:32:25.333328431Z" level=info msg="StopPodSandbox for \"036bf759f345d109c5ea7c3f94bcbcdcb2cbe6cdacead45df90a985d543bc3dd\" returns successfully" Aug 5 22:32:25.385565 kubelet[2551]: I0805 22:32:25.385463 2551 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6927b3e0-554a-44a8-a3bf-e0b49da768de-xtables-lock\") pod \"6927b3e0-554a-44a8-a3bf-e0b49da768de\" (UID: \"6927b3e0-554a-44a8-a3bf-e0b49da768de\") " Aug 5 22:32:25.385565 kubelet[2551]: I0805 22:32:25.385542 2551 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6927b3e0-554a-44a8-a3bf-e0b49da768de-cni-path\") pod \"6927b3e0-554a-44a8-a3bf-e0b49da768de\" (UID: \"6927b3e0-554a-44a8-a3bf-e0b49da768de\") " Aug 5 22:32:25.387342 kubelet[2551]: I0805 22:32:25.385577 2551 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6927b3e0-554a-44a8-a3bf-e0b49da768de-host-proc-sys-kernel\") pod \"6927b3e0-554a-44a8-a3bf-e0b49da768de\" (UID: \"6927b3e0-554a-44a8-a3bf-e0b49da768de\") " Aug 5 22:32:25.387342 kubelet[2551]: I0805 22:32:25.385622 2551 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6927b3e0-554a-44a8-a3bf-e0b49da768de-cilium-cgroup\") pod \"6927b3e0-554a-44a8-a3bf-e0b49da768de\" (UID: \"6927b3e0-554a-44a8-a3bf-e0b49da768de\") " Aug 5 22:32:25.387342 kubelet[2551]: I0805 22:32:25.385642 2551 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6927b3e0-554a-44a8-a3bf-e0b49da768de-hostproc\") pod \"6927b3e0-554a-44a8-a3bf-e0b49da768de\" (UID: \"6927b3e0-554a-44a8-a3bf-e0b49da768de\") " Aug 5 22:32:25.387342 kubelet[2551]: I0805 22:32:25.385670 2551 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mjsp\" (UniqueName: \"kubernetes.io/projected/6927b3e0-554a-44a8-a3bf-e0b49da768de-kube-api-access-2mjsp\") pod \"6927b3e0-554a-44a8-a3bf-e0b49da768de\" (UID: \"6927b3e0-554a-44a8-a3bf-e0b49da768de\") " Aug 5 22:32:25.387342 kubelet[2551]: I0805 22:32:25.385570 2551 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6927b3e0-554a-44a8-a3bf-e0b49da768de-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6927b3e0-554a-44a8-a3bf-e0b49da768de" (UID: "6927b3e0-554a-44a8-a3bf-e0b49da768de"). 
InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:32:25.387594 kubelet[2551]: I0805 22:32:25.385653 2551 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6927b3e0-554a-44a8-a3bf-e0b49da768de-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6927b3e0-554a-44a8-a3bf-e0b49da768de" (UID: "6927b3e0-554a-44a8-a3bf-e0b49da768de"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:32:25.387594 kubelet[2551]: I0805 22:32:25.385681 2551 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6927b3e0-554a-44a8-a3bf-e0b49da768de-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6927b3e0-554a-44a8-a3bf-e0b49da768de" (UID: "6927b3e0-554a-44a8-a3bf-e0b49da768de"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:32:25.387594 kubelet[2551]: I0805 22:32:25.385691 2551 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6927b3e0-554a-44a8-a3bf-e0b49da768de-hubble-tls\") pod \"6927b3e0-554a-44a8-a3bf-e0b49da768de\" (UID: \"6927b3e0-554a-44a8-a3bf-e0b49da768de\") " Aug 5 22:32:25.387594 kubelet[2551]: I0805 22:32:25.385757 2551 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6927b3e0-554a-44a8-a3bf-e0b49da768de-cni-path" (OuterVolumeSpecName: "cni-path") pod "6927b3e0-554a-44a8-a3bf-e0b49da768de" (UID: "6927b3e0-554a-44a8-a3bf-e0b49da768de"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:32:25.387594 kubelet[2551]: I0805 22:32:25.385804 2551 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6927b3e0-554a-44a8-a3bf-e0b49da768de-hostproc" (OuterVolumeSpecName: "hostproc") pod "6927b3e0-554a-44a8-a3bf-e0b49da768de" (UID: "6927b3e0-554a-44a8-a3bf-e0b49da768de"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:32:25.387749 kubelet[2551]: I0805 22:32:25.385804 2551 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9b41f58c-af9d-4366-a224-8d9da879b256-cilium-config-path\") pod \"9b41f58c-af9d-4366-a224-8d9da879b256\" (UID: \"9b41f58c-af9d-4366-a224-8d9da879b256\") " Aug 5 22:32:25.387749 kubelet[2551]: I0805 22:32:25.385839 2551 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6927b3e0-554a-44a8-a3bf-e0b49da768de-cilium-run\") pod \"6927b3e0-554a-44a8-a3bf-e0b49da768de\" (UID: \"6927b3e0-554a-44a8-a3bf-e0b49da768de\") " Aug 5 22:32:25.387749 kubelet[2551]: I0805 22:32:25.385859 2551 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6927b3e0-554a-44a8-a3bf-e0b49da768de-etc-cni-netd\") pod \"6927b3e0-554a-44a8-a3bf-e0b49da768de\" (UID: \"6927b3e0-554a-44a8-a3bf-e0b49da768de\") " Aug 5 22:32:25.387749 kubelet[2551]: I0805 22:32:25.385884 2551 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6927b3e0-554a-44a8-a3bf-e0b49da768de-cilium-config-path\") pod \"6927b3e0-554a-44a8-a3bf-e0b49da768de\" (UID: \"6927b3e0-554a-44a8-a3bf-e0b49da768de\") " Aug 5 22:32:25.387749 kubelet[2551]: I0805 22:32:25.385908 2551 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6927b3e0-554a-44a8-a3bf-e0b49da768de-bpf-maps\") pod \"6927b3e0-554a-44a8-a3bf-e0b49da768de\" (UID: \"6927b3e0-554a-44a8-a3bf-e0b49da768de\") " Aug 5 22:32:25.387749 kubelet[2551]: I0805 22:32:25.385929 2551 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6927b3e0-554a-44a8-a3bf-e0b49da768de-clustermesh-secrets\") pod \"6927b3e0-554a-44a8-a3bf-e0b49da768de\" (UID: \"6927b3e0-554a-44a8-a3bf-e0b49da768de\") " Aug 5 22:32:25.387990 kubelet[2551]: I0805 22:32:25.385943 2551 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6927b3e0-554a-44a8-a3bf-e0b49da768de-host-proc-sys-net\") pod \"6927b3e0-554a-44a8-a3bf-e0b49da768de\" (UID: \"6927b3e0-554a-44a8-a3bf-e0b49da768de\") " Aug 5 22:32:25.387990 kubelet[2551]: I0805 22:32:25.385962 2551 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf9kq\" (UniqueName: \"kubernetes.io/projected/9b41f58c-af9d-4366-a224-8d9da879b256-kube-api-access-gf9kq\") pod \"9b41f58c-af9d-4366-a224-8d9da879b256\" (UID: \"9b41f58c-af9d-4366-a224-8d9da879b256\") " Aug 5 22:32:25.387990 kubelet[2551]: I0805 22:32:25.385984 2551 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6927b3e0-554a-44a8-a3bf-e0b49da768de-lib-modules\") pod \"6927b3e0-554a-44a8-a3bf-e0b49da768de\" (UID: \"6927b3e0-554a-44a8-a3bf-e0b49da768de\") " Aug 5 22:32:25.387990 kubelet[2551]: I0805 22:32:25.386020 2551 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6927b3e0-554a-44a8-a3bf-e0b49da768de-xtables-lock\") on node \"localhost\" DevicePath \"\"" Aug 5 22:32:25.387990 kubelet[2551]: I0805 22:32:25.386029 2551 
reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6927b3e0-554a-44a8-a3bf-e0b49da768de-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Aug 5 22:32:25.387990 kubelet[2551]: I0805 22:32:25.386041 2551 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6927b3e0-554a-44a8-a3bf-e0b49da768de-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Aug 5 22:32:25.387990 kubelet[2551]: I0805 22:32:25.386050 2551 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6927b3e0-554a-44a8-a3bf-e0b49da768de-hostproc\") on node \"localhost\" DevicePath \"\"" Aug 5 22:32:25.388168 kubelet[2551]: I0805 22:32:25.386059 2551 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6927b3e0-554a-44a8-a3bf-e0b49da768de-cni-path\") on node \"localhost\" DevicePath \"\"" Aug 5 22:32:25.388168 kubelet[2551]: I0805 22:32:25.386078 2551 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6927b3e0-554a-44a8-a3bf-e0b49da768de-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6927b3e0-554a-44a8-a3bf-e0b49da768de" (UID: "6927b3e0-554a-44a8-a3bf-e0b49da768de"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:32:25.388168 kubelet[2551]: I0805 22:32:25.386094 2551 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6927b3e0-554a-44a8-a3bf-e0b49da768de-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6927b3e0-554a-44a8-a3bf-e0b49da768de" (UID: "6927b3e0-554a-44a8-a3bf-e0b49da768de"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:32:25.388168 kubelet[2551]: I0805 22:32:25.386109 2551 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6927b3e0-554a-44a8-a3bf-e0b49da768de-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6927b3e0-554a-44a8-a3bf-e0b49da768de" (UID: "6927b3e0-554a-44a8-a3bf-e0b49da768de"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:32:25.388168 kubelet[2551]: I0805 22:32:25.387122 2551 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6927b3e0-554a-44a8-a3bf-e0b49da768de-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6927b3e0-554a-44a8-a3bf-e0b49da768de" (UID: "6927b3e0-554a-44a8-a3bf-e0b49da768de"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:32:25.390777 kubelet[2551]: I0805 22:32:25.390572 2551 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6927b3e0-554a-44a8-a3bf-e0b49da768de-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6927b3e0-554a-44a8-a3bf-e0b49da768de" (UID: "6927b3e0-554a-44a8-a3bf-e0b49da768de"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:32:25.390777 kubelet[2551]: I0805 22:32:25.390674 2551 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6927b3e0-554a-44a8-a3bf-e0b49da768de-kube-api-access-2mjsp" (OuterVolumeSpecName: "kube-api-access-2mjsp") pod "6927b3e0-554a-44a8-a3bf-e0b49da768de" (UID: "6927b3e0-554a-44a8-a3bf-e0b49da768de"). 
InnerVolumeSpecName "kube-api-access-2mjsp". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 5 22:32:25.390777 kubelet[2551]: I0805 22:32:25.390744 2551 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6927b3e0-554a-44a8-a3bf-e0b49da768de-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6927b3e0-554a-44a8-a3bf-e0b49da768de" (UID: "6927b3e0-554a-44a8-a3bf-e0b49da768de"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 5 22:32:25.392456 kubelet[2551]: I0805 22:32:25.391622 2551 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b41f58c-af9d-4366-a224-8d9da879b256-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9b41f58c-af9d-4366-a224-8d9da879b256" (UID: "9b41f58c-af9d-4366-a224-8d9da879b256"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 5 22:32:25.392456 kubelet[2551]: I0805 22:32:25.392126 2551 scope.go:117] "RemoveContainer" containerID="d441f9c3ee15d58006ead34089acbb9d2920987a86075de2f5b7bd23ff15b404" Aug 5 22:32:25.393740 kubelet[2551]: I0805 22:32:25.393677 2551 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6927b3e0-554a-44a8-a3bf-e0b49da768de-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6927b3e0-554a-44a8-a3bf-e0b49da768de" (UID: "6927b3e0-554a-44a8-a3bf-e0b49da768de"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 5 22:32:25.393898 kubelet[2551]: I0805 22:32:25.393875 2551 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6927b3e0-554a-44a8-a3bf-e0b49da768de-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6927b3e0-554a-44a8-a3bf-e0b49da768de" (UID: "6927b3e0-554a-44a8-a3bf-e0b49da768de"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 5 22:32:25.395428 kubelet[2551]: I0805 22:32:25.395394 2551 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b41f58c-af9d-4366-a224-8d9da879b256-kube-api-access-gf9kq" (OuterVolumeSpecName: "kube-api-access-gf9kq") pod "9b41f58c-af9d-4366-a224-8d9da879b256" (UID: "9b41f58c-af9d-4366-a224-8d9da879b256"). InnerVolumeSpecName "kube-api-access-gf9kq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 5 22:32:25.395950 containerd[1448]: time="2024-08-05T22:32:25.395902474Z" level=info msg="RemoveContainer for \"d441f9c3ee15d58006ead34089acbb9d2920987a86075de2f5b7bd23ff15b404\"" Aug 5 22:32:25.400112 containerd[1448]: time="2024-08-05T22:32:25.399998620Z" level=info msg="RemoveContainer for \"d441f9c3ee15d58006ead34089acbb9d2920987a86075de2f5b7bd23ff15b404\" returns successfully" Aug 5 22:32:25.400658 kubelet[2551]: I0805 22:32:25.400622 2551 scope.go:117] "RemoveContainer" containerID="d441f9c3ee15d58006ead34089acbb9d2920987a86075de2f5b7bd23ff15b404" Aug 5 22:32:25.400942 containerd[1448]: time="2024-08-05T22:32:25.400860915Z" level=error msg="ContainerStatus for \"d441f9c3ee15d58006ead34089acbb9d2920987a86075de2f5b7bd23ff15b404\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d441f9c3ee15d58006ead34089acbb9d2920987a86075de2f5b7bd23ff15b404\": not found" Aug 5 22:32:25.403690 systemd[1]: Removed slice kubepods-burstable-pod6927b3e0_554a_44a8_a3bf_e0b49da768de.slice - libcontainer container kubepods-burstable-pod6927b3e0_554a_44a8_a3bf_e0b49da768de.slice. Aug 5 22:32:25.403828 systemd[1]: kubepods-burstable-pod6927b3e0_554a_44a8_a3bf_e0b49da768de.slice: Consumed 7.922s CPU time. Aug 5 22:32:25.408590 kubelet[2551]: E0805 22:32:25.408540 2551 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d441f9c3ee15d58006ead34089acbb9d2920987a86075de2f5b7bd23ff15b404\": not found" containerID="d441f9c3ee15d58006ead34089acbb9d2920987a86075de2f5b7bd23ff15b404" Aug 5 22:32:25.408761 kubelet[2551]: I0805 22:32:25.408590 2551 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d441f9c3ee15d58006ead34089acbb9d2920987a86075de2f5b7bd23ff15b404"} err="failed to get container status \"d441f9c3ee15d58006ead34089acbb9d2920987a86075de2f5b7bd23ff15b404\": rpc error: code = NotFound desc = an error occurred when try to find container \"d441f9c3ee15d58006ead34089acbb9d2920987a86075de2f5b7bd23ff15b404\": not found" Aug 5 22:32:25.408761 kubelet[2551]: I0805 22:32:25.408727 2551 scope.go:117] "RemoveContainer" containerID="6d2426c03810d5a097db39c8516333cd72d1df52d35aac67983cb68721d2e8f7" Aug 5 22:32:25.409975 containerd[1448]: time="2024-08-05T22:32:25.409931503Z" level=info msg="RemoveContainer for \"6d2426c03810d5a097db39c8516333cd72d1df52d35aac67983cb68721d2e8f7\"" Aug 5 22:32:25.414327 containerd[1448]: time="2024-08-05T22:32:25.414274117Z" level=info msg="RemoveContainer for \"6d2426c03810d5a097db39c8516333cd72d1df52d35aac67983cb68721d2e8f7\" returns successfully" Aug 5 22:32:25.414628 kubelet[2551]: I0805 22:32:25.414594 2551 scope.go:117] "RemoveContainer" containerID="9929b5ceba829bd9dad0dbb5e2e4517eb9540ddf15cb00a862311f98bec963ab" Aug 5 22:32:25.415855 containerd[1448]: time="2024-08-05T22:32:25.415803406Z" level=info msg="RemoveContainer for \"9929b5ceba829bd9dad0dbb5e2e4517eb9540ddf15cb00a862311f98bec963ab\"" Aug 5 22:32:25.422147 containerd[1448]: time="2024-08-05T22:32:25.422091218Z" level=info msg="RemoveContainer for \"9929b5ceba829bd9dad0dbb5e2e4517eb9540ddf15cb00a862311f98bec963ab\" returns successfully" Aug 5 22:32:25.422404 kubelet[2551]: I0805 22:32:25.422366 2551 scope.go:117] "RemoveContainer" containerID="46eb2b7e5d3a600a784b002288af7855fa7f6b623ead6d01678ef2dedd2216ac" Aug 5 22:32:25.433088 containerd[1448]: 
time="2024-08-05T22:32:25.433044267Z" level=info msg="RemoveContainer for \"46eb2b7e5d3a600a784b002288af7855fa7f6b623ead6d01678ef2dedd2216ac\"" Aug 5 22:32:25.437839 containerd[1448]: time="2024-08-05T22:32:25.437773694Z" level=info msg="RemoveContainer for \"46eb2b7e5d3a600a784b002288af7855fa7f6b623ead6d01678ef2dedd2216ac\" returns successfully" Aug 5 22:32:25.438105 kubelet[2551]: I0805 22:32:25.438064 2551 scope.go:117] "RemoveContainer" containerID="1cd0862494ad5502d25a1bd10410c71ff6adf72e474ff388d2e9527d49984bd6" Aug 5 22:32:25.439564 containerd[1448]: time="2024-08-05T22:32:25.439530534Z" level=info msg="RemoveContainer for \"1cd0862494ad5502d25a1bd10410c71ff6adf72e474ff388d2e9527d49984bd6\"" Aug 5 22:32:25.443573 containerd[1448]: time="2024-08-05T22:32:25.443529306Z" level=info msg="RemoveContainer for \"1cd0862494ad5502d25a1bd10410c71ff6adf72e474ff388d2e9527d49984bd6\" returns successfully" Aug 5 22:32:25.443786 kubelet[2551]: I0805 22:32:25.443752 2551 scope.go:117] "RemoveContainer" containerID="81fe8f4d9f38a4be82735b8053ee8628ebc45b9bd5bd5a180c4722ce594998af" Aug 5 22:32:25.444965 containerd[1448]: time="2024-08-05T22:32:25.444929712Z" level=info msg="RemoveContainer for \"81fe8f4d9f38a4be82735b8053ee8628ebc45b9bd5bd5a180c4722ce594998af\"" Aug 5 22:32:25.448880 containerd[1448]: time="2024-08-05T22:32:25.448826550Z" level=info msg="RemoveContainer for \"81fe8f4d9f38a4be82735b8053ee8628ebc45b9bd5bd5a180c4722ce594998af\" returns successfully" Aug 5 22:32:25.449063 kubelet[2551]: I0805 22:32:25.449040 2551 scope.go:117] "RemoveContainer" containerID="6d2426c03810d5a097db39c8516333cd72d1df52d35aac67983cb68721d2e8f7" Aug 5 22:32:25.449270 containerd[1448]: time="2024-08-05T22:32:25.449225466Z" level=error msg="ContainerStatus for \"6d2426c03810d5a097db39c8516333cd72d1df52d35aac67983cb68721d2e8f7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6d2426c03810d5a097db39c8516333cd72d1df52d35aac67983cb68721d2e8f7\": not found" Aug 5 22:32:25.449411 kubelet[2551]: E0805 22:32:25.449382 2551 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6d2426c03810d5a097db39c8516333cd72d1df52d35aac67983cb68721d2e8f7\": not found" containerID="6d2426c03810d5a097db39c8516333cd72d1df52d35aac67983cb68721d2e8f7" Aug 5 22:32:25.449489 kubelet[2551]: I0805 22:32:25.449412 2551 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6d2426c03810d5a097db39c8516333cd72d1df52d35aac67983cb68721d2e8f7"} err="failed to get container status \"6d2426c03810d5a097db39c8516333cd72d1df52d35aac67983cb68721d2e8f7\": rpc error: code = NotFound desc = an error occurred when try to find container \"6d2426c03810d5a097db39c8516333cd72d1df52d35aac67983cb68721d2e8f7\": not found" Aug 5 22:32:25.449489 kubelet[2551]: I0805 22:32:25.449458 2551 scope.go:117] "RemoveContainer" containerID="9929b5ceba829bd9dad0dbb5e2e4517eb9540ddf15cb00a862311f98bec963ab" Aug 5 22:32:25.449655 containerd[1448]: time="2024-08-05T22:32:25.449624262Z" level=error msg="ContainerStatus for \"9929b5ceba829bd9dad0dbb5e2e4517eb9540ddf15cb00a862311f98bec963ab\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9929b5ceba829bd9dad0dbb5e2e4517eb9540ddf15cb00a862311f98bec963ab\": not found" Aug 5 22:32:25.449760 kubelet[2551]: E0805 22:32:25.449739 2551 remote_runtime.go:432] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = an error occurred when try to find container \"9929b5ceba829bd9dad0dbb5e2e4517eb9540ddf15cb00a862311f98bec963ab\": not found" containerID="9929b5ceba829bd9dad0dbb5e2e4517eb9540ddf15cb00a862311f98bec963ab" Aug 5 22:32:25.449796 kubelet[2551]: I0805 22:32:25.449765 2551 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9929b5ceba829bd9dad0dbb5e2e4517eb9540ddf15cb00a862311f98bec963ab"} err="failed to get container status \"9929b5ceba829bd9dad0dbb5e2e4517eb9540ddf15cb00a862311f98bec963ab\": rpc error: code = NotFound desc = an error occurred when try to find container \"9929b5ceba829bd9dad0dbb5e2e4517eb9540ddf15cb00a862311f98bec963ab\": not found" Aug 5 22:32:25.449796 kubelet[2551]: I0805 22:32:25.449781 2551 scope.go:117] "RemoveContainer" containerID="46eb2b7e5d3a600a784b002288af7855fa7f6b623ead6d01678ef2dedd2216ac" Aug 5 22:32:25.449992 containerd[1448]: time="2024-08-05T22:32:25.449954238Z" level=error msg="ContainerStatus for \"46eb2b7e5d3a600a784b002288af7855fa7f6b623ead6d01678ef2dedd2216ac\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"46eb2b7e5d3a600a784b002288af7855fa7f6b623ead6d01678ef2dedd2216ac\": not found" Aug 5 22:32:25.450118 kubelet[2551]: E0805 22:32:25.450078 2551 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"46eb2b7e5d3a600a784b002288af7855fa7f6b623ead6d01678ef2dedd2216ac\": not found" containerID="46eb2b7e5d3a600a784b002288af7855fa7f6b623ead6d01678ef2dedd2216ac" Aug 5 22:32:25.450118 kubelet[2551]: I0805 22:32:25.450098 2551 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"46eb2b7e5d3a600a784b002288af7855fa7f6b623ead6d01678ef2dedd2216ac"} err="failed to get container status \"46eb2b7e5d3a600a784b002288af7855fa7f6b623ead6d01678ef2dedd2216ac\": rpc error: code = NotFound desc = an error occurred when try to find container \"46eb2b7e5d3a600a784b002288af7855fa7f6b623ead6d01678ef2dedd2216ac\": not found" Aug 5 22:32:25.450118 kubelet[2551]: I0805 22:32:25.450111 2551 scope.go:117] "RemoveContainer" containerID="1cd0862494ad5502d25a1bd10410c71ff6adf72e474ff388d2e9527d49984bd6" Aug 5 22:32:25.450347 containerd[1448]: time="2024-08-05T22:32:25.450254958Z" level=error msg="ContainerStatus for \"1cd0862494ad5502d25a1bd10410c71ff6adf72e474ff388d2e9527d49984bd6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1cd0862494ad5502d25a1bd10410c71ff6adf72e474ff388d2e9527d49984bd6\": not found" Aug 5 22:32:25.450394 kubelet[2551]: E0805 22:32:25.450374 2551 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1cd0862494ad5502d25a1bd10410c71ff6adf72e474ff388d2e9527d49984bd6\": not found" containerID="1cd0862494ad5502d25a1bd10410c71ff6adf72e474ff388d2e9527d49984bd6" Aug 5 22:32:25.450454 kubelet[2551]: I0805 22:32:25.450415 2551 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1cd0862494ad5502d25a1bd10410c71ff6adf72e474ff388d2e9527d49984bd6"} err="failed to get container status \"1cd0862494ad5502d25a1bd10410c71ff6adf72e474ff388d2e9527d49984bd6\": rpc error: code = NotFound desc = an error occurred when try to find container \"1cd0862494ad5502d25a1bd10410c71ff6adf72e474ff388d2e9527d49984bd6\": not 
found" Aug 5 22:32:25.450454 kubelet[2551]: I0805 22:32:25.450432 2551 scope.go:117] "RemoveContainer" containerID="81fe8f4d9f38a4be82735b8053ee8628ebc45b9bd5bd5a180c4722ce594998af" Aug 5 22:32:25.450609 containerd[1448]: time="2024-08-05T22:32:25.450581788Z" level=error msg="ContainerStatus for \"81fe8f4d9f38a4be82735b8053ee8628ebc45b9bd5bd5a180c4722ce594998af\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"81fe8f4d9f38a4be82735b8053ee8628ebc45b9bd5bd5a180c4722ce594998af\": not found" Aug 5 22:32:25.450688 kubelet[2551]: E0805 22:32:25.450664 2551 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"81fe8f4d9f38a4be82735b8053ee8628ebc45b9bd5bd5a180c4722ce594998af\": not found" containerID="81fe8f4d9f38a4be82735b8053ee8628ebc45b9bd5bd5a180c4722ce594998af" Aug 5 22:32:25.450737 kubelet[2551]: I0805 22:32:25.450688 2551 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"81fe8f4d9f38a4be82735b8053ee8628ebc45b9bd5bd5a180c4722ce594998af"} err="failed to get container status \"81fe8f4d9f38a4be82735b8053ee8628ebc45b9bd5bd5a180c4722ce594998af\": rpc error: code = NotFound desc = an error occurred when try to find container \"81fe8f4d9f38a4be82735b8053ee8628ebc45b9bd5bd5a180c4722ce594998af\": not found" Aug 5 22:32:25.487113 kubelet[2551]: I0805 22:32:25.487057 2551 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-2mjsp\" (UniqueName: \"kubernetes.io/projected/6927b3e0-554a-44a8-a3bf-e0b49da768de-kube-api-access-2mjsp\") on node \"localhost\" DevicePath \"\"" Aug 5 22:32:25.487113 kubelet[2551]: I0805 22:32:25.487094 2551 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6927b3e0-554a-44a8-a3bf-e0b49da768de-cilium-run\") on node \"localhost\" DevicePath \"\"" Aug 5 22:32:25.487113 kubelet[2551]: I0805 22:32:25.487106 2551 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6927b3e0-554a-44a8-a3bf-e0b49da768de-hubble-tls\") on node \"localhost\" DevicePath \"\"" Aug 5 22:32:25.487113 kubelet[2551]: I0805 22:32:25.487122 2551 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9b41f58c-af9d-4366-a224-8d9da879b256-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Aug 5 22:32:25.487113 kubelet[2551]: I0805 22:32:25.487136 2551 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6927b3e0-554a-44a8-a3bf-e0b49da768de-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Aug 5 22:32:25.487471 kubelet[2551]: I0805 22:32:25.487147 2551 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6927b3e0-554a-44a8-a3bf-e0b49da768de-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Aug 5 22:32:25.487471 kubelet[2551]: I0805 22:32:25.487156 2551 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6927b3e0-554a-44a8-a3bf-e0b49da768de-bpf-maps\") on node \"localhost\" DevicePath \"\"" Aug 5 22:32:25.487471 kubelet[2551]: I0805 22:32:25.487167 2551 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6927b3e0-554a-44a8-a3bf-e0b49da768de-clustermesh-secrets\") on node 
\"localhost\" DevicePath \"\"" Aug 5 22:32:25.487471 kubelet[2551]: I0805 22:32:25.487174 2551 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6927b3e0-554a-44a8-a3bf-e0b49da768de-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Aug 5 22:32:25.487471 kubelet[2551]: I0805 22:32:25.487182 2551 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-gf9kq\" (UniqueName: \"kubernetes.io/projected/9b41f58c-af9d-4366-a224-8d9da879b256-kube-api-access-gf9kq\") on node \"localhost\" DevicePath \"\"" Aug 5 22:32:25.487471 kubelet[2551]: I0805 22:32:25.487191 2551 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6927b3e0-554a-44a8-a3bf-e0b49da768de-lib-modules\") on node \"localhost\" DevicePath \"\"" Aug 5 22:32:25.698519 systemd[1]: Removed slice kubepods-besteffort-pod9b41f58c_af9d_4366_a224_8d9da879b256.slice - libcontainer container kubepods-besteffort-pod9b41f58c_af9d_4366_a224_8d9da879b256.slice. Aug 5 22:32:25.943083 kubelet[2551]: I0805 22:32:25.943025 2551 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6927b3e0-554a-44a8-a3bf-e0b49da768de" path="/var/lib/kubelet/pods/6927b3e0-554a-44a8-a3bf-e0b49da768de/volumes" Aug 5 22:32:25.943995 kubelet[2551]: I0805 22:32:25.943963 2551 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b41f58c-af9d-4366-a224-8d9da879b256" path="/var/lib/kubelet/pods/9b41f58c-af9d-4366-a224-8d9da879b256/volumes" Aug 5 22:32:26.046392 kubelet[2551]: E0805 22:32:26.046216 2551 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 5 22:32:26.074405 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-036bf759f345d109c5ea7c3f94bcbcdcb2cbe6cdacead45df90a985d543bc3dd-rootfs.mount: Deactivated successfully. Aug 5 22:32:26.074601 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-036bf759f345d109c5ea7c3f94bcbcdcb2cbe6cdacead45df90a985d543bc3dd-shm.mount: Deactivated successfully. Aug 5 22:32:26.074715 systemd[1]: var-lib-kubelet-pods-9b41f58c\x2daf9d\x2d4366\x2da224\x2d8d9da879b256-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgf9kq.mount: Deactivated successfully. Aug 5 22:32:26.074813 systemd[1]: var-lib-kubelet-pods-6927b3e0\x2d554a\x2d44a8\x2da3bf\x2de0b49da768de-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2mjsp.mount: Deactivated successfully. Aug 5 22:32:26.074932 systemd[1]: var-lib-kubelet-pods-6927b3e0\x2d554a\x2d44a8\x2da3bf\x2de0b49da768de-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 5 22:32:26.075041 systemd[1]: var-lib-kubelet-pods-6927b3e0\x2d554a\x2d44a8\x2da3bf\x2de0b49da768de-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 5 22:32:27.003500 sshd[4217]: pam_unix(sshd:session): session closed for user core Aug 5 22:32:27.016223 systemd[1]: sshd@26-10.0.0.102:22-10.0.0.1:42234.service: Deactivated successfully. Aug 5 22:32:27.018621 systemd[1]: session-27.scope: Deactivated successfully. Aug 5 22:32:27.019880 systemd-logind[1431]: Session 27 logged out. Waiting for processes to exit. Aug 5 22:32:27.026959 systemd[1]: Started sshd@27-10.0.0.102:22-10.0.0.1:42238.service - OpenSSH per-connection server daemon (10.0.0.1:42238). Aug 5 22:32:27.029258 systemd-logind[1431]: Removed session 27. 
Aug 5 22:32:27.067929 sshd[4378]: Accepted publickey for core from 10.0.0.1 port 42238 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:32:27.070015 sshd[4378]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:32:27.075251 systemd-logind[1431]: New session 28 of user core. Aug 5 22:32:27.088716 systemd[1]: Started session-28.scope - Session 28 of User core. Aug 5 22:32:27.599600 sshd[4378]: pam_unix(sshd:session): session closed for user core Aug 5 22:32:27.612491 kubelet[2551]: I0805 22:32:27.612238 2551 topology_manager.go:215] "Topology Admit Handler" podUID="e0db96aa-bc78-4748-b1f6-53f78a23c757" podNamespace="kube-system" podName="cilium-mq6mb" Aug 5 22:32:27.612491 kubelet[2551]: E0805 22:32:27.612319 2551 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6927b3e0-554a-44a8-a3bf-e0b49da768de" containerName="mount-bpf-fs" Aug 5 22:32:27.612491 kubelet[2551]: E0805 22:32:27.612329 2551 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6927b3e0-554a-44a8-a3bf-e0b49da768de" containerName="cilium-agent" Aug 5 22:32:27.612491 kubelet[2551]: E0805 22:32:27.612336 2551 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6927b3e0-554a-44a8-a3bf-e0b49da768de" containerName="mount-cgroup" Aug 5 22:32:27.612491 kubelet[2551]: E0805 22:32:27.612343 2551 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6927b3e0-554a-44a8-a3bf-e0b49da768de" containerName="apply-sysctl-overwrites" Aug 5 22:32:27.612491 kubelet[2551]: E0805 22:32:27.612349 2551 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6927b3e0-554a-44a8-a3bf-e0b49da768de" containerName="clean-cilium-state" Aug 5 22:32:27.612491 kubelet[2551]: E0805 22:32:27.612355 2551 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9b41f58c-af9d-4366-a224-8d9da879b256" containerName="cilium-operator" Aug 5 22:32:27.612491 kubelet[2551]: I0805 22:32:27.612377 2551 memory_manager.go:354] "RemoveStaleState removing state" podUID="6927b3e0-554a-44a8-a3bf-e0b49da768de" containerName="cilium-agent" Aug 5 22:32:27.612491 kubelet[2551]: I0805 22:32:27.612384 2551 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b41f58c-af9d-4366-a224-8d9da879b256" containerName="cilium-operator" Aug 5 22:32:27.615733 systemd[1]: sshd@27-10.0.0.102:22-10.0.0.1:42238.service: Deactivated successfully. Aug 5 22:32:27.621075 systemd[1]: session-28.scope: Deactivated successfully. Aug 5 22:32:27.625553 systemd-logind[1431]: Session 28 logged out. Waiting for processes to exit. Aug 5 22:32:27.636601 systemd[1]: Started sshd@28-10.0.0.102:22-10.0.0.1:42252.service - OpenSSH per-connection server daemon (10.0.0.1:42252). Aug 5 22:32:27.637961 systemd-logind[1431]: Removed session 28. Aug 5 22:32:27.648792 systemd[1]: Created slice kubepods-burstable-pode0db96aa_bc78_4748_b1f6_53f78a23c757.slice - libcontainer container kubepods-burstable-pode0db96aa_bc78_4748_b1f6_53f78a23c757.slice. Aug 5 22:32:27.678633 sshd[4392]: Accepted publickey for core from 10.0.0.1 port 42252 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:32:27.680846 sshd[4392]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:32:27.685718 systemd-logind[1431]: New session 29 of user core. Aug 5 22:32:27.699741 systemd[1]: Started session-29.scope - Session 29 of User core. 
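The slice names in these entries follow the kubelet's systemd cgroup-driver convention: kubepods-<qos>-pod<uid>.slice, with the pod UID's dashes turned into underscores so the name remains a valid systemd unit. A sketch of that mapping; the Guaranteed case (which drops the QoS segment) comes from the general kubelet convention rather than from anything in this log.

    #!/usr/bin/env python3
    def pod_slice(uid: str, qos: str) -> str:
        """Build the kubepods slice name used by the systemd cgroup driver."""
        qos = qos.lower()
        prefix = "kubepods" if qos == "guaranteed" else f"kubepods-{qos}"
        return f"{prefix}-pod{uid.replace('-', '_')}.slice"

    if __name__ == "__main__":
        # The burstable cilium pods and the best-effort cilium-operator pod seen above.
        print(pod_slice("6927b3e0-554a-44a8-a3bf-e0b49da768de", "Burstable"))
        print(pod_slice("9b41f58c-af9d-4366-a224-8d9da879b256", "BestEffort"))
        print(pod_slice("e0db96aa-bc78-4748-b1f6-53f78a23c757", "Burstable"))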
Aug 5 22:32:27.702860 kubelet[2551]: I0805 22:32:27.702818 2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e0db96aa-bc78-4748-b1f6-53f78a23c757-lib-modules\") pod \"cilium-mq6mb\" (UID: \"e0db96aa-bc78-4748-b1f6-53f78a23c757\") " pod="kube-system/cilium-mq6mb" Aug 5 22:32:27.702970 kubelet[2551]: I0805 22:32:27.702890 2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e0db96aa-bc78-4748-b1f6-53f78a23c757-host-proc-sys-net\") pod \"cilium-mq6mb\" (UID: \"e0db96aa-bc78-4748-b1f6-53f78a23c757\") " pod="kube-system/cilium-mq6mb" Aug 5 22:32:27.702970 kubelet[2551]: I0805 22:32:27.702914 2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e0db96aa-bc78-4748-b1f6-53f78a23c757-clustermesh-secrets\") pod \"cilium-mq6mb\" (UID: \"e0db96aa-bc78-4748-b1f6-53f78a23c757\") " pod="kube-system/cilium-mq6mb" Aug 5 22:32:27.702970 kubelet[2551]: I0805 22:32:27.702936 2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e0db96aa-bc78-4748-b1f6-53f78a23c757-host-proc-sys-kernel\") pod \"cilium-mq6mb\" (UID: \"e0db96aa-bc78-4748-b1f6-53f78a23c757\") " pod="kube-system/cilium-mq6mb" Aug 5 22:32:27.702970 kubelet[2551]: I0805 22:32:27.702956 2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e0db96aa-bc78-4748-b1f6-53f78a23c757-cilium-cgroup\") pod \"cilium-mq6mb\" (UID: \"e0db96aa-bc78-4748-b1f6-53f78a23c757\") " pod="kube-system/cilium-mq6mb" Aug 5 22:32:27.703189 kubelet[2551]: I0805 22:32:27.702977 2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e0db96aa-bc78-4748-b1f6-53f78a23c757-etc-cni-netd\") pod \"cilium-mq6mb\" (UID: \"e0db96aa-bc78-4748-b1f6-53f78a23c757\") " pod="kube-system/cilium-mq6mb" Aug 5 22:32:27.703189 kubelet[2551]: I0805 22:32:27.702999 2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn5fc\" (UniqueName: \"kubernetes.io/projected/e0db96aa-bc78-4748-b1f6-53f78a23c757-kube-api-access-hn5fc\") pod \"cilium-mq6mb\" (UID: \"e0db96aa-bc78-4748-b1f6-53f78a23c757\") " pod="kube-system/cilium-mq6mb" Aug 5 22:32:27.703189 kubelet[2551]: I0805 22:32:27.703019 2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e0db96aa-bc78-4748-b1f6-53f78a23c757-hubble-tls\") pod \"cilium-mq6mb\" (UID: \"e0db96aa-bc78-4748-b1f6-53f78a23c757\") " pod="kube-system/cilium-mq6mb" Aug 5 22:32:27.703189 kubelet[2551]: I0805 22:32:27.703039 2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e0db96aa-bc78-4748-b1f6-53f78a23c757-cilium-config-path\") pod \"cilium-mq6mb\" (UID: \"e0db96aa-bc78-4748-b1f6-53f78a23c757\") " pod="kube-system/cilium-mq6mb" Aug 5 22:32:27.703189 kubelet[2551]: I0805 22:32:27.703104 2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e0db96aa-bc78-4748-b1f6-53f78a23c757-cilium-ipsec-secrets\") pod \"cilium-mq6mb\" (UID: \"e0db96aa-bc78-4748-b1f6-53f78a23c757\") " pod="kube-system/cilium-mq6mb" Aug 5 22:32:27.703189 kubelet[2551]: I0805 22:32:27.703126 2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e0db96aa-bc78-4748-b1f6-53f78a23c757-cilium-run\") pod \"cilium-mq6mb\" (UID: \"e0db96aa-bc78-4748-b1f6-53f78a23c757\") " pod="kube-system/cilium-mq6mb" Aug 5 22:32:27.703376 kubelet[2551]: I0805 22:32:27.703160 2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e0db96aa-bc78-4748-b1f6-53f78a23c757-cni-path\") pod \"cilium-mq6mb\" (UID: \"e0db96aa-bc78-4748-b1f6-53f78a23c757\") " pod="kube-system/cilium-mq6mb" Aug 5 22:32:27.703376 kubelet[2551]: I0805 22:32:27.703183 2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e0db96aa-bc78-4748-b1f6-53f78a23c757-hostproc\") pod \"cilium-mq6mb\" (UID: \"e0db96aa-bc78-4748-b1f6-53f78a23c757\") " pod="kube-system/cilium-mq6mb" Aug 5 22:32:27.703376 kubelet[2551]: I0805 22:32:27.703204 2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e0db96aa-bc78-4748-b1f6-53f78a23c757-bpf-maps\") pod \"cilium-mq6mb\" (UID: \"e0db96aa-bc78-4748-b1f6-53f78a23c757\") " pod="kube-system/cilium-mq6mb" Aug 5 22:32:27.703376 kubelet[2551]: I0805 22:32:27.703223 2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e0db96aa-bc78-4748-b1f6-53f78a23c757-xtables-lock\") pod \"cilium-mq6mb\" (UID: \"e0db96aa-bc78-4748-b1f6-53f78a23c757\") " pod="kube-system/cilium-mq6mb" Aug 5 22:32:27.754218 sshd[4392]: pam_unix(sshd:session): session closed for user core Aug 5 22:32:27.762502 systemd[1]: sshd@28-10.0.0.102:22-10.0.0.1:42252.service: Deactivated successfully. Aug 5 22:32:27.764459 systemd[1]: session-29.scope: Deactivated successfully. Aug 5 22:32:27.766263 systemd-logind[1431]: Session 29 logged out. Waiting for processes to exit. Aug 5 22:32:27.775761 systemd[1]: Started sshd@29-10.0.0.102:22-10.0.0.1:42254.service - OpenSSH per-connection server daemon (10.0.0.1:42254). Aug 5 22:32:27.778019 systemd-logind[1431]: Removed session 29. Aug 5 22:32:27.810128 sshd[4402]: Accepted publickey for core from 10.0.0.1 port 42254 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:32:27.813510 sshd[4402]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:32:27.832401 systemd-logind[1431]: New session 30 of user core. Aug 5 22:32:27.836610 systemd[1]: Started session-30.scope - Session 30 of User core. 
Aug 5 22:32:27.954840 kubelet[2551]: E0805 22:32:27.954658 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:32:27.955467 containerd[1448]: time="2024-08-05T22:32:27.955273872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mq6mb,Uid:e0db96aa-bc78-4748-b1f6-53f78a23c757,Namespace:kube-system,Attempt:0,}" Aug 5 22:32:27.991895 containerd[1448]: time="2024-08-05T22:32:27.990825412Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:32:27.991895 containerd[1448]: time="2024-08-05T22:32:27.991844994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:32:27.991895 containerd[1448]: time="2024-08-05T22:32:27.991867087Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:32:27.992109 containerd[1448]: time="2024-08-05T22:32:27.991879610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:32:28.011685 systemd[1]: Started cri-containerd-035ff63597eebf65877b83282b9e749f91f2fc8f3deeca052f3cb417608cdd54.scope - libcontainer container 035ff63597eebf65877b83282b9e749f91f2fc8f3deeca052f3cb417608cdd54. Aug 5 22:32:28.044679 containerd[1448]: time="2024-08-05T22:32:28.043340637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mq6mb,Uid:e0db96aa-bc78-4748-b1f6-53f78a23c757,Namespace:kube-system,Attempt:0,} returns sandbox id \"035ff63597eebf65877b83282b9e749f91f2fc8f3deeca052f3cb417608cdd54\"" Aug 5 22:32:28.045341 kubelet[2551]: E0805 22:32:28.045286 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:32:28.048145 containerd[1448]: time="2024-08-05T22:32:28.047991291Z" level=info msg="CreateContainer within sandbox \"035ff63597eebf65877b83282b9e749f91f2fc8f3deeca052f3cb417608cdd54\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 5 22:32:28.069363 containerd[1448]: time="2024-08-05T22:32:28.069269470Z" level=info msg="CreateContainer within sandbox \"035ff63597eebf65877b83282b9e749f91f2fc8f3deeca052f3cb417608cdd54\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0ea5079df4710d9bd9b4fc55197de6a8de10a4ca4cbcc6d1910d8a8f966162ee\"" Aug 5 22:32:28.069990 containerd[1448]: time="2024-08-05T22:32:28.069953416Z" level=info msg="StartContainer for \"0ea5079df4710d9bd9b4fc55197de6a8de10a4ca4cbcc6d1910d8a8f966162ee\"" Aug 5 22:32:28.110735 systemd[1]: Started cri-containerd-0ea5079df4710d9bd9b4fc55197de6a8de10a4ca4cbcc6d1910d8a8f966162ee.scope - libcontainer container 0ea5079df4710d9bd9b4fc55197de6a8de10a4ca4cbcc6d1910d8a8f966162ee. Aug 5 22:32:28.186392 containerd[1448]: time="2024-08-05T22:32:28.186334594Z" level=info msg="StartContainer for \"0ea5079df4710d9bd9b4fc55197de6a8de10a4ca4cbcc6d1910d8a8f966162ee\" returns successfully" Aug 5 22:32:28.187634 systemd[1]: cri-containerd-0ea5079df4710d9bd9b4fc55197de6a8de10a4ca4cbcc6d1910d8a8f966162ee.scope: Deactivated successfully. 
Aug 5 22:32:28.324058 kubelet[2551]: I0805 22:32:28.323882 2551 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-08-05T22:32:28Z","lastTransitionTime":"2024-08-05T22:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Aug 5 22:32:28.407059 kubelet[2551]: E0805 22:32:28.407004 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:32:28.453772 containerd[1448]: time="2024-08-05T22:32:28.453705034Z" level=info msg="shim disconnected" id=0ea5079df4710d9bd9b4fc55197de6a8de10a4ca4cbcc6d1910d8a8f966162ee namespace=k8s.io Aug 5 22:32:28.453772 containerd[1448]: time="2024-08-05T22:32:28.453763725Z" level=warning msg="cleaning up after shim disconnected" id=0ea5079df4710d9bd9b4fc55197de6a8de10a4ca4cbcc6d1910d8a8f966162ee namespace=k8s.io Aug 5 22:32:28.453772 containerd[1448]: time="2024-08-05T22:32:28.453773524Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:32:29.411219 kubelet[2551]: E0805 22:32:29.410307 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:32:29.413344 containerd[1448]: time="2024-08-05T22:32:29.413280972Z" level=info msg="CreateContainer within sandbox \"035ff63597eebf65877b83282b9e749f91f2fc8f3deeca052f3cb417608cdd54\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 5 22:32:29.435025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3053782584.mount: Deactivated successfully. Aug 5 22:32:29.440460 containerd[1448]: time="2024-08-05T22:32:29.440385256Z" level=info msg="CreateContainer within sandbox \"035ff63597eebf65877b83282b9e749f91f2fc8f3deeca052f3cb417608cdd54\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4a159c3d4a0f03d35c42af5701b7033847c5bc1ccc4de411d801dfff68498f94\"" Aug 5 22:32:29.441064 containerd[1448]: time="2024-08-05T22:32:29.441027443Z" level=info msg="StartContainer for \"4a159c3d4a0f03d35c42af5701b7033847c5bc1ccc4de411d801dfff68498f94\"" Aug 5 22:32:29.481608 systemd[1]: Started cri-containerd-4a159c3d4a0f03d35c42af5701b7033847c5bc1ccc4de411d801dfff68498f94.scope - libcontainer container 4a159c3d4a0f03d35c42af5701b7033847c5bc1ccc4de411d801dfff68498f94. Aug 5 22:32:29.517955 containerd[1448]: time="2024-08-05T22:32:29.517899292Z" level=info msg="StartContainer for \"4a159c3d4a0f03d35c42af5701b7033847c5bc1ccc4de411d801dfff68498f94\" returns successfully" Aug 5 22:32:29.526040 systemd[1]: cri-containerd-4a159c3d4a0f03d35c42af5701b7033847c5bc1ccc4de411d801dfff68498f94.scope: Deactivated successfully. 
Aug 5 22:32:29.564247 containerd[1448]: time="2024-08-05T22:32:29.564173935Z" level=info msg="shim disconnected" id=4a159c3d4a0f03d35c42af5701b7033847c5bc1ccc4de411d801dfff68498f94 namespace=k8s.io Aug 5 22:32:29.564247 containerd[1448]: time="2024-08-05T22:32:29.564243407Z" level=warning msg="cleaning up after shim disconnected" id=4a159c3d4a0f03d35c42af5701b7033847c5bc1ccc4de411d801dfff68498f94 namespace=k8s.io Aug 5 22:32:29.564247 containerd[1448]: time="2024-08-05T22:32:29.564254248Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:32:29.809422 systemd[1]: run-containerd-runc-k8s.io-4a159c3d4a0f03d35c42af5701b7033847c5bc1ccc4de411d801dfff68498f94-runc.yNPZ43.mount: Deactivated successfully. Aug 5 22:32:29.809554 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a159c3d4a0f03d35c42af5701b7033847c5bc1ccc4de411d801dfff68498f94-rootfs.mount: Deactivated successfully. Aug 5 22:32:30.413121 kubelet[2551]: E0805 22:32:30.413067 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:32:30.416060 containerd[1448]: time="2024-08-05T22:32:30.416009607Z" level=info msg="CreateContainer within sandbox \"035ff63597eebf65877b83282b9e749f91f2fc8f3deeca052f3cb417608cdd54\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 5 22:32:30.440724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1050620552.mount: Deactivated successfully. Aug 5 22:32:30.443953 containerd[1448]: time="2024-08-05T22:32:30.443909213Z" level=info msg="CreateContainer within sandbox \"035ff63597eebf65877b83282b9e749f91f2fc8f3deeca052f3cb417608cdd54\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a6d608dc9794a7b84749059909216470a6d6a3a92991a9f25f9b87a50d64f2f8\"" Aug 5 22:32:30.444622 containerd[1448]: time="2024-08-05T22:32:30.444573001Z" level=info msg="StartContainer for \"a6d608dc9794a7b84749059909216470a6d6a3a92991a9f25f9b87a50d64f2f8\"" Aug 5 22:32:30.486723 systemd[1]: Started cri-containerd-a6d608dc9794a7b84749059909216470a6d6a3a92991a9f25f9b87a50d64f2f8.scope - libcontainer container a6d608dc9794a7b84749059909216470a6d6a3a92991a9f25f9b87a50d64f2f8. Aug 5 22:32:30.623605 systemd[1]: cri-containerd-a6d608dc9794a7b84749059909216470a6d6a3a92991a9f25f9b87a50d64f2f8.scope: Deactivated successfully. Aug 5 22:32:30.627779 containerd[1448]: time="2024-08-05T22:32:30.627738573Z" level=info msg="StartContainer for \"a6d608dc9794a7b84749059909216470a6d6a3a92991a9f25f9b87a50d64f2f8\" returns successfully" Aug 5 22:32:30.782606 containerd[1448]: time="2024-08-05T22:32:30.782382309Z" level=info msg="shim disconnected" id=a6d608dc9794a7b84749059909216470a6d6a3a92991a9f25f9b87a50d64f2f8 namespace=k8s.io Aug 5 22:32:30.782606 containerd[1448]: time="2024-08-05T22:32:30.782487629Z" level=warning msg="cleaning up after shim disconnected" id=a6d608dc9794a7b84749059909216470a6d6a3a92991a9f25f9b87a50d64f2f8 namespace=k8s.io Aug 5 22:32:30.782606 containerd[1448]: time="2024-08-05T22:32:30.782501005Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:32:30.809512 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a6d608dc9794a7b84749059909216470a6d6a3a92991a9f25f9b87a50d64f2f8-rootfs.mount: Deactivated successfully. 
Aug 5 22:32:31.047703 kubelet[2551]: E0805 22:32:31.047548 2551 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 5 22:32:31.417376 kubelet[2551]: E0805 22:32:31.417337 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:32:31.419313 containerd[1448]: time="2024-08-05T22:32:31.419242216Z" level=info msg="CreateContainer within sandbox \"035ff63597eebf65877b83282b9e749f91f2fc8f3deeca052f3cb417608cdd54\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 5 22:32:31.753928 containerd[1448]: time="2024-08-05T22:32:31.753767168Z" level=info msg="CreateContainer within sandbox \"035ff63597eebf65877b83282b9e749f91f2fc8f3deeca052f3cb417608cdd54\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"34090e53707a43e38b8e6398a992ba2a83a56c3d0dc0024b9557858162306efb\"" Aug 5 22:32:31.754895 containerd[1448]: time="2024-08-05T22:32:31.754836204Z" level=info msg="StartContainer for \"34090e53707a43e38b8e6398a992ba2a83a56c3d0dc0024b9557858162306efb\"" Aug 5 22:32:31.793675 systemd[1]: Started cri-containerd-34090e53707a43e38b8e6398a992ba2a83a56c3d0dc0024b9557858162306efb.scope - libcontainer container 34090e53707a43e38b8e6398a992ba2a83a56c3d0dc0024b9557858162306efb. Aug 5 22:32:31.821308 systemd[1]: cri-containerd-34090e53707a43e38b8e6398a992ba2a83a56c3d0dc0024b9557858162306efb.scope: Deactivated successfully. Aug 5 22:32:31.886697 containerd[1448]: time="2024-08-05T22:32:31.886619787Z" level=info msg="StartContainer for \"34090e53707a43e38b8e6398a992ba2a83a56c3d0dc0024b9557858162306efb\" returns successfully" Aug 5 22:32:31.909885 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34090e53707a43e38b8e6398a992ba2a83a56c3d0dc0024b9557858162306efb-rootfs.mount: Deactivated successfully. 
Aug 5 22:32:32.016228 containerd[1448]: time="2024-08-05T22:32:32.016017916Z" level=info msg="shim disconnected" id=34090e53707a43e38b8e6398a992ba2a83a56c3d0dc0024b9557858162306efb namespace=k8s.io Aug 5 22:32:32.016228 containerd[1448]: time="2024-08-05T22:32:32.016088400Z" level=warning msg="cleaning up after shim disconnected" id=34090e53707a43e38b8e6398a992ba2a83a56c3d0dc0024b9557858162306efb namespace=k8s.io Aug 5 22:32:32.016228 containerd[1448]: time="2024-08-05T22:32:32.016100142Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:32:32.426134 kubelet[2551]: E0805 22:32:32.426070 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:32:32.428773 containerd[1448]: time="2024-08-05T22:32:32.428710523Z" level=info msg="CreateContainer within sandbox \"035ff63597eebf65877b83282b9e749f91f2fc8f3deeca052f3cb417608cdd54\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 5 22:32:32.455343 containerd[1448]: time="2024-08-05T22:32:32.455250280Z" level=info msg="CreateContainer within sandbox \"035ff63597eebf65877b83282b9e749f91f2fc8f3deeca052f3cb417608cdd54\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0a55c15e0e1ebde8e5322d7bbc651049f9fc6efda58c3d228739380aae6bcfc3\"" Aug 5 22:32:32.456084 containerd[1448]: time="2024-08-05T22:32:32.456028654Z" level=info msg="StartContainer for \"0a55c15e0e1ebde8e5322d7bbc651049f9fc6efda58c3d228739380aae6bcfc3\"" Aug 5 22:32:32.491591 systemd[1]: Started cri-containerd-0a55c15e0e1ebde8e5322d7bbc651049f9fc6efda58c3d228739380aae6bcfc3.scope - libcontainer container 0a55c15e0e1ebde8e5322d7bbc651049f9fc6efda58c3d228739380aae6bcfc3. 
Aug 5 22:32:32.527090 containerd[1448]: time="2024-08-05T22:32:32.527039068Z" level=info msg="StartContainer for \"0a55c15e0e1ebde8e5322d7bbc651049f9fc6efda58c3d228739380aae6bcfc3\" returns successfully" Aug 5 22:32:32.981479 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Aug 5 22:32:33.431953 kubelet[2551]: E0805 22:32:33.431912 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:32:33.527460 kubelet[2551]: I0805 22:32:33.523633 2551 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mq6mb" podStartSLOduration=6.523614822 podStartE2EDuration="6.523614822s" podCreationTimestamp="2024-08-05 22:32:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:32:33.523359208 +0000 UTC m=+107.929820948" watchObservedRunningTime="2024-08-05 22:32:33.523614822 +0000 UTC m=+107.930076552" Aug 5 22:32:33.939706 kubelet[2551]: E0805 22:32:33.939615 2551 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-xmqcs" podUID="63d645a1-cfd4-47cd-bc50-648fd1707296" Aug 5 22:32:34.433677 kubelet[2551]: E0805 22:32:34.433617 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:32:35.939576 kubelet[2551]: E0805 22:32:35.939502 2551 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-xmqcs" podUID="63d645a1-cfd4-47cd-bc50-648fd1707296" Aug 5 22:32:36.296770 systemd-networkd[1384]: lxc_health: Link UP Aug 5 22:32:36.305888 systemd-networkd[1384]: lxc_health: Gained carrier Aug 5 22:32:36.399362 systemd[1]: run-containerd-runc-k8s.io-0a55c15e0e1ebde8e5322d7bbc651049f9fc6efda58c3d228739380aae6bcfc3-runc.qLbwX8.mount: Deactivated successfully. 
Aug 5 22:32:37.498633 systemd-networkd[1384]: lxc_health: Gained IPv6LL Aug 5 22:32:37.940043 kubelet[2551]: E0805 22:32:37.939983 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:32:37.957560 kubelet[2551]: E0805 22:32:37.957494 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:32:38.442996 kubelet[2551]: E0805 22:32:38.442827 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:32:39.444994 kubelet[2551]: E0805 22:32:39.444946 2551 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:32:42.840034 sshd[4402]: pam_unix(sshd:session): session closed for user core Aug 5 22:32:42.844497 systemd[1]: sshd@29-10.0.0.102:22-10.0.0.1:42254.service: Deactivated successfully. Aug 5 22:32:42.846617 systemd[1]: session-30.scope: Deactivated successfully. Aug 5 22:32:42.847356 systemd-logind[1431]: Session 30 logged out. Waiting for processes to exit. Aug 5 22:32:42.848367 systemd-logind[1431]: Removed session 30.