Jun 25 18:40:22.925591 kernel: Linux version 6.6.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Tue Jun 25 17:21:28 -00 2024
Jun 25 18:40:22.925612 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8
Jun 25 18:40:22.925623 kernel: BIOS-provided physical RAM map:
Jun 25 18:40:22.925630 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jun 25 18:40:22.925636 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jun 25 18:40:22.925642 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jun 25 18:40:22.925649 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdcfff] usable
Jun 25 18:40:22.925656 kernel: BIOS-e820: [mem 0x000000009cfdd000-0x000000009cffffff] reserved
Jun 25 18:40:22.925662 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jun 25 18:40:22.925671 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jun 25 18:40:22.925677 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jun 25 18:40:22.925683 kernel: NX (Execute Disable) protection: active
Jun 25 18:40:22.925690 kernel: APIC: Static calls initialized
Jun 25 18:40:22.925696 kernel: SMBIOS 2.8 present.
Jun 25 18:40:22.925704 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jun 25 18:40:22.925714 kernel: Hypervisor detected: KVM
Jun 25 18:40:22.925721 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jun 25 18:40:22.925728 kernel: kvm-clock: using sched offset of 2271001689 cycles
Jun 25 18:40:22.925735 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jun 25 18:40:22.925742 kernel: tsc: Detected 2794.750 MHz processor
Jun 25 18:40:22.925750 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jun 25 18:40:22.925757 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jun 25 18:40:22.925764 kernel: last_pfn = 0x9cfdd max_arch_pfn = 0x400000000
Jun 25 18:40:22.925772 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jun 25 18:40:22.925781 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jun 25 18:40:22.925788 kernel: Using GB pages for direct mapping
Jun 25 18:40:22.925795 kernel: ACPI: Early table checksum verification disabled
Jun 25 18:40:22.925802 kernel: ACPI: RSDP 0x00000000000F59C0 000014 (v00 BOCHS )
Jun 25 18:40:22.925810 kernel: ACPI: RSDT 0x000000009CFE1BDD 000034 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 18:40:22.925817 kernel: ACPI: FACP 0x000000009CFE1A79 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 18:40:22.925824 kernel: ACPI: DSDT 0x000000009CFE0040 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 18:40:22.925831 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jun 25 18:40:22.925838 kernel: ACPI: APIC 0x000000009CFE1AED 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 18:40:22.925848 kernel: ACPI: HPET 0x000000009CFE1B7D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 18:40:22.925855 kernel: ACPI: WAET 0x000000009CFE1BB5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 18:40:22.925862 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe1a79-0x9cfe1aec]
Jun 25 18:40:22.925869 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe1a78]
Jun 25 18:40:22.925876 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jun 25 18:40:22.925883 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe1aed-0x9cfe1b7c]
Jun 25 18:40:22.925907 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe1b7d-0x9cfe1bb4]
Jun 25 18:40:22.925918 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe1bb5-0x9cfe1bdc]
Jun 25 18:40:22.925927 kernel: No NUMA configuration found
Jun 25 18:40:22.925935 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdcfff]
Jun 25 18:40:22.925942 kernel: NODE_DATA(0) allocated [mem 0x9cfd7000-0x9cfdcfff]
Jun 25 18:40:22.925950 kernel: Zone ranges:
Jun 25 18:40:22.925957 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jun 25 18:40:22.925965 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdcfff]
Jun 25 18:40:22.925975 kernel: Normal empty
Jun 25 18:40:22.925982 kernel: Movable zone start for each node
Jun 25 18:40:22.925989 kernel: Early memory node ranges
Jun 25 18:40:22.925997 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jun 25 18:40:22.926004 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdcfff]
Jun 25 18:40:22.926012 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdcfff]
Jun 25 18:40:22.926019 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jun 25 18:40:22.926027 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jun 25 18:40:22.926034 kernel: On node 0, zone DMA32: 12323 pages in unavailable ranges
Jun 25 18:40:22.926044 kernel: ACPI: PM-Timer IO Port: 0x608
Jun 25 18:40:22.926052 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jun 25 18:40:22.926059 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jun 25 18:40:22.926067 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jun 25 18:40:22.926074 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jun 25 18:40:22.926082 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jun 25 18:40:22.926089 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jun 25 18:40:22.926096 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jun 25 18:40:22.926104 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jun 25 18:40:22.926111 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jun 25 18:40:22.926121 kernel: TSC deadline timer available
Jun 25 18:40:22.926128 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jun 25 18:40:22.926136 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jun 25 18:40:22.926143 kernel: kvm-guest: KVM setup pv remote TLB flush
Jun 25 18:40:22.926151 kernel: kvm-guest: setup PV sched yield
Jun 25 18:40:22.926158 kernel: [mem 0x9d000000-0xfeffbfff] available for PCI devices
Jun 25 18:40:22.926165 kernel: Booting paravirtualized kernel on KVM
Jun 25 18:40:22.926173 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jun 25 18:40:22.926181 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jun 25 18:40:22.926191 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u524288
Jun 25 18:40:22.926198 kernel: pcpu-alloc: s196904 r8192 d32472 u524288 alloc=1*2097152
Jun 25 18:40:22.926205 kernel: pcpu-alloc: [0] 0 1 2 3
Jun 25 18:40:22.926213 kernel: kvm-guest: PV spinlocks enabled
Jun 25 18:40:22.926220 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jun 25 18:40:22.926229 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8
Jun 25 18:40:22.926237 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jun 25 18:40:22.926244 kernel: random: crng init done
Jun 25 18:40:22.926254 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jun 25 18:40:22.926261 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jun 25 18:40:22.926269 kernel: Fallback order for Node 0: 0
Jun 25 18:40:22.926276 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632733
Jun 25 18:40:22.926284 kernel: Policy zone: DMA32
Jun 25 18:40:22.926291 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jun 25 18:40:22.926299 kernel: Memory: 2428452K/2571756K available (12288K kernel code, 2302K rwdata, 22636K rodata, 49384K init, 1964K bss, 143044K reserved, 0K cma-reserved)
Jun 25 18:40:22.926306 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jun 25 18:40:22.926314 kernel: ftrace: allocating 37650 entries in 148 pages
Jun 25 18:40:22.926324 kernel: ftrace: allocated 148 pages with 3 groups
Jun 25 18:40:22.926331 kernel: Dynamic Preempt: voluntary
Jun 25 18:40:22.926339 kernel: rcu: Preemptible hierarchical RCU implementation.
Jun 25 18:40:22.926347 kernel: rcu: RCU event tracing is enabled.
Jun 25 18:40:22.926354 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jun 25 18:40:22.926362 kernel: Trampoline variant of Tasks RCU enabled.
Jun 25 18:40:22.926370 kernel: Rude variant of Tasks RCU enabled.
Jun 25 18:40:22.926377 kernel: Tracing variant of Tasks RCU enabled.
Jun 25 18:40:22.926385 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jun 25 18:40:22.926394 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jun 25 18:40:22.926402 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jun 25 18:40:22.926409 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jun 25 18:40:22.926416 kernel: Console: colour VGA+ 80x25
Jun 25 18:40:22.926424 kernel: printk: console [ttyS0] enabled
Jun 25 18:40:22.926431 kernel: ACPI: Core revision 20230628
Jun 25 18:40:22.926439 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jun 25 18:40:22.926446 kernel: APIC: Switch to symmetric I/O mode setup
Jun 25 18:40:22.926454 kernel: x2apic enabled
Jun 25 18:40:22.926463 kernel: APIC: Switched APIC routing to: physical x2apic
Jun 25 18:40:22.926471 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jun 25 18:40:22.926479 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jun 25 18:40:22.926486 kernel: kvm-guest: setup PV IPIs
Jun 25 18:40:22.926493 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jun 25 18:40:22.926501 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jun 25 18:40:22.926527 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Jun 25 18:40:22.926534 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jun 25 18:40:22.926552 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jun 25 18:40:22.926560 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jun 25 18:40:22.926567 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jun 25 18:40:22.926575 kernel: Spectre V2 : Mitigation: Retpolines
Jun 25 18:40:22.926585 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jun 25 18:40:22.926593 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jun 25 18:40:22.926601 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jun 25 18:40:22.926609 kernel: RETBleed: Mitigation: untrained return thunk
Jun 25 18:40:22.926617 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jun 25 18:40:22.926627 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jun 25 18:40:22.926635 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jun 25 18:40:22.926643 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jun 25 18:40:22.926651 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jun 25 18:40:22.926659 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jun 25 18:40:22.926667 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jun 25 18:40:22.926675 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jun 25 18:40:22.926683 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jun 25 18:40:22.926693 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jun 25 18:40:22.926701 kernel: Freeing SMP alternatives memory: 32K
Jun 25 18:40:22.926709 kernel: pid_max: default: 32768 minimum: 301
Jun 25 18:40:22.926717 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Jun 25 18:40:22.926724 kernel: SELinux: Initializing.
Jun 25 18:40:22.926732 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jun 25 18:40:22.926740 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jun 25 18:40:22.926748 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jun 25 18:40:22.926756 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Jun 25 18:40:22.926766 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Jun 25 18:40:22.926774 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Jun 25 18:40:22.926782 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jun 25 18:40:22.926790 kernel: ... version: 0
Jun 25 18:40:22.926798 kernel: ... bit width: 48
Jun 25 18:40:22.926806 kernel: ... generic registers: 6
Jun 25 18:40:22.926813 kernel: ... value mask: 0000ffffffffffff
Jun 25 18:40:22.926821 kernel: ... max period: 00007fffffffffff
Jun 25 18:40:22.926829 kernel: ... fixed-purpose events: 0
Jun 25 18:40:22.926839 kernel: ... event mask: 000000000000003f
Jun 25 18:40:22.926847 kernel: signal: max sigframe size: 1776
Jun 25 18:40:22.926855 kernel: rcu: Hierarchical SRCU implementation.
Jun 25 18:40:22.926863 kernel: rcu: Max phase no-delay instances is 400.
Jun 25 18:40:22.926871 kernel: smp: Bringing up secondary CPUs ...
Jun 25 18:40:22.926878 kernel: smpboot: x86: Booting SMP configuration:
Jun 25 18:40:22.926886 kernel: .... node #0, CPUs: #1 #2 #3
Jun 25 18:40:22.926900 kernel: smp: Brought up 1 node, 4 CPUs
Jun 25 18:40:22.926910 kernel: smpboot: Max logical packages: 1
Jun 25 18:40:22.926921 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Jun 25 18:40:22.926929 kernel: devtmpfs: initialized
Jun 25 18:40:22.926936 kernel: x86/mm: Memory block size: 128MB
Jun 25 18:40:22.926944 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jun 25 18:40:22.926952 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jun 25 18:40:22.926960 kernel: pinctrl core: initialized pinctrl subsystem
Jun 25 18:40:22.926968 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jun 25 18:40:22.926976 kernel: audit: initializing netlink subsys (disabled)
Jun 25 18:40:22.926984 kernel: audit: type=2000 audit(1719340821.991:1): state=initialized audit_enabled=0 res=1
Jun 25 18:40:22.926994 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jun 25 18:40:22.927002 kernel: thermal_sys: Registered thermal governor 'user_space'
Jun 25 18:40:22.927009 kernel: cpuidle: using governor menu
Jun 25 18:40:22.927017 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jun 25 18:40:22.927025 kernel: dca service started, version 1.12.1
Jun 25 18:40:22.927033 kernel: PCI: Using configuration type 1 for base access
Jun 25 18:40:22.927041 kernel: PCI: Using configuration type 1 for extended access
Jun 25 18:40:22.927048 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jun 25 18:40:22.927056 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jun 25 18:40:22.927066 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jun 25 18:40:22.927074 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jun 25 18:40:22.927082 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jun 25 18:40:22.927090 kernel: ACPI: Added _OSI(Module Device)
Jun 25 18:40:22.927098 kernel: ACPI: Added _OSI(Processor Device)
Jun 25 18:40:22.927105 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jun 25 18:40:22.927113 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jun 25 18:40:22.927121 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jun 25 18:40:22.927129 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jun 25 18:40:22.927139 kernel: ACPI: Interpreter enabled
Jun 25 18:40:22.927146 kernel: ACPI: PM: (supports S0 S3 S5)
Jun 25 18:40:22.927154 kernel: ACPI: Using IOAPIC for interrupt routing
Jun 25 18:40:22.927162 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jun 25 18:40:22.927170 kernel: PCI: Using E820 reservations for host bridge windows
Jun 25 18:40:22.927178 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jun 25 18:40:22.927185 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jun 25 18:40:22.927362 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jun 25 18:40:22.927377 kernel: acpiphp: Slot [3] registered
Jun 25 18:40:22.927385 kernel: acpiphp: Slot [4] registered
Jun 25 18:40:22.927393 kernel: acpiphp: Slot [5] registered
Jun 25 18:40:22.927401 kernel: acpiphp: Slot [6] registered
Jun 25 18:40:22.927408 kernel: acpiphp: Slot [7] registered
Jun 25 18:40:22.927416 kernel: acpiphp: Slot [8] registered
Jun 25 18:40:22.927424 kernel: acpiphp: Slot [9] registered
Jun 25 18:40:22.927432 kernel: acpiphp: Slot [10] registered
Jun 25 18:40:22.927439 kernel: acpiphp: Slot [11] registered
Jun 25 18:40:22.927447 kernel: acpiphp: Slot [12] registered
Jun 25 18:40:22.927457 kernel: acpiphp: Slot [13] registered
Jun 25 18:40:22.927465 kernel: acpiphp: Slot [14] registered
Jun 25 18:40:22.927472 kernel: acpiphp: Slot [15] registered
Jun 25 18:40:22.927480 kernel: acpiphp: Slot [16] registered
Jun 25 18:40:22.927488 kernel: acpiphp: Slot [17] registered
Jun 25 18:40:22.927495 kernel: acpiphp: Slot [18] registered
Jun 25 18:40:22.927539 kernel: acpiphp: Slot [19] registered
Jun 25 18:40:22.927547 kernel: acpiphp: Slot [20] registered
Jun 25 18:40:22.927555 kernel: acpiphp: Slot [21] registered
Jun 25 18:40:22.927565 kernel: acpiphp: Slot [22] registered
Jun 25 18:40:22.927573 kernel: acpiphp: Slot [23] registered
Jun 25 18:40:22.927581 kernel: acpiphp: Slot [24] registered
Jun 25 18:40:22.927588 kernel: acpiphp: Slot [25] registered
Jun 25 18:40:22.927596 kernel: acpiphp: Slot [26] registered
Jun 25 18:40:22.927604 kernel: acpiphp: Slot [27] registered
Jun 25 18:40:22.927611 kernel: acpiphp: Slot [28] registered
Jun 25 18:40:22.927619 kernel: acpiphp: Slot [29] registered
Jun 25 18:40:22.927627 kernel: acpiphp: Slot [30] registered
Jun 25 18:40:22.927634 kernel: acpiphp: Slot [31] registered
Jun 25 18:40:22.927644 kernel: PCI host bridge to bus 0000:00
Jun 25 18:40:22.927780 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jun 25 18:40:22.927901 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jun 25 18:40:22.928041 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jun 25 18:40:22.928170 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window]
Jun 25 18:40:22.928279 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jun 25 18:40:22.928387 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jun 25 18:40:22.928543 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jun 25 18:40:22.928676 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jun 25 18:40:22.928806 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jun 25 18:40:22.928964 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf]
Jun 25 18:40:22.929122 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jun 25 18:40:22.929249 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jun 25 18:40:22.929374 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jun 25 18:40:22.929493 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jun 25 18:40:22.929639 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jun 25 18:40:22.929759 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jun 25 18:40:22.929878 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jun 25 18:40:22.930034 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000
Jun 25 18:40:22.930160 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jun 25 18:40:22.930280 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jun 25 18:40:22.930399 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jun 25 18:40:22.930534 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jun 25 18:40:22.930667 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00
Jun 25 18:40:22.930786 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc09f]
Jun 25 18:40:22.930917 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jun 25 18:40:22.931042 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jun 25 18:40:22.931197 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jun 25 18:40:22.931321 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jun 25 18:40:22.931440 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jun 25 18:40:22.931582 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jun 25 18:40:22.931713 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Jun 25 18:40:22.931834 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0a0-0xc0bf]
Jun 25 18:40:22.931971 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jun 25 18:40:22.932096 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jun 25 18:40:22.932216 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jun 25 18:40:22.932227 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jun 25 18:40:22.932235 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jun 25 18:40:22.932243 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jun 25 18:40:22.932251 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jun 25 18:40:22.932259 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jun 25 18:40:22.932267 kernel: iommu: Default domain type: Translated
Jun 25 18:40:22.932278 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jun 25 18:40:22.932286 kernel: PCI: Using ACPI for IRQ routing
Jun 25 18:40:22.932294 kernel: PCI: pci_cache_line_size set to 64 bytes
Jun 25 18:40:22.932302 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jun 25 18:40:22.932310 kernel: e820: reserve RAM buffer [mem 0x9cfdd000-0x9fffffff]
Jun 25 18:40:22.932432 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jun 25 18:40:22.932591 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jun 25 18:40:22.932710 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jun 25 18:40:22.932724 kernel: vgaarb: loaded
Jun 25 18:40:22.932732 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jun 25 18:40:22.932740 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jun 25 18:40:22.932748 kernel: clocksource: Switched to clocksource kvm-clock
Jun 25 18:40:22.932756 kernel: VFS: Disk quotas dquot_6.6.0
Jun 25 18:40:22.932764 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jun 25 18:40:22.932772 kernel: pnp: PnP ACPI init
Jun 25 18:40:22.932906 kernel: pnp 00:02: [dma 2]
Jun 25 18:40:22.932921 kernel: pnp: PnP ACPI: found 6 devices
Jun 25 18:40:22.932930 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jun 25 18:40:22.932938 kernel: NET: Registered PF_INET protocol family
Jun 25 18:40:22.932946 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jun 25 18:40:22.932954 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jun 25 18:40:22.932962 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jun 25 18:40:22.932970 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jun 25 18:40:22.932978 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jun 25 18:40:22.932986 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jun 25 18:40:22.932997 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jun 25 18:40:22.933004 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jun 25 18:40:22.933012 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jun 25 18:40:22.933020 kernel: NET: Registered PF_XDP protocol family
Jun 25 18:40:22.933132 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jun 25 18:40:22.933242 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jun 25 18:40:22.933352 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jun 25 18:40:22.933461 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window]
Jun 25 18:40:22.933584 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jun 25 18:40:22.933711 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jun 25 18:40:22.933833 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jun 25 18:40:22.933844 kernel: PCI: CLS 0 bytes, default 64
Jun 25 18:40:22.933852 kernel: Initialise system trusted keyrings
Jun 25 18:40:22.933860 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jun 25 18:40:22.933868 kernel: Key type asymmetric registered
Jun 25 18:40:22.933876 kernel: Asymmetric key parser 'x509' registered
Jun 25 18:40:22.933884 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jun 25 18:40:22.933904 kernel: io scheduler mq-deadline registered
Jun 25 18:40:22.933922 kernel: io scheduler kyber registered
Jun 25 18:40:22.933930 kernel: io scheduler bfq registered
Jun 25 18:40:22.933938 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jun 25 18:40:22.933946 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jun 25 18:40:22.933961 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Jun 25 18:40:22.933969 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jun 25 18:40:22.933985 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jun 25 18:40:22.934000 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jun 25 18:40:22.934019 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jun 25 18:40:22.934034 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jun 25 18:40:22.934050 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jun 25 18:40:22.934247 kernel: rtc_cmos 00:05: RTC can wake from S4
Jun 25 18:40:22.934267 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jun 25 18:40:22.934390 kernel: rtc_cmos 00:05: registered as rtc0
Jun 25 18:40:22.934571 kernel: rtc_cmos 00:05: setting system clock to 2024-06-25T18:40:22 UTC (1719340822)
Jun 25 18:40:22.934689 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jun 25 18:40:22.934705 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jun 25 18:40:22.934712 kernel: NET: Registered PF_INET6 protocol family
Jun 25 18:40:22.934720 kernel: Segment Routing with IPv6
Jun 25 18:40:22.934728 kernel: In-situ OAM (IOAM) with IPv6
Jun 25 18:40:22.934736 kernel: NET: Registered PF_PACKET protocol family
Jun 25 18:40:22.934744 kernel: Key type dns_resolver registered
Jun 25 18:40:22.934752 kernel: IPI shorthand broadcast: enabled
Jun 25 18:40:22.934760 kernel: sched_clock: Marking stable (785006949, 103583318)->(909984391, -21394124)
Jun 25 18:40:22.934768 kernel: registered taskstats version 1
Jun 25 18:40:22.934778 kernel: Loading compiled-in X.509 certificates
Jun 25 18:40:22.934786 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.35-flatcar: 60204e9db5f484c670a1c92aec37e9a0c4d3ae90'
Jun 25 18:40:22.934794 kernel: Key type .fscrypt registered
Jun 25 18:40:22.934802 kernel: Key type fscrypt-provisioning registered
Jun 25 18:40:22.934809 kernel: ima: No TPM chip found, activating TPM-bypass!
Jun 25 18:40:22.934817 kernel: ima: Allocated hash algorithm: sha1
Jun 25 18:40:22.934825 kernel: ima: No architecture policies found
Jun 25 18:40:22.934833 kernel: clk: Disabling unused clocks
Jun 25 18:40:22.934843 kernel: Freeing unused kernel image (initmem) memory: 49384K
Jun 25 18:40:22.934851 kernel: Write protecting the kernel read-only data: 36864k
Jun 25 18:40:22.934859 kernel: Freeing unused kernel image (rodata/data gap) memory: 1940K
Jun 25 18:40:22.934867 kernel: Run /init as init process
Jun 25 18:40:22.934875 kernel: with arguments:
Jun 25 18:40:22.934883 kernel: /init
Jun 25 18:40:22.934899 kernel: with environment:
Jun 25 18:40:22.934908 kernel: HOME=/
Jun 25 18:40:22.934931 kernel: TERM=linux
Jun 25 18:40:22.934942 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jun 25 18:40:22.934954 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jun 25 18:40:22.934964 systemd[1]: Detected virtualization kvm.
Jun 25 18:40:22.934973 systemd[1]: Detected architecture x86-64.
Jun 25 18:40:22.934982 systemd[1]: Running in initrd.
Jun 25 18:40:22.934990 systemd[1]: No hostname configured, using default hostname.
Jun 25 18:40:22.934998 systemd[1]: Hostname set to <localhost>.
Jun 25 18:40:22.935009 systemd[1]: Initializing machine ID from VM UUID.
Jun 25 18:40:22.935018 systemd[1]: Queued start job for default target initrd.target.
Jun 25 18:40:22.935027 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 25 18:40:22.935036 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 25 18:40:22.935045 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jun 25 18:40:22.935054 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 25 18:40:22.935063 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jun 25 18:40:22.935071 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jun 25 18:40:22.935084 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jun 25 18:40:22.935093 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jun 25 18:40:22.935102 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 25 18:40:22.935110 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 25 18:40:22.935119 systemd[1]: Reached target paths.target - Path Units.
Jun 25 18:40:22.935127 systemd[1]: Reached target slices.target - Slice Units.
Jun 25 18:40:22.935136 systemd[1]: Reached target swap.target - Swaps.
Jun 25 18:40:22.935146 systemd[1]: Reached target timers.target - Timer Units.
Jun 25 18:40:22.935155 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jun 25 18:40:22.935164 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 25 18:40:22.935172 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jun 25 18:40:22.935181 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jun 25 18:40:22.935190 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 25 18:40:22.935198 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 25 18:40:22.935207 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 25 18:40:22.935215 systemd[1]: Reached target sockets.target - Socket Units.
Jun 25 18:40:22.935226 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jun 25 18:40:22.935235 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 25 18:40:22.935244 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jun 25 18:40:22.935252 systemd[1]: Starting systemd-fsck-usr.service...
Jun 25 18:40:22.935261 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 25 18:40:22.935272 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 25 18:40:22.935280 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 25 18:40:22.935289 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jun 25 18:40:22.935298 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 25 18:40:22.935306 systemd[1]: Finished systemd-fsck-usr.service.
Jun 25 18:40:22.935333 systemd-journald[191]: Collecting audit messages is disabled.
Jun 25 18:40:22.935354 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jun 25 18:40:22.935363 systemd-journald[191]: Journal started
Jun 25 18:40:22.935383 systemd-journald[191]: Runtime Journal (/run/log/journal/60c759ad23234fb980db218bd5fa622a) is 6.0M, max 48.4M, 42.3M free.
Jun 25 18:40:22.936237 systemd-modules-load[193]: Inserted module 'overlay'
Jun 25 18:40:22.976233 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 25 18:40:22.976273 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jun 25 18:40:22.976292 kernel: Bridge firewalling registered
Jun 25 18:40:22.969415 systemd-modules-load[193]: Inserted module 'br_netfilter'
Jun 25 18:40:22.976500 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 25 18:40:22.977272 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 25 18:40:22.991724 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 25 18:40:22.992809 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 25 18:40:22.995473 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jun 25 18:40:23.000323 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 25 18:40:23.001984 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 25 18:40:23.012491 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 25 18:40:23.019050 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 25 18:40:23.020113 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jun 25 18:40:23.029907 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 25 18:40:23.031585 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 25 18:40:23.035334 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jun 25 18:40:23.055427 dracut-cmdline[232]: dracut-dracut-053
Jun 25 18:40:23.059522 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8
Jun 25 18:40:23.067540 systemd-resolved[229]: Positive Trust Anchors:
Jun 25 18:40:23.067560 systemd-resolved[229]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 25 18:40:23.067601 systemd-resolved[229]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jun 25 18:40:23.071016 systemd-resolved[229]: Defaulting to hostname 'linux'.
Jun 25 18:40:23.072553 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 25 18:40:23.078922 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 25 18:40:23.169574 kernel: SCSI subsystem initialized
Jun 25 18:40:23.181544 kernel: Loading iSCSI transport class v2.0-870.
Jun 25 18:40:23.195565 kernel: iscsi: registered transport (tcp)
Jun 25 18:40:23.222568 kernel: iscsi: registered transport (qla4xxx)
Jun 25 18:40:23.222663 kernel: QLogic iSCSI HBA Driver
Jun 25 18:40:23.267498 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jun 25 18:40:23.278667 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jun 25 18:40:23.305623 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jun 25 18:40:23.305692 kernel: device-mapper: uevent: version 1.0.3
Jun 25 18:40:23.306859 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jun 25 18:40:23.353562 kernel: raid6: avx2x4 gen() 28958 MB/s
Jun 25 18:40:23.370552 kernel: raid6: avx2x2 gen() 25075 MB/s
Jun 25 18:40:23.387923 kernel: raid6: avx2x1 gen() 16255 MB/s
Jun 25 18:40:23.387992 kernel: raid6: using algorithm avx2x4 gen() 28958 MB/s
Jun 25 18:40:23.405905 kernel: raid6: .... xor() 5761 MB/s, rmw enabled
Jun 25 18:40:23.405968 kernel: raid6: using avx2x2 recovery algorithm
Jun 25 18:40:23.437553 kernel: xor: automatically using best checksumming function avx
Jun 25 18:40:23.626544 kernel: Btrfs loaded, zoned=no, fsverity=no
Jun 25 18:40:23.640128 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jun 25 18:40:23.650765 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 25 18:40:23.670102 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Jun 25 18:40:23.676965 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 25 18:40:23.683720 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jun 25 18:40:23.703089 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation
Jun 25 18:40:23.741940 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 25 18:40:23.750913 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 25 18:40:23.820894 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 25 18:40:23.829033 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jun 25 18:40:23.844145 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jun 25 18:40:23.847013 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 25 18:40:23.849729 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 25 18:40:23.852376 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 25 18:40:23.862697 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jun 25 18:40:23.872527 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jun 25 18:40:23.895581 kernel: cryptd: max_cpu_qlen set to 1000
Jun 25 18:40:23.895631 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jun 25 18:40:23.895955 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jun 25 18:40:23.895977 kernel: GPT:9289727 != 19775487
Jun 25 18:40:23.895997 kernel: GPT:Alternate GPT header not at the end of the disk.
Jun 25 18:40:23.896017 kernel: GPT:9289727 != 19775487
Jun 25 18:40:23.896036 kernel: GPT: Use GNU Parted to correct GPT errors.
Jun 25 18:40:23.896077 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jun 25 18:40:23.886679 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jun 25 18:40:23.899536 kernel: AVX2 version of gcm_enc/dec engaged.
Jun 25 18:40:23.899567 kernel: AES CTR mode by8 optimization enabled
Jun 25 18:40:23.901626 kernel: libata version 3.00 loaded.
Jun 25 18:40:23.903371 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jun 25 18:40:23.903491 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 25 18:40:23.907580 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 25 18:40:23.911174 kernel: ata_piix 0000:00:01.1: version 2.13
Jun 25 18:40:23.929076 kernel: scsi host0: ata_piix
Jun 25 18:40:23.929301 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (481)
Jun 25 18:40:23.929315 kernel: BTRFS: device fsid 329ce27e-ea89-47b5-8f8b-f762c8412eb0 devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (462)
Jun 25 18:40:23.929326 kernel: scsi host1: ata_piix
Jun 25 18:40:23.929487 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14
Jun 25 18:40:23.929500 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15
Jun 25 18:40:23.913745 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 25 18:40:23.913933 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 25 18:40:23.917533 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jun 25 18:40:23.927272 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 25 18:40:23.942627 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jun 25 18:40:23.948194 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jun 25 18:40:23.991920 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 25 18:40:24.003919 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jun 25 18:40:24.010882 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jun 25 18:40:24.014851 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jun 25 18:40:24.027801 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jun 25 18:40:24.031676 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 25 18:40:24.040085 disk-uuid[542]: Primary Header is updated.
Jun 25 18:40:24.040085 disk-uuid[542]: Secondary Entries is updated.
Jun 25 18:40:24.040085 disk-uuid[542]: Secondary Header is updated.
Jun 25 18:40:24.044605 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jun 25 18:40:24.050543 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jun 25 18:40:24.054935 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 25 18:40:24.085206 kernel: ata2: found unknown device (class 0)
Jun 25 18:40:24.085273 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jun 25 18:40:24.088666 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jun 25 18:40:24.138741 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jun 25 18:40:24.153009 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jun 25 18:40:24.153043 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Jun 25 18:40:25.058526 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jun 25 18:40:25.058665 disk-uuid[544]: The operation has completed successfully.
Jun 25 18:40:25.090741 systemd[1]: disk-uuid.service: Deactivated successfully.
Jun 25 18:40:25.090879 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jun 25 18:40:25.137739 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jun 25 18:40:25.157285 sh[579]: Success
Jun 25 18:40:25.194585 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jun 25 18:40:25.226851 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jun 25 18:40:25.249918 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jun 25 18:40:25.252714 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jun 25 18:40:25.277327 kernel: BTRFS info (device dm-0): first mount of filesystem 329ce27e-ea89-47b5-8f8b-f762c8412eb0
Jun 25 18:40:25.277359 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jun 25 18:40:25.277371 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jun 25 18:40:25.279120 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jun 25 18:40:25.279138 kernel: BTRFS info (device dm-0): using free space tree
Jun 25 18:40:25.283799 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jun 25 18:40:25.286467 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jun 25 18:40:25.300683 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jun 25 18:40:25.320682 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jun 25 18:40:25.325313 kernel: BTRFS info (device vda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a
Jun 25 18:40:25.325337 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jun 25 18:40:25.325348 kernel: BTRFS info (device vda6): using free space tree
Jun 25 18:40:25.328548 kernel: BTRFS info (device vda6): auto enabling async discard
Jun 25 18:40:25.337127 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jun 25 18:40:25.340524 kernel: BTRFS info (device vda6): last unmount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a
Jun 25 18:40:25.425966 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 25 18:40:25.440699 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jun 25 18:40:25.478640 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jun 25 18:40:25.481447 systemd-networkd[757]: lo: Link UP
Jun 25 18:40:25.481460 systemd-networkd[757]: lo: Gained carrier
Jun 25 18:40:25.483434 systemd-networkd[757]: Enumeration completed
Jun 25 18:40:25.483886 systemd-networkd[757]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 25 18:40:25.483890 systemd-networkd[757]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jun 25 18:40:25.484786 systemd-networkd[757]: eth0: Link UP
Jun 25 18:40:25.484791 systemd-networkd[757]: eth0: Gained carrier
Jun 25 18:40:25.484798 systemd-networkd[757]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 25 18:40:25.484883 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jun 25 18:40:25.485528 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jun 25 18:40:25.486100 systemd[1]: Reached target network.target - Network.
Jun 25 18:40:25.504862 systemd-networkd[757]: eth0: DHCPv4 address 10.0.0.103/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jun 25 18:40:25.547582 ignition[761]: Ignition 2.19.0
Jun 25 18:40:25.547596 ignition[761]: Stage: fetch-offline
Jun 25 18:40:25.547658 ignition[761]: no configs at "/usr/lib/ignition/base.d"
Jun 25 18:40:25.547673 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jun 25 18:40:25.547795 ignition[761]: parsed url from cmdline: ""
Jun 25 18:40:25.547800 ignition[761]: no config URL provided
Jun 25 18:40:25.547807 ignition[761]: reading system config file "/usr/lib/ignition/user.ign"
Jun 25 18:40:25.547818 ignition[761]: no config at "/usr/lib/ignition/user.ign"
Jun 25 18:40:25.547862 ignition[761]: op(1): [started] loading QEMU firmware config module
Jun 25 18:40:25.547880 ignition[761]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jun 25 18:40:25.554454 ignition[761]: op(1): [finished] loading QEMU firmware config module
Jun 25 18:40:25.597803 ignition[761]: parsing config with SHA512: 2a641c0abdf55f3e5f9bf9516d6cd93f140614214406985da634860053fdf0297677319437622cdf510fa0e7641fafa042d35ffa0918f025524e20bf39768952
Jun 25 18:40:25.602494 unknown[761]: fetched base config from "system"
Jun 25 18:40:25.602538 unknown[761]: fetched user config from "qemu"
Jun 25 18:40:25.603029 ignition[761]: fetch-offline: fetch-offline passed
Jun 25 18:40:25.604963 systemd-resolved[229]: Detected conflict on linux IN A 10.0.0.103
Jun 25 18:40:25.603099 ignition[761]: Ignition finished successfully
Jun 25 18:40:25.604974 systemd-resolved[229]: Hostname conflict, changing published hostname from 'linux' to 'linux9'.
Jun 25 18:40:25.611107 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 25 18:40:25.612142 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jun 25 18:40:25.617928 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jun 25 18:40:25.635638 ignition[772]: Ignition 2.19.0
Jun 25 18:40:25.635650 ignition[772]: Stage: kargs
Jun 25 18:40:25.635846 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Jun 25 18:40:25.635859 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jun 25 18:40:25.636804 ignition[772]: kargs: kargs passed
Jun 25 18:40:25.636872 ignition[772]: Ignition finished successfully
Jun 25 18:40:25.640769 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jun 25 18:40:25.655846 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jun 25 18:40:25.669112 ignition[781]: Ignition 2.19.0
Jun 25 18:40:25.669124 ignition[781]: Stage: disks
Jun 25 18:40:25.669302 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Jun 25 18:40:25.669313 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jun 25 18:40:25.670227 ignition[781]: disks: disks passed
Jun 25 18:40:25.672734 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jun 25 18:40:25.670276 ignition[781]: Ignition finished successfully
Jun 25 18:40:25.674065 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jun 25 18:40:25.675611 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jun 25 18:40:25.677791 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 25 18:40:25.678922 systemd[1]: Reached target sysinit.target - System Initialization.
Jun 25 18:40:25.681086 systemd[1]: Reached target basic.target - Basic System.
Jun 25 18:40:25.698708 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jun 25 18:40:25.712767 systemd-fsck[791]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jun 25 18:40:25.720974 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jun 25 18:40:25.738754 systemd[1]: Mounting sysroot.mount - /sysroot...
Jun 25 18:40:25.850538 kernel: EXT4-fs (vda9): mounted filesystem ed685e11-963b-427a-9b96-a4691c40e909 r/w with ordered data mode. Quota mode: none.
Jun 25 18:40:25.851187 systemd[1]: Mounted sysroot.mount - /sysroot.
Jun 25 18:40:25.852604 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jun 25 18:40:25.863701 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jun 25 18:40:25.865855 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jun 25 18:40:25.868439 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jun 25 18:40:25.868521 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jun 25 18:40:25.877940 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (799)
Jun 25 18:40:25.877972 kernel: BTRFS info (device vda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a
Jun 25 18:40:25.877987 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jun 25 18:40:25.878000 kernel: BTRFS info (device vda6): using free space tree
Jun 25 18:40:25.878014 kernel: BTRFS info (device vda6): auto enabling async discard
Jun 25 18:40:25.868555 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 25 18:40:25.882457 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jun 25 18:40:25.902764 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jun 25 18:40:25.910701 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jun 25 18:40:25.948107 initrd-setup-root[823]: cut: /sysroot/etc/passwd: No such file or directory
Jun 25 18:40:25.952119 initrd-setup-root[830]: cut: /sysroot/etc/group: No such file or directory
Jun 25 18:40:25.957009 initrd-setup-root[837]: cut: /sysroot/etc/shadow: No such file or directory
Jun 25 18:40:25.961963 initrd-setup-root[844]: cut: /sysroot/etc/gshadow: No such file or directory
Jun 25 18:40:26.046687 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jun 25 18:40:26.060680 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jun 25 18:40:26.062737 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jun 25 18:40:26.069528 kernel: BTRFS info (device vda6): last unmount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a
Jun 25 18:40:26.090141 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jun 25 18:40:26.100763 ignition[913]: INFO : Ignition 2.19.0
Jun 25 18:40:26.100763 ignition[913]: INFO : Stage: mount
Jun 25 18:40:26.102630 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 25 18:40:26.102630 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jun 25 18:40:26.105232 ignition[913]: INFO : mount: mount passed
Jun 25 18:40:26.106008 ignition[913]: INFO : Ignition finished successfully
Jun 25 18:40:26.108452 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jun 25 18:40:26.125670 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jun 25 18:40:26.262661 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jun 25 18:40:26.277841 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jun 25 18:40:26.284541 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (927)
Jun 25 18:40:26.286704 kernel: BTRFS info (device vda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a
Jun 25 18:40:26.286745 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jun 25 18:40:26.286759 kernel: BTRFS info (device vda6): using free space tree
Jun 25 18:40:26.290518 kernel: BTRFS info (device vda6): auto enabling async discard
Jun 25 18:40:26.292202 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jun 25 18:40:26.321232 ignition[945]: INFO : Ignition 2.19.0
Jun 25 18:40:26.321232 ignition[945]: INFO : Stage: files
Jun 25 18:40:26.321232 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 25 18:40:26.321232 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jun 25 18:40:26.326207 ignition[945]: DEBUG : files: compiled without relabeling support, skipping
Jun 25 18:40:26.326207 ignition[945]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jun 25 18:40:26.326207 ignition[945]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jun 25 18:40:26.326207 ignition[945]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jun 25 18:40:26.326207 ignition[945]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jun 25 18:40:26.326207 ignition[945]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jun 25 18:40:26.325573 unknown[945]: wrote ssh authorized keys file for user: core
Jun 25 18:40:26.337112 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jun 25 18:40:26.337112 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jun 25 18:40:26.355967 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jun 25 18:40:26.481108 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jun 25 18:40:26.481108 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jun 25 18:40:26.485663 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jun 25 18:40:26.855592 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jun 25 18:40:26.982139 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jun 25 18:40:26.982139 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jun 25 18:40:26.986540 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jun 25 18:40:26.986540 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jun 25 18:40:26.986540 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jun 25 18:40:26.986540 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 25 18:40:26.986540 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 25 18:40:26.986540 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 25 18:40:26.986540 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 25 18:40:26.986540 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jun 25 18:40:26.986540 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jun 25 18:40:26.986540 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jun 25 18:40:26.986540 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jun 25 18:40:26.986540 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jun 25 18:40:26.986540 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1
Jun 25 18:40:27.049752 systemd-networkd[757]: eth0: Gained IPv6LL
Jun 25 18:40:27.298336 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jun 25 18:40:27.682967 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jun 25 18:40:27.682967 ignition[945]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jun 25 18:40:27.686681 ignition[945]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 25 18:40:27.686681 ignition[945]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 25 18:40:27.686681 ignition[945]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jun 25 18:40:27.686681 ignition[945]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jun 25 18:40:27.686681 ignition[945]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jun 25 18:40:27.686681 ignition[945]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jun 25 18:40:27.686681 ignition[945]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jun 25 18:40:27.686681 ignition[945]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jun 25 18:40:27.720972 ignition[945]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jun 25 18:40:27.726225 ignition[945]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jun 25 18:40:27.728079 ignition[945]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jun 25 18:40:27.728079 ignition[945]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jun 25 18:40:27.728079 ignition[945]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jun 25 18:40:27.728079 ignition[945]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jun 25 18:40:27.728079 ignition[945]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jun 25 18:40:27.728079 ignition[945]: INFO : files: files passed
Jun 25 18:40:27.728079 ignition[945]: INFO : Ignition finished successfully
Jun 25 18:40:27.740927 systemd[1]: Finished ignition-files.service - Ignition (files).
Jun 25 18:40:27.754706 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jun 25 18:40:27.756097 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jun 25 18:40:27.759012 systemd[1]: ignition-quench.service: Deactivated successfully.
Jun 25 18:40:27.759126 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jun 25 18:40:27.765236 initrd-setup-root-after-ignition[974]: grep: /sysroot/oem/oem-release: No such file or directory
Jun 25 18:40:27.767450 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 25 18:40:27.767450 initrd-setup-root-after-ignition[976]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jun 25 18:40:27.772349 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 25 18:40:27.770136 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 25 18:40:27.772588 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jun 25 18:40:27.786663 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jun 25 18:40:27.812737 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jun 25 18:40:27.812877 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jun 25 18:40:27.813580 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jun 25 18:40:27.817462 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jun 25 18:40:27.817937 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jun 25 18:40:27.818686 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jun 25 18:40:27.835652 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 25 18:40:27.838346 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jun 25 18:40:27.851708 systemd[1]: Stopped target network.target - Network.
Jun 25 18:40:27.852963 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jun 25 18:40:27.855214 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 25 18:40:27.855945 systemd[1]: Stopped target timers.target - Timer Units.
Jun 25 18:40:27.856318 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jun 25 18:40:27.856435 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 25 18:40:27.857112 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jun 25 18:40:27.857291 systemd[1]: Stopped target basic.target - Basic System.
Jun 25 18:40:27.857488 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jun 25 18:40:27.857895 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 25 18:40:27.858094 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jun 25 18:40:27.858310 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jun 25 18:40:27.858519 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 25 18:40:27.858720 systemd[1]: Stopped target sysinit.target - System Initialization.
Jun 25 18:40:27.858931 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jun 25 18:40:27.859125 systemd[1]: Stopped target swap.target - Swaps.
Jun 25 18:40:27.859306 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jun 25 18:40:27.859409 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jun 25 18:40:27.859728 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jun 25 18:40:27.860141 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 25 18:40:27.860320 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jun 25 18:40:27.860428 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 25 18:40:27.860910 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jun 25 18:40:27.919911 ignition[1000]: INFO : Ignition 2.19.0
Jun 25 18:40:27.919911 ignition[1000]: INFO : Stage: umount
Jun 25 18:40:27.919911 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 25 18:40:27.919911 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jun 25 18:40:27.919911 ignition[1000]: INFO : umount: umount passed
Jun 25 18:40:27.919911 ignition[1000]: INFO : Ignition finished successfully
Jun 25 18:40:27.861010 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jun 25 18:40:27.861298 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jun 25 18:40:27.861402 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 25 18:40:27.861789 systemd[1]: Stopped target paths.target - Path Units.
Jun 25 18:40:27.861909 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jun 25 18:40:27.865713 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 25 18:40:27.866202 systemd[1]: Stopped target slices.target - Slice Units.
Jun 25 18:40:27.866393 systemd[1]: Stopped target sockets.target - Socket Units.
Jun 25 18:40:27.866795 systemd[1]: iscsid.socket: Deactivated successfully.
Jun 25 18:40:27.866882 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jun 25 18:40:27.867151 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jun 25 18:40:27.867230 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 25 18:40:27.867691 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jun 25 18:40:27.867802 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 25 18:40:27.868287 systemd[1]: ignition-files.service: Deactivated successfully.
Jun 25 18:40:27.868381 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jun 25 18:40:27.897727 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jun 25 18:40:27.900339 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jun 25 18:40:27.901778 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jun 25 18:40:27.903817 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jun 25 18:40:27.905326 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jun 25 18:40:27.905536 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 25 18:40:27.907613 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jun 25 18:40:27.907716 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 25 18:40:27.911473 systemd-networkd[757]: eth0: DHCPv6 lease lost
Jun 25 18:40:27.912765 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jun 25 18:40:27.912942 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jun 25 18:40:27.916290 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jun 25 18:40:27.916502 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jun 25 18:40:27.918891 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jun 25 18:40:27.919095 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jun 25 18:40:27.927585 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jun 25 18:40:27.929161 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jun 25 18:40:27.929228 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 25 18:40:27.932614 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jun 25 18:40:27.932668 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jun 25 18:40:27.935156 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jun 25 18:40:27.935207 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jun 25 18:40:27.936974 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jun 25 18:40:27.937033 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jun 25 18:40:27.941103 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jun 25 18:40:27.941780 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jun 25 18:40:27.941889 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jun 25 18:40:27.943964 systemd[1]: ignition-mount.service: Deactivated successfully.
Jun 25 18:40:27.944067 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jun 25 18:40:27.949273 systemd[1]: ignition-disks.service: Deactivated successfully.
Jun 25 18:40:27.949375 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jun 25 18:40:27.951042 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jun 25 18:40:27.951092 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jun 25 18:40:27.953290 systemd[1]: ignition-setup.service: Deactivated successfully.
Jun 25 18:40:27.953340 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jun 25 18:40:27.955518 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jun 25 18:40:27.955567 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jun 25 18:40:27.958102 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 25 18:40:27.961088 systemd[1]: network-cleanup.service: Deactivated successfully.
Jun 25 18:40:27.961218 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jun 25 18:40:27.977976 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jun 25 18:40:27.978174 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 25 18:40:27.980068 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jun 25 18:40:27.980121 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jun 25 18:40:27.982326 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jun 25 18:40:27.982367 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 25 18:40:27.984661 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jun 25 18:40:27.984713 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jun 25 18:40:27.987238 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jun 25 18:40:27.987285 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jun 25 18:40:27.989808 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jun 25 18:40:27.989885 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 25 18:40:28.002733 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jun 25 18:40:28.016627 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jun 25 18:40:28.016693 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 25 18:40:28.032930 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jun 25 18:40:28.032980 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 25 18:40:28.035822 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jun 25 18:40:28.035871 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 25 18:40:28.037324 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 25 18:40:28.037376 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 25 18:40:28.039862 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jun 25 18:40:28.039975 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jun 25 18:40:28.124368 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jun 25 18:40:28.125410 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jun 25 18:40:28.127532 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jun 25 18:40:28.129684 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jun 25 18:40:28.129743 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jun 25 18:40:28.147667 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jun 25 18:40:28.157659 systemd[1]: Switching root.
Jun 25 18:40:28.187943 systemd-journald[191]: Journal stopped
Jun 25 18:40:29.582631 systemd-journald[191]: Received SIGTERM from PID 1 (systemd).
Jun 25 18:40:29.582699 kernel: SELinux:  policy capability network_peer_controls=1
Jun 25 18:40:29.582724 kernel: SELinux:  policy capability open_perms=1
Jun 25 18:40:29.582752 kernel: SELinux:  policy capability extended_socket_class=1
Jun 25 18:40:29.582768 kernel: SELinux:  policy capability always_check_network=0
Jun 25 18:40:29.582791 kernel: SELinux:  policy capability cgroup_seclabel=1
Jun 25 18:40:29.582807 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jun 25 18:40:29.582825 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Jun 25 18:40:29.582840 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Jun 25 18:40:29.582856 kernel: audit: type=1403 audit(1719340828.677:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jun 25 18:40:29.582873 systemd[1]: Successfully loaded SELinux policy in 68.062ms.
Jun 25 18:40:29.582908 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.481ms.
Jun 25 18:40:29.582929 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jun 25 18:40:29.582947 systemd[1]: Detected virtualization kvm.
Jun 25 18:40:29.582969 systemd[1]: Detected architecture x86-64.
Jun 25 18:40:29.582985 systemd[1]: Detected first boot.
Jun 25 18:40:29.583003 systemd[1]: Initializing machine ID from VM UUID.
Jun 25 18:40:29.583019 zram_generator::config[1045]: No configuration found.
Jun 25 18:40:29.583043 systemd[1]: Populated /etc with preset unit settings.
Jun 25 18:40:29.583060 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jun 25 18:40:29.583080 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jun 25 18:40:29.583097 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jun 25 18:40:29.583114 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jun 25 18:40:29.583132 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jun 25 18:40:29.583149 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jun 25 18:40:29.583166 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jun 25 18:40:29.583184 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jun 25 18:40:29.583201 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jun 25 18:40:29.583217 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jun 25 18:40:29.583236 systemd[1]: Created slice user.slice - User and Session Slice.
Jun 25 18:40:29.583252 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 25 18:40:29.583269 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 25 18:40:29.583286 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jun 25 18:40:29.583302 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jun 25 18:40:29.583319 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jun 25 18:40:29.583336 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 25 18:40:29.583353 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jun 25 18:40:29.583370 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 25 18:40:29.583390 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jun 25 18:40:29.583408 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jun 25 18:40:29.583426 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jun 25 18:40:29.583443 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jun 25 18:40:29.583459 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 25 18:40:29.583477 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 25 18:40:29.583493 systemd[1]: Reached target slices.target - Slice Units.
Jun 25 18:40:29.583532 systemd[1]: Reached target swap.target - Swaps.
Jun 25 18:40:29.583551 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jun 25 18:40:29.583570 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jun 25 18:40:29.583588 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 25 18:40:29.583605 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 25 18:40:29.583623 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 25 18:40:29.583640 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jun 25 18:40:29.583656 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jun 25 18:40:29.583673 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jun 25 18:40:29.583689 systemd[1]: Mounting media.mount - External Media Directory...
Jun 25 18:40:29.583709 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 25 18:40:29.583736 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jun 25 18:40:29.583753 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jun 25 18:40:29.583771 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jun 25 18:40:29.583788 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jun 25 18:40:29.583805 systemd[1]: Reached target machines.target - Containers.
Jun 25 18:40:29.583822 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jun 25 18:40:29.583839 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 25 18:40:29.583860 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 25 18:40:29.583877 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jun 25 18:40:29.583895 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 25 18:40:29.583912 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 25 18:40:29.583929 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 25 18:40:29.583947 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jun 25 18:40:29.583964 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 25 18:40:29.583981 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jun 25 18:40:29.584001 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jun 25 18:40:29.584019 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jun 25 18:40:29.584036 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jun 25 18:40:29.584053 systemd[1]: Stopped systemd-fsck-usr.service.
Jun 25 18:40:29.584071 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 25 18:40:29.584088 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 25 18:40:29.584105 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jun 25 18:40:29.584121 kernel: loop: module loaded
Jun 25 18:40:29.584139 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jun 25 18:40:29.584158 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 25 18:40:29.584197 systemd-journald[1107]: Collecting audit messages is disabled.
Jun 25 18:40:29.584226 systemd[1]: verity-setup.service: Deactivated successfully.
Jun 25 18:40:29.584243 systemd-journald[1107]: Journal started
Jun 25 18:40:29.584274 systemd-journald[1107]: Runtime Journal (/run/log/journal/60c759ad23234fb980db218bd5fa622a) is 6.0M, max 48.4M, 42.3M free.
Jun 25 18:40:29.584325 systemd[1]: Stopped verity-setup.service.
Jun 25 18:40:29.309691 systemd[1]: Queued start job for default target multi-user.target.
Jun 25 18:40:29.329444 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jun 25 18:40:29.329904 systemd[1]: systemd-journald.service: Deactivated successfully.
Jun 25 18:40:29.587532 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 25 18:40:29.606481 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 25 18:40:29.606576 kernel: fuse: init (API version 7.39)
Jun 25 18:40:29.608855 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jun 25 18:40:29.610342 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jun 25 18:40:29.611807 systemd[1]: Mounted media.mount - External Media Directory.
Jun 25 18:40:29.613264 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jun 25 18:40:29.614805 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jun 25 18:40:29.616439 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jun 25 18:40:29.618095 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 25 18:40:29.620098 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jun 25 18:40:29.620338 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jun 25 18:40:29.622184 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 25 18:40:29.622374 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 25 18:40:29.624151 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 25 18:40:29.624369 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 25 18:40:29.626284 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jun 25 18:40:29.626529 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jun 25 18:40:29.628236 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 25 18:40:29.628460 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 25 18:40:29.630216 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 25 18:40:29.631946 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jun 25 18:40:29.633839 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jun 25 18:40:29.637523 kernel: ACPI: bus type drm_connector registered
Jun 25 18:40:29.640362 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 25 18:40:29.640594 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 25 18:40:29.655420 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jun 25 18:40:29.679742 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jun 25 18:40:29.682768 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jun 25 18:40:29.684159 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jun 25 18:40:29.684192 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 25 18:40:29.686502 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jun 25 18:40:29.689293 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jun 25 18:40:29.694274 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jun 25 18:40:29.695790 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 25 18:40:29.698880 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jun 25 18:40:29.702018 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jun 25 18:40:29.703663 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 25 18:40:29.705166 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jun 25 18:40:29.706563 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 25 18:40:29.715944 systemd-journald[1107]: Time spent on flushing to /var/log/journal/60c759ad23234fb980db218bd5fa622a is 22.408ms for 949 entries.
Jun 25 18:40:29.715944 systemd-journald[1107]: System Journal (/var/log/journal/60c759ad23234fb980db218bd5fa622a) is 8.0M, max 195.6M, 187.6M free.
Jun 25 18:40:29.972403 systemd-journald[1107]: Received client request to flush runtime journal.
Jun 25 18:40:29.972472 kernel: loop0: detected capacity change from 0 to 209816
Jun 25 18:40:29.972527 kernel: block loop0: the capability attribute has been deprecated.
Jun 25 18:40:29.972669 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jun 25 18:40:29.972697 kernel: loop1: detected capacity change from 0 to 139760
Jun 25 18:40:29.972736 kernel: loop2: detected capacity change from 0 to 80568
Jun 25 18:40:29.712176 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 25 18:40:29.720944 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jun 25 18:40:29.729553 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jun 25 18:40:29.732812 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 25 18:40:29.974601 kernel: loop3: detected capacity change from 0 to 209816
Jun 25 18:40:29.734267 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jun 25 18:40:29.750634 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jun 25 18:40:29.752598 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jun 25 18:40:29.772753 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jun 25 18:40:29.776988 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 25 18:40:29.791533 systemd-tmpfiles[1156]: ACLs are not supported, ignoring.
Jun 25 18:40:29.791549 systemd-tmpfiles[1156]: ACLs are not supported, ignoring.
Jun 25 18:40:29.799060 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 25 18:40:29.802741 udevadm[1165]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jun 25 18:40:29.870951 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jun 25 18:40:29.881736 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jun 25 18:40:29.917443 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jun 25 18:40:29.931703 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 25 18:40:29.948095 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Jun 25 18:40:29.948109 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Jun 25 18:40:29.953470 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 25 18:40:29.958860 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jun 25 18:40:29.960278 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jun 25 18:40:29.970785 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jun 25 18:40:29.975618 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jun 25 18:40:29.996533 kernel: loop4: detected capacity change from 0 to 139760
Jun 25 18:40:29.996965 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jun 25 18:40:29.997673 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jun 25 18:40:30.008531 kernel: loop5: detected capacity change from 0 to 80568
Jun 25 18:40:30.012946 (sd-merge)[1178]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jun 25 18:40:30.013633 (sd-merge)[1178]: Merged extensions into '/usr'.
Jun 25 18:40:30.019448 systemd[1]: Reloading requested from client PID 1150 ('systemd-sysext') (unit systemd-sysext.service)...
Jun 25 18:40:30.019464 systemd[1]: Reloading...
Jun 25 18:40:30.089563 zram_generator::config[1209]: No configuration found.
Jun 25 18:40:30.187350 ldconfig[1145]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jun 25 18:40:30.211527 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 25 18:40:30.263227 systemd[1]: Reloading finished in 243 ms.
Jun 25 18:40:30.295293 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jun 25 18:40:30.296911 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jun 25 18:40:30.312728 systemd[1]: Starting ensure-sysext.service...
Jun 25 18:40:30.315486 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jun 25 18:40:30.320897 systemd[1]: Reloading requested from client PID 1246 ('systemctl') (unit ensure-sysext.service)...
Jun 25 18:40:30.320914 systemd[1]: Reloading...
Jun 25 18:40:30.341737 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jun 25 18:40:30.342092 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jun 25 18:40:30.343123 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jun 25 18:40:30.343443 systemd-tmpfiles[1247]: ACLs are not supported, ignoring.
Jun 25 18:40:30.343558 systemd-tmpfiles[1247]: ACLs are not supported, ignoring.
Jun 25 18:40:30.346929 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot.
Jun 25 18:40:30.347077 systemd-tmpfiles[1247]: Skipping /boot
Jun 25 18:40:30.364259 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot.
Jun 25 18:40:30.364277 systemd-tmpfiles[1247]: Skipping /boot
Jun 25 18:40:30.374547 zram_generator::config[1272]: No configuration found.
Jun 25 18:40:30.501827 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 25 18:40:30.551856 systemd[1]: Reloading finished in 230 ms.
Jun 25 18:40:30.569145 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jun 25 18:40:30.594910 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jun 25 18:40:30.609815 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jun 25 18:40:30.613050 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jun 25 18:40:30.617821 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 25 18:40:30.623297 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jun 25 18:40:30.630787 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jun 25 18:40:30.649481 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 25 18:40:30.649719 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 25 18:40:30.651587 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 25 18:40:30.658639 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 25 18:40:30.661097 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 25 18:40:30.664165 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 25 18:40:30.664333 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 25 18:40:30.665646 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 25 18:40:30.665852 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 25 18:40:30.667585 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 25 18:40:30.667759 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 25 18:40:30.669672 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jun 25 18:40:30.672523 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jun 25 18:40:30.681805 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 25 18:40:30.682009 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 25 18:40:30.685957 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jun 25 18:40:30.695890 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jun 25 18:40:30.698040 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 25 18:40:30.698287 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 25 18:40:30.698364 augenrules[1343]: No rules
Jun 25 18:40:30.705890 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 25 18:40:30.722892 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 25 18:40:30.724845 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 25 18:40:30.729649 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 25 18:40:30.731074 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 25 18:40:30.732755 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 25 18:40:30.735743 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 25 18:40:30.736802 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:40:30.737665 systemd[1]: Finished ensure-sysext.service. Jun 25 18:40:30.739053 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 18:40:30.759043 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:40:30.759242 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:40:30.761029 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 18:40:30.761207 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 18:40:30.762694 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 18:40:30.762883 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 18:40:30.764537 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 25 18:40:30.766311 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 18:40:30.766550 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:40:30.768344 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 25 18:40:30.777070 systemd-udevd[1353]: Using default interface naming scheme 'v255'. Jun 25 18:40:30.777579 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 18:40:30.777667 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 18:40:30.778207 systemd-resolved[1317]: Positive Trust Anchors: Jun 25 18:40:30.778226 systemd-resolved[1317]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 18:40:30.778257 systemd-resolved[1317]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jun 25 18:40:30.782067 systemd-resolved[1317]: Defaulting to hostname 'linux'. Jun 25 18:40:30.782694 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 25 18:40:30.784045 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 25 18:40:30.784164 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 18:40:30.785613 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:40:30.800649 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:40:30.810799 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 18:40:30.841882 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 25 18:40:30.859811 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 25 18:40:30.862563 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1366) Jun 25 18:40:30.862928 systemd[1]: Reached target time-set.target - System Time Set. 
Jun 25 18:40:30.868275 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1382)
Jun 25 18:40:30.879802 systemd-networkd[1368]: lo: Link UP
Jun 25 18:40:30.879816 systemd-networkd[1368]: lo: Gained carrier
Jun 25 18:40:30.881424 systemd-networkd[1368]: Enumeration completed
Jun 25 18:40:30.881848 systemd-networkd[1368]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 25 18:40:30.881859 systemd-networkd[1368]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jun 25 18:40:30.882102 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jun 25 18:40:30.888169 systemd-networkd[1368]: eth0: Link UP
Jun 25 18:40:30.888181 systemd-networkd[1368]: eth0: Gained carrier
Jun 25 18:40:30.888194 systemd-networkd[1368]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 25 18:40:30.896105 systemd[1]: Reached target network.target - Network.
Jun 25 18:40:30.903714 systemd-networkd[1368]: eth0: DHCPv4 address 10.0.0.103/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jun 25 18:40:30.905272 systemd-timesyncd[1363]: Network configuration changed, trying to establish connection.
Jun 25 18:40:30.906070 systemd-timesyncd[1363]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jun 25 18:40:30.906110 systemd-timesyncd[1363]: Initial clock synchronization to Tue 2024-06-25 18:40:31.180326 UTC.
Jun 25 18:40:30.909367 systemd-networkd[1368]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 25 18:40:30.911748 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jun 25 18:40:30.915011 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jun 25 18:40:30.920719 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jun 25 18:40:30.932590 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jun 25 18:40:30.935534 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jun 25 18:40:30.946741 kernel: ACPI: button: Power Button [PWRF]
Jun 25 18:40:30.946760 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jun 25 18:40:30.946958 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jun 25 18:40:30.987036 kernel: mousedev: PS/2 mouse device common for all mice
Jun 25 18:40:30.988051 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 25 18:40:31.087498 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 25 18:40:31.112818 kernel: kvm_amd: TSC scaling supported
Jun 25 18:40:31.112879 kernel: kvm_amd: Nested Virtualization enabled
Jun 25 18:40:31.112897 kernel: kvm_amd: Nested Paging enabled
Jun 25 18:40:31.113840 kernel: kvm_amd: LBR virtualization supported
Jun 25 18:40:31.113855 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jun 25 18:40:31.114932 kernel: kvm_amd: Virtual GIF supported
Jun 25 18:40:31.137542 kernel: EDAC MC: Ver: 3.0.0
Jun 25 18:40:31.178119 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jun 25 18:40:31.192725 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jun 25 18:40:31.200952 lvm[1409]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jun 25 18:40:31.227903 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jun 25 18:40:31.230659 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 25 18:40:31.242272 systemd[1]: Reached target sysinit.target - System Initialization.
Jun 25 18:40:31.243633 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jun 25 18:40:31.245003 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jun 25 18:40:31.246570 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jun 25 18:40:31.248081 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jun 25 18:40:31.249413 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jun 25 18:40:31.250717 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jun 25 18:40:31.250751 systemd[1]: Reached target paths.target - Path Units.
Jun 25 18:40:31.251701 systemd[1]: Reached target timers.target - Timer Units.
Jun 25 18:40:31.254124 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jun 25 18:40:31.257454 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jun 25 18:40:31.273028 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jun 25 18:40:31.276105 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jun 25 18:40:31.278226 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jun 25 18:40:31.279595 systemd[1]: Reached target sockets.target - Socket Units.
Jun 25 18:40:31.280718 systemd[1]: Reached target basic.target - Basic System.
Jun 25 18:40:31.281842 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jun 25 18:40:31.281872 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jun 25 18:40:31.283111 systemd[1]: Starting containerd.service - containerd container runtime...
Jun 25 18:40:31.285474 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jun 25 18:40:31.288659 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jun 25 18:40:31.294703 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jun 25 18:40:31.295943 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jun 25 18:40:31.296921 lvm[1413]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jun 25 18:40:31.298819 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jun 25 18:40:31.299590 jq[1416]: false
Jun 25 18:40:31.304724 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jun 25 18:40:31.309716 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jun 25 18:40:31.313821 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jun 25 18:40:31.324383 extend-filesystems[1417]: Found loop3
Jun 25 18:40:31.332659 extend-filesystems[1417]: Found loop4
Jun 25 18:40:31.332659 extend-filesystems[1417]: Found loop5
Jun 25 18:40:31.332659 extend-filesystems[1417]: Found sr0
Jun 25 18:40:31.332659 extend-filesystems[1417]: Found vda
Jun 25 18:40:31.332659 extend-filesystems[1417]: Found vda1
Jun 25 18:40:31.332659 extend-filesystems[1417]: Found vda2
Jun 25 18:40:31.332659 extend-filesystems[1417]: Found vda3
Jun 25 18:40:31.332659 extend-filesystems[1417]: Found usr
Jun 25 18:40:31.332659 extend-filesystems[1417]: Found vda4
Jun 25 18:40:31.332659 extend-filesystems[1417]: Found vda6
Jun 25 18:40:31.332659 extend-filesystems[1417]: Found vda7
Jun 25 18:40:31.332659 extend-filesystems[1417]: Found vda9
Jun 25 18:40:31.332659 extend-filesystems[1417]: Checking size of /dev/vda9
Jun 25 18:40:31.363268 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jun 25 18:40:31.330130 systemd[1]: Starting systemd-logind.service - User Login Management...
Jun 25 18:40:31.363472 extend-filesystems[1417]: Resized partition /dev/vda9
Jun 25 18:40:31.342568 dbus-daemon[1415]: [system] SELinux support is enabled
Jun 25 18:40:31.331917 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jun 25 18:40:31.366255 extend-filesystems[1437]: resize2fs 1.47.0 (5-Feb-2023)
Jun 25 18:40:31.373720 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1374)
Jun 25 18:40:31.332481 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jun 25 18:40:31.373959 jq[1436]: true
Jun 25 18:40:31.335760 systemd[1]: Starting update-engine.service - Update Engine...
Jun 25 18:40:31.339735 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jun 25 18:40:31.344136 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jun 25 18:40:31.347326 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jun 25 18:40:31.362732 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jun 25 18:40:31.362991 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jun 25 18:40:31.363379 systemd[1]: motdgen.service: Deactivated successfully.
Jun 25 18:40:31.365176 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jun 25 18:40:31.369930 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jun 25 18:40:31.370158 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jun 25 18:40:31.375334 update_engine[1431]: I0625 18:40:31.375258 1431 main.cc:92] Flatcar Update Engine starting
Jun 25 18:40:31.386189 update_engine[1431]: I0625 18:40:31.386026 1431 update_check_scheduler.cc:74] Next update check in 8m46s
Jun 25 18:40:31.390958 jq[1442]: true
Jun 25 18:40:31.392258 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jun 25 18:40:31.397955 (ntainerd)[1443]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jun 25 18:40:31.414679 extend-filesystems[1437]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jun 25 18:40:31.414679 extend-filesystems[1437]: old_desc_blocks = 1, new_desc_blocks = 1
Jun 25 18:40:31.414679 extend-filesystems[1437]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jun 25 18:40:31.421937 extend-filesystems[1417]: Resized filesystem in /dev/vda9
Jun 25 18:40:31.416729 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jun 25 18:40:31.416926 systemd-logind[1428]: Watching system buttons on /dev/input/event1 (Power Button)
Jun 25 18:40:31.416947 systemd-logind[1428]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jun 25 18:40:31.416995 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jun 25 18:40:31.417231 systemd-logind[1428]: New seat seat0.
Jun 25 18:40:31.426495 tar[1440]: linux-amd64/helm
Jun 25 18:40:31.427817 systemd[1]: Started systemd-logind.service - User Login Management.
Jun 25 18:40:31.435162 systemd[1]: Started update-engine.service - Update Engine.
Jun 25 18:40:31.438338 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jun 25 18:40:31.440340 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jun 25 18:40:31.440480 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jun 25 18:40:31.445766 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jun 25 18:40:31.445927 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jun 25 18:40:31.455757 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jun 25 18:40:31.474684 bash[1472]: Updated "/home/core/.ssh/authorized_keys"
Jun 25 18:40:31.480333 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jun 25 18:40:31.485266 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jun 25 18:40:31.489783 locksmithd[1471]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jun 25 18:40:31.623024 containerd[1443]: time="2024-06-25T18:40:31.621861080Z" level=info msg="starting containerd" revision=cd7148ac666309abf41fd4a49a8a5895b905e7f3 version=v1.7.18
Jun 25 18:40:31.646232 containerd[1443]: time="2024-06-25T18:40:31.645973639Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jun 25 18:40:31.646232 containerd[1443]: time="2024-06-25T18:40:31.646020539Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jun 25 18:40:31.648140 containerd[1443]: time="2024-06-25T18:40:31.648087862Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.35-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jun 25 18:40:31.648140 containerd[1443]: time="2024-06-25T18:40:31.648135074Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jun 25 18:40:31.648442 containerd[1443]: time="2024-06-25T18:40:31.648417822Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jun 25 18:40:31.648442 containerd[1443]: time="2024-06-25T18:40:31.648439593Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jun 25 18:40:31.648572 containerd[1443]: time="2024-06-25T18:40:31.648554552Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jun 25 18:40:31.648639 containerd[1443]: time="2024-06-25T18:40:31.648621835Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jun 25 18:40:31.648672 containerd[1443]: time="2024-06-25T18:40:31.648638913Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jun 25 18:40:31.648741 containerd[1443]: time="2024-06-25T18:40:31.648725801Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jun 25 18:40:31.648991 containerd[1443]: time="2024-06-25T18:40:31.648963721Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jun 25 18:40:31.648991 containerd[1443]: time="2024-06-25T18:40:31.648986571Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jun 25 18:40:31.649040 containerd[1443]: time="2024-06-25T18:40:31.648996891Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jun 25 18:40:31.649142 containerd[1443]: time="2024-06-25T18:40:31.649116504Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jun 25 18:40:31.649142 containerd[1443]: time="2024-06-25T18:40:31.649134037Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jun 25 18:40:31.649235 containerd[1443]: time="2024-06-25T18:40:31.649213289Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jun 25 18:40:31.649235 containerd[1443]: time="2024-06-25T18:40:31.649229796Z" level=info msg="metadata content store policy set" policy=shared
Jun 25 18:40:31.655785 containerd[1443]: time="2024-06-25T18:40:31.655758990Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jun 25 18:40:31.655824 containerd[1443]: time="2024-06-25T18:40:31.655786585Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jun 25 18:40:31.655824 containerd[1443]: time="2024-06-25T18:40:31.655801092Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jun 25 18:40:31.655871 containerd[1443]: time="2024-06-25T18:40:31.655831682Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jun 25 18:40:31.655871 containerd[1443]: time="2024-06-25T18:40:31.655847433Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jun 25 18:40:31.655871 containerd[1443]: time="2024-06-25T18:40:31.655863515Z" level=info msg="NRI interface is disabled by configuration."
Jun 25 18:40:31.655932 containerd[1443]: time="2024-06-25T18:40:31.655875919Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jun 25 18:40:31.656045 containerd[1443]: time="2024-06-25T18:40:31.656019355Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jun 25 18:40:31.656073 containerd[1443]: time="2024-06-25T18:40:31.656044774Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jun 25 18:40:31.656093 containerd[1443]: time="2024-06-25T18:40:31.656058970Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jun 25 18:40:31.656114 containerd[1443]: time="2024-06-25T18:40:31.656101207Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jun 25 18:40:31.656134 containerd[1443]: time="2024-06-25T18:40:31.656117425Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jun 25 18:40:31.656155 containerd[1443]: time="2024-06-25T18:40:31.656135299Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jun 25 18:40:31.656182 containerd[1443]: time="2024-06-25T18:40:31.656167122Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jun 25 18:40:31.656202 containerd[1443]: time="2024-06-25T18:40:31.656183018Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jun 25 18:40:31.656202 containerd[1443]: time="2024-06-25T18:40:31.656198552Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jun 25 18:40:31.656240 containerd[1443]: time="2024-06-25T18:40:31.656212208Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jun 25 18:40:31.656260 containerd[1443]: time="2024-06-25T18:40:31.656225358Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jun 25 18:40:31.656260 containerd[1443]: time="2024-06-25T18:40:31.656253544Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jun 25 18:40:31.656415 containerd[1443]: time="2024-06-25T18:40:31.656383530Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jun 25 18:40:31.656823 containerd[1443]: time="2024-06-25T18:40:31.656794149Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jun 25 18:40:31.656854 containerd[1443]: time="2024-06-25T18:40:31.656831039Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jun 25 18:40:31.656854 containerd[1443]: time="2024-06-25T18:40:31.656846417Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jun 25 18:40:31.656907 containerd[1443]: time="2024-06-25T18:40:31.656868478Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jun 25 18:40:31.657028 containerd[1443]: time="2024-06-25T18:40:31.656952248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jun 25 18:40:31.657028 containerd[1443]: time="2024-06-25T18:40:31.656971780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jun 25 18:40:31.657072 containerd[1443]: time="2024-06-25T18:40:31.656983853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jun 25 18:40:31.657072 containerd[1443]: time="2024-06-25T18:40:31.657064700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jun 25 18:40:31.657109 containerd[1443]: time="2024-06-25T18:40:31.657078389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jun 25 18:40:31.657109 containerd[1443]: time="2024-06-25T18:40:31.657091519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jun 25 18:40:31.657152 containerd[1443]: time="2024-06-25T18:40:31.657103932Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jun 25 18:40:31.657152 containerd[1443]: time="2024-06-25T18:40:31.657132875Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jun 25 18:40:31.657152 containerd[1443]: time="2024-06-25T18:40:31.657147733Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jun 25 18:40:31.657365 containerd[1443]: time="2024-06-25T18:40:31.657341542Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jun 25 18:40:31.657394 containerd[1443]: time="2024-06-25T18:40:31.657379830Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jun 25 18:40:31.657415 containerd[1443]: time="2024-06-25T18:40:31.657395311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jun 25 18:40:31.657415 containerd[1443]: time="2024-06-25T18:40:31.657408430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jun 25 18:40:31.657467 containerd[1443]: time="2024-06-25T18:40:31.657421331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jun 25 18:40:31.657467 containerd[1443]: time="2024-06-25T18:40:31.657451113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jun 25 18:40:31.657504 containerd[1443]: time="2024-06-25T18:40:31.657471631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jun 25 18:40:31.657504 containerd[1443]: time="2024-06-25T18:40:31.657484521Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jun 25 18:40:31.657873 containerd[1443]: time="2024-06-25T18:40:31.657790087Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jun 25 18:40:31.657873 containerd[1443]: time="2024-06-25T18:40:31.657872789Z" level=info msg="Connect containerd service"
Jun 25 18:40:31.658037 containerd[1443]: time="2024-06-25T18:40:31.657896145Z" level=info msg="using legacy CRI server"
Jun 25 18:40:31.658037 containerd[1443]: time="2024-06-25T18:40:31.657903151Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jun 25 18:40:31.658037 containerd[1443]: time="2024-06-25T18:40:31.658009147Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jun 25 18:40:31.660313 containerd[1443]: time="2024-06-25T18:40:31.660261978Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 25 18:40:31.660352 containerd[1443]: time="2024-06-25T18:40:31.660322557Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jun 25 18:40:31.660386 containerd[1443]: time="2024-06-25T18:40:31.660341406Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jun 25 18:40:31.660386 containerd[1443]: time="2024-06-25T18:40:31.660375353Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jun 25 18:40:31.660430 containerd[1443]: time="2024-06-25T18:40:31.660388648Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jun 25 18:40:31.660481 containerd[1443]: time="2024-06-25T18:40:31.660449258Z" level=info msg="Start subscribing containerd event"
Jun 25 18:40:31.660535 containerd[1443]: time="2024-06-25T18:40:31.660495318Z" level=info msg="Start recovering state"
Jun 25 18:40:31.660610 containerd[1443]: time="2024-06-25T18:40:31.660590859Z" level=info msg="Start event monitor"
Jun 25 18:40:31.660644 containerd[1443]: time="2024-06-25T18:40:31.660613490Z" level=info msg="Start snapshots syncer"
Jun 25 18:40:31.660644 containerd[1443]: time="2024-06-25T18:40:31.660622847Z" level=info msg="Start cni network conf syncer for default"
Jun 25 18:40:31.660644 containerd[1443]: time="2024-06-25T18:40:31.660629956Z" level=info msg="Start streaming server"
Jun 25 18:40:31.661083 containerd[1443]: time="2024-06-25T18:40:31.661054658Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jun 25 18:40:31.661141 containerd[1443]: time="2024-06-25T18:40:31.661121567Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jun 25 18:40:31.661277 systemd[1]: Started containerd.service - containerd container runtime.
Jun 25 18:40:31.662745 containerd[1443]: time="2024-06-25T18:40:31.661998960Z" level=info msg="containerd successfully booted in 0.041851s"
Jun 25 18:40:31.831781 tar[1440]: linux-amd64/LICENSE
Jun 25 18:40:31.831896 tar[1440]: linux-amd64/README.md
Jun 25 18:40:31.845307 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jun 25 18:40:31.954115 sshd_keygen[1438]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jun 25 18:40:31.977423 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jun 25 18:40:31.989789 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jun 25 18:40:31.992060 systemd[1]: Started sshd@0-10.0.0.103:22-10.0.0.1:55522.service - OpenSSH per-connection server daemon (10.0.0.1:55522).
Jun 25 18:40:32.000505 systemd[1]: issuegen.service: Deactivated successfully.
Jun 25 18:40:32.000796 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jun 25 18:40:32.012856 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jun 25 18:40:32.024402 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jun 25 18:40:32.027718 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jun 25 18:40:32.030124 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jun 25 18:40:32.031585 systemd[1]: Reached target getty.target - Login Prompts.
Jun 25 18:40:32.041787 sshd[1500]: Accepted publickey for core from 10.0.0.1 port 55522 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA
Jun 25 18:40:32.043641 sshd[1500]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:40:32.052241 systemd-logind[1428]: New session 1 of user core.
Jun 25 18:40:32.053769 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jun 25 18:40:32.066781 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jun 25 18:40:32.077742 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jun 25 18:40:32.093799 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jun 25 18:40:32.097594 (systemd)[1511]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:40:32.197771 systemd[1511]: Queued start job for default target default.target.
Jun 25 18:40:32.206905 systemd[1511]: Created slice app.slice - User Application Slice.
Jun 25 18:40:32.206935 systemd[1511]: Reached target paths.target - Paths.
Jun 25 18:40:32.206953 systemd[1511]: Reached target timers.target - Timers.
Jun 25 18:40:32.208502 systemd[1511]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jun 25 18:40:32.220197 systemd[1511]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jun 25 18:40:32.220329 systemd[1511]: Reached target sockets.target - Sockets.
Jun 25 18:40:32.220348 systemd[1511]: Reached target basic.target - Basic System.
Jun 25 18:40:32.220385 systemd[1511]: Reached target default.target - Main User Target.
Jun 25 18:40:32.220418 systemd[1511]: Startup finished in 116ms.
Jun 25 18:40:32.220884 systemd[1]: Started user@500.service - User Manager for UID 500.
Jun 25 18:40:32.223601 systemd[1]: Started session-1.scope - Session 1 of User core.
Jun 25 18:40:32.287773 systemd[1]: Started sshd@1-10.0.0.103:22-10.0.0.1:55524.service - OpenSSH per-connection server daemon (10.0.0.1:55524).
Jun 25 18:40:32.340694 sshd[1522]: Accepted publickey for core from 10.0.0.1 port 55524 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA
Jun 25 18:40:32.342221 sshd[1522]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:40:32.346250 systemd-logind[1428]: New session 2 of user core.
Jun 25 18:40:32.356686 systemd[1]: Started session-2.scope - Session 2 of User core.
Jun 25 18:40:32.414046 sshd[1522]: pam_unix(sshd:session): session closed for user core
Jun 25 18:40:32.430754 systemd[1]: sshd@1-10.0.0.103:22-10.0.0.1:55524.service: Deactivated successfully.
Jun 25 18:40:32.432650 systemd[1]: session-2.scope: Deactivated successfully.
Jun 25 18:40:32.434199 systemd-logind[1428]: Session 2 logged out. Waiting for processes to exit.
Jun 25 18:40:32.435569 systemd[1]: Started sshd@2-10.0.0.103:22-10.0.0.1:55528.service - OpenSSH per-connection server daemon (10.0.0.1:55528).
Jun 25 18:40:32.438011 systemd-logind[1428]: Removed session 2.
Jun 25 18:40:32.473791 sshd[1529]: Accepted publickey for core from 10.0.0.1 port 55528 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA
Jun 25 18:40:32.475292 sshd[1529]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:40:32.479424 systemd-logind[1428]: New session 3 of user core.
Jun 25 18:40:32.488747 systemd[1]: Started session-3.scope - Session 3 of User core.
Jun 25 18:40:32.547047 sshd[1529]: pam_unix(sshd:session): session closed for user core
Jun 25 18:40:32.550888 systemd[1]: sshd@2-10.0.0.103:22-10.0.0.1:55528.service: Deactivated successfully.
Jun 25 18:40:32.552666 systemd[1]: session-3.scope: Deactivated successfully.
Jun 25 18:40:32.553288 systemd-logind[1428]: Session 3 logged out. Waiting for processes to exit.
Jun 25 18:40:32.554174 systemd-logind[1428]: Removed session 3.
Jun 25 18:40:32.939619 systemd-networkd[1368]: eth0: Gained IPv6LL
Jun 25 18:40:32.943306 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jun 25 18:40:32.945214 systemd[1]: Reached target network-online.target - Network is Online.
Jun 25 18:40:32.955801 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jun 25 18:40:32.958346 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 25 18:40:32.960926 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jun 25 18:40:32.983637 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jun 25 18:40:32.985263 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jun 25 18:40:32.985476 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jun 25 18:40:32.987894 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jun 25 18:40:33.600560 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 25 18:40:33.612473 (kubelet)[1557]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 25 18:40:33.613388 systemd[1]: Reached target multi-user.target - Multi-User System.
Jun 25 18:40:33.614705 systemd[1]: Startup finished in 942ms (kernel) + 5.951s (initrd) + 4.976s (userspace) = 11.871s.
Jun 25 18:40:34.116702 kubelet[1557]: E0625 18:40:34.116612 1557 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 25 18:40:34.121197 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 25 18:40:34.121415 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 25 18:40:34.121781 systemd[1]: kubelet.service: Consumed 1.020s CPU time.
Jun 25 18:40:42.721606 systemd[1]: Started sshd@3-10.0.0.103:22-10.0.0.1:39334.service - OpenSSH per-connection server daemon (10.0.0.1:39334).
Jun 25 18:40:42.759455 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 39334 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA
Jun 25 18:40:42.760932 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:40:42.765493 systemd-logind[1428]: New session 4 of user core.
Jun 25 18:40:42.775748 systemd[1]: Started session-4.scope - Session 4 of User core.
Jun 25 18:40:42.833218 sshd[1571]: pam_unix(sshd:session): session closed for user core
Jun 25 18:40:42.849441 systemd[1]: sshd@3-10.0.0.103:22-10.0.0.1:39334.service: Deactivated successfully.
Jun 25 18:40:42.851340 systemd[1]: session-4.scope: Deactivated successfully.
Jun 25 18:40:42.852997 systemd-logind[1428]: Session 4 logged out. Waiting for processes to exit.
Jun 25 18:40:42.854276 systemd[1]: Started sshd@4-10.0.0.103:22-10.0.0.1:39344.service - OpenSSH per-connection server daemon (10.0.0.1:39344).
Jun 25 18:40:42.855229 systemd-logind[1428]: Removed session 4.
Jun 25 18:40:42.908191 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 39344 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA
Jun 25 18:40:42.909672 sshd[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:40:42.913362 systemd-logind[1428]: New session 5 of user core.
Jun 25 18:40:42.922712 systemd[1]: Started session-5.scope - Session 5 of User core.
Jun 25 18:40:42.974220 sshd[1578]: pam_unix(sshd:session): session closed for user core
Jun 25 18:40:42.981435 systemd[1]: sshd@4-10.0.0.103:22-10.0.0.1:39344.service: Deactivated successfully.
Jun 25 18:40:42.983434 systemd[1]: session-5.scope: Deactivated successfully.
Jun 25 18:40:42.985082 systemd-logind[1428]: Session 5 logged out. Waiting for processes to exit.
Jun 25 18:40:42.995919 systemd[1]: Started sshd@5-10.0.0.103:22-10.0.0.1:39350.service - OpenSSH per-connection server daemon (10.0.0.1:39350).
Jun 25 18:40:42.996806 systemd-logind[1428]: Removed session 5.
Jun 25 18:40:43.028855 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 39350 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA
Jun 25 18:40:43.030541 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:40:43.034664 systemd-logind[1428]: New session 6 of user core.
Jun 25 18:40:43.051773 systemd[1]: Started session-6.scope - Session 6 of User core.
Jun 25 18:40:43.106701 sshd[1585]: pam_unix(sshd:session): session closed for user core
Jun 25 18:40:43.122324 systemd[1]: sshd@5-10.0.0.103:22-10.0.0.1:39350.service: Deactivated successfully.
Jun 25 18:40:43.123964 systemd[1]: session-6.scope: Deactivated successfully.
Jun 25 18:40:43.125295 systemd-logind[1428]: Session 6 logged out. Waiting for processes to exit.
Jun 25 18:40:43.126435 systemd[1]: Started sshd@6-10.0.0.103:22-10.0.0.1:39366.service - OpenSSH per-connection server daemon (10.0.0.1:39366).
Jun 25 18:40:43.127163 systemd-logind[1428]: Removed session 6.
Jun 25 18:40:43.163649 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 39366 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA
Jun 25 18:40:43.165024 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:40:43.168853 systemd-logind[1428]: New session 7 of user core.
Jun 25 18:40:43.179632 systemd[1]: Started session-7.scope - Session 7 of User core.
Jun 25 18:40:43.331562 sudo[1596]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jun 25 18:40:43.331870 sudo[1596]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jun 25 18:40:43.355094 sudo[1596]: pam_unix(sudo:session): session closed for user root
Jun 25 18:40:43.357459 sshd[1593]: pam_unix(sshd:session): session closed for user core
Jun 25 18:40:43.373556 systemd[1]: sshd@6-10.0.0.103:22-10.0.0.1:39366.service: Deactivated successfully.
Jun 25 18:40:43.375306 systemd[1]: session-7.scope: Deactivated successfully.
Jun 25 18:40:43.376975 systemd-logind[1428]: Session 7 logged out. Waiting for processes to exit.
Jun 25 18:40:43.378439 systemd[1]: Started sshd@7-10.0.0.103:22-10.0.0.1:39376.service - OpenSSH per-connection server daemon (10.0.0.1:39376).
Jun 25 18:40:43.379116 systemd-logind[1428]: Removed session 7.
Jun 25 18:40:43.417781 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 39376 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA
Jun 25 18:40:43.419439 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:40:43.423414 systemd-logind[1428]: New session 8 of user core.
Jun 25 18:40:43.432655 systemd[1]: Started session-8.scope - Session 8 of User core.
Jun 25 18:40:43.488913 sudo[1605]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jun 25 18:40:43.489207 sudo[1605]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jun 25 18:40:43.493160 sudo[1605]: pam_unix(sudo:session): session closed for user root
Jun 25 18:40:43.500346 sudo[1604]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jun 25 18:40:43.500723 sudo[1604]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jun 25 18:40:43.519811 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jun 25 18:40:43.521375 auditctl[1608]: No rules
Jun 25 18:40:43.521832 systemd[1]: audit-rules.service: Deactivated successfully.
Jun 25 18:40:43.522107 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jun 25 18:40:43.525295 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jun 25 18:40:43.557223 augenrules[1626]: No rules
Jun 25 18:40:43.559158 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jun 25 18:40:43.560722 sudo[1604]: pam_unix(sudo:session): session closed for user root
Jun 25 18:40:43.562549 sshd[1601]: pam_unix(sshd:session): session closed for user core
Jun 25 18:40:43.576932 systemd[1]: sshd@7-10.0.0.103:22-10.0.0.1:39376.service: Deactivated successfully.
Jun 25 18:40:43.578822 systemd[1]: session-8.scope: Deactivated successfully.
Jun 25 18:40:43.580367 systemd-logind[1428]: Session 8 logged out. Waiting for processes to exit.
Jun 25 18:40:43.581698 systemd[1]: Started sshd@8-10.0.0.103:22-10.0.0.1:39384.service - OpenSSH per-connection server daemon (10.0.0.1:39384).
Jun 25 18:40:43.582693 systemd-logind[1428]: Removed session 8.
Jun 25 18:40:43.630386 sshd[1634]: Accepted publickey for core from 10.0.0.1 port 39384 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA
Jun 25 18:40:43.631830 sshd[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:40:43.635684 systemd-logind[1428]: New session 9 of user core.
Jun 25 18:40:43.647630 systemd[1]: Started session-9.scope - Session 9 of User core.
Jun 25 18:40:43.700906 sudo[1637]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jun 25 18:40:43.701191 sudo[1637]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jun 25 18:40:43.811738 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jun 25 18:40:43.811957 (dockerd)[1648]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jun 25 18:40:44.052677 dockerd[1648]: time="2024-06-25T18:40:44.052533918Z" level=info msg="Starting up"
Jun 25 18:40:44.108872 dockerd[1648]: time="2024-06-25T18:40:44.108835459Z" level=info msg="Loading containers: start."
Jun 25 18:40:44.216548 kernel: Initializing XFRM netlink socket
Jun 25 18:40:44.249107 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jun 25 18:40:44.257747 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 25 18:40:44.296333 systemd-networkd[1368]: docker0: Link UP
Jun 25 18:40:44.411558 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 25 18:40:44.416148 (kubelet)[1745]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 25 18:40:44.562954 kubelet[1745]: E0625 18:40:44.562820 1745 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 25 18:40:44.570624 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 25 18:40:44.570840 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 25 18:40:44.637147 dockerd[1648]: time="2024-06-25T18:40:44.637105017Z" level=info msg="Loading containers: done."
Jun 25 18:40:44.687916 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3806316975-merged.mount: Deactivated successfully.
Jun 25 18:40:44.722035 dockerd[1648]: time="2024-06-25T18:40:44.721885003Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jun 25 18:40:44.722235 dockerd[1648]: time="2024-06-25T18:40:44.722202291Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Jun 25 18:40:44.722409 dockerd[1648]: time="2024-06-25T18:40:44.722369830Z" level=info msg="Daemon has completed initialization"
Jun 25 18:40:44.762745 dockerd[1648]: time="2024-06-25T18:40:44.762663910Z" level=info msg="API listen on /run/docker.sock"
Jun 25 18:40:44.762844 systemd[1]: Started docker.service - Docker Application Container Engine.
Jun 25 18:40:45.494065 containerd[1443]: time="2024-06-25T18:40:45.494022198Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\""
Jun 25 18:40:46.197442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2045518375.mount: Deactivated successfully.
Jun 25 18:40:47.238048 containerd[1443]: time="2024-06-25T18:40:47.237982554Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:40:47.238770 containerd[1443]: time="2024-06-25T18:40:47.238714382Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.11: active requests=0, bytes read=34605178"
Jun 25 18:40:47.241524 containerd[1443]: time="2024-06-25T18:40:47.240771207Z" level=info msg="ImageCreate event name:\"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:40:47.243718 containerd[1443]: time="2024-06-25T18:40:47.243685170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:40:47.244798 containerd[1443]: time="2024-06-25T18:40:47.244733737Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.11\" with image id \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\", size \"34601978\" in 1.750669351s"
Jun 25 18:40:47.244798 containerd[1443]: time="2024-06-25T18:40:47.244794304Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\""
Jun 25 18:40:47.267239 containerd[1443]: time="2024-06-25T18:40:47.267198963Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\""
Jun 25 18:40:49.145196 containerd[1443]: time="2024-06-25T18:40:49.145119643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:40:49.146299 containerd[1443]: time="2024-06-25T18:40:49.146251315Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.11: active requests=0, bytes read=31719491"
Jun 25 18:40:49.149078 containerd[1443]: time="2024-06-25T18:40:49.149045714Z" level=info msg="ImageCreate event name:\"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:40:49.152263 containerd[1443]: time="2024-06-25T18:40:49.152207757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:40:49.153247 containerd[1443]: time="2024-06-25T18:40:49.153211847Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.11\" with image id \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\", size \"33315989\" in 1.885977185s"
Jun 25 18:40:49.153306 containerd[1443]: time="2024-06-25T18:40:49.153247413Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\""
Jun 25 18:40:49.179309 containerd[1443]: time="2024-06-25T18:40:49.179249032Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\""
Jun 25 18:40:50.946369 containerd[1443]: time="2024-06-25T18:40:50.946289311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:40:50.947096 containerd[1443]: time="2024-06-25T18:40:50.947031617Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.11: active requests=0, bytes read=16925505"
Jun 25 18:40:50.948315 containerd[1443]: time="2024-06-25T18:40:50.948276142Z" level=info msg="ImageCreate event name:\"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:40:50.951652 containerd[1443]: time="2024-06-25T18:40:50.951620904Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:40:50.952701 containerd[1443]: time="2024-06-25T18:40:50.952664397Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.11\" with image id \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\", size \"18522021\" in 1.773372477s"
Jun 25 18:40:50.952740 containerd[1443]: time="2024-06-25T18:40:50.952706008Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\""
Jun 25 18:40:50.982855 containerd[1443]: time="2024-06-25T18:40:50.982815250Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\""
Jun 25 18:40:52.414931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3817927348.mount: Deactivated successfully.
Jun 25 18:40:52.920426 containerd[1443]: time="2024-06-25T18:40:52.920369719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:40:52.921109 containerd[1443]: time="2024-06-25T18:40:52.921042904Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.11: active requests=0, bytes read=28118419"
Jun 25 18:40:52.922279 containerd[1443]: time="2024-06-25T18:40:52.922246018Z" level=info msg="ImageCreate event name:\"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:40:52.924745 containerd[1443]: time="2024-06-25T18:40:52.924686904Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:40:52.974109 containerd[1443]: time="2024-06-25T18:40:52.974035584Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.11\" with image id \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\", repo tag \"registry.k8s.io/kube-proxy:v1.28.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\", size \"28117438\" in 1.991170496s"
Jun 25 18:40:52.974109 containerd[1443]: time="2024-06-25T18:40:52.974103672Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\""
Jun 25 18:40:53.003733 containerd[1443]: time="2024-06-25T18:40:53.003686515Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jun 25 18:40:53.622678 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3776627417.mount: Deactivated successfully.
Jun 25 18:40:53.634167 containerd[1443]: time="2024-06-25T18:40:53.634074921Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:40:53.634894 containerd[1443]: time="2024-06-25T18:40:53.634813654Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Jun 25 18:40:53.636256 containerd[1443]: time="2024-06-25T18:40:53.636203627Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:40:53.638986 containerd[1443]: time="2024-06-25T18:40:53.638899011Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:40:53.639843 containerd[1443]: time="2024-06-25T18:40:53.639773905Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 636.041902ms"
Jun 25 18:40:53.639843 containerd[1443]: time="2024-06-25T18:40:53.639827081Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jun 25 18:40:53.671785 containerd[1443]: time="2024-06-25T18:40:53.671358559Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Jun 25 18:40:54.249713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount466193819.mount: Deactivated successfully.
Jun 25 18:40:54.821167 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jun 25 18:40:54.830936 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 25 18:40:54.997332 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 25 18:40:55.002638 (kubelet)[1932]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 25 18:40:55.273137 kubelet[1932]: E0625 18:40:55.272955 1932 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 25 18:40:55.278875 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 25 18:40:55.279237 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 25 18:40:57.488220 containerd[1443]: time="2024-06-25T18:40:57.488110025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:40:57.488971 containerd[1443]: time="2024-06-25T18:40:57.488901681Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625"
Jun 25 18:40:57.490463 containerd[1443]: time="2024-06-25T18:40:57.490425135Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:40:57.495038 containerd[1443]: time="2024-06-25T18:40:57.494964948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:40:57.496193 containerd[1443]: time="2024-06-25T18:40:57.496150881Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.824710236s"
Jun 25 18:40:57.496249 containerd[1443]: time="2024-06-25T18:40:57.496193516Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Jun 25 18:40:57.522763 containerd[1443]: time="2024-06-25T18:40:57.522711561Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""
Jun 25 18:40:58.253920 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2955921634.mount: Deactivated successfully.
Jun 25 18:40:59.275835 containerd[1443]: time="2024-06-25T18:40:59.275760842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:40:59.276860 containerd[1443]: time="2024-06-25T18:40:59.276779710Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=16191749"
Jun 25 18:40:59.278344 containerd[1443]: time="2024-06-25T18:40:59.278304620Z" level=info msg="ImageCreate event name:\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:40:59.281240 containerd[1443]: time="2024-06-25T18:40:59.281181384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:40:59.284127 containerd[1443]: time="2024-06-25T18:40:59.283604949Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"16190758\" in 1.760838421s"
Jun 25 18:40:59.284127 containerd[1443]: time="2024-06-25T18:40:59.283659907Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\""
Jun 25 18:41:02.092948 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 25 18:41:02.102738 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 25 18:41:02.122919 systemd[1]: Reloading requested from client PID 2068 ('systemctl') (unit session-9.scope)...
Jun 25 18:41:02.122940 systemd[1]: Reloading...
Jun 25 18:41:02.212584 zram_generator::config[2105]: No configuration found.
Jun 25 18:41:02.470076 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 25 18:41:02.577677 systemd[1]: Reloading finished in 454 ms.
Jun 25 18:41:02.630124 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jun 25 18:41:02.630219 systemd[1]: kubelet.service: Failed with result 'signal'.
Jun 25 18:41:02.630558 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 25 18:41:02.633818 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 25 18:41:02.804961 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 25 18:41:02.810957 (kubelet)[2154]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 25 18:41:02.858650 kubelet[2154]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:41:02.858650 kubelet[2154]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 18:41:02.858650 kubelet[2154]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:41:02.859062 kubelet[2154]: I0625 18:41:02.858717 2154 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 18:41:03.492527 kubelet[2154]: I0625 18:41:03.492457 2154 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 25 18:41:03.492527 kubelet[2154]: I0625 18:41:03.492493 2154 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 18:41:03.492739 kubelet[2154]: I0625 18:41:03.492722 2154 server.go:895] "Client rotation is on, will bootstrap in background" Jun 25 18:41:03.506334 kubelet[2154]: I0625 18:41:03.506293 2154 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 18:41:03.506334 kubelet[2154]: E0625 18:41:03.506310 2154 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
"https://10.0.0.103:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.103:6443: connect: connection refused Jun 25 18:41:03.533812 kubelet[2154]: I0625 18:41:03.533771 2154 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 25 18:41:03.536328 kubelet[2154]: I0625 18:41:03.536291 2154 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 18:41:03.536529 kubelet[2154]: I0625 18:41:03.536477 2154 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 18:41:03.536848 
kubelet[2154]: I0625 18:41:03.536824 2154 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 18:41:03.536848 kubelet[2154]: I0625 18:41:03.536841 2154 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 18:41:03.541033 kubelet[2154]: I0625 18:41:03.540988 2154 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:41:03.555070 kubelet[2154]: I0625 18:41:03.554991 2154 kubelet.go:393] "Attempting to sync node with API server" Jun 25 18:41:03.555070 kubelet[2154]: I0625 18:41:03.555027 2154 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 18:41:03.555070 kubelet[2154]: I0625 18:41:03.555078 2154 kubelet.go:309] "Adding apiserver pod source" Jun 25 18:41:03.555321 kubelet[2154]: I0625 18:41:03.555103 2154 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 18:41:03.557607 kubelet[2154]: W0625 18:41:03.557500 2154 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Jun 25 18:41:03.557607 kubelet[2154]: E0625 18:41:03.557600 2154 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Jun 25 18:41:03.558527 kubelet[2154]: W0625 18:41:03.558447 2154 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.103:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Jun 25 18:41:03.558591 kubelet[2154]: E0625 18:41:03.558543 2154 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: 
failed to list *v1.Service: Get "https://10.0.0.103:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Jun 25 18:41:03.566664 kubelet[2154]: I0625 18:41:03.566630 2154 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Jun 25 18:41:03.571642 kubelet[2154]: W0625 18:41:03.571591 2154 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 25 18:41:03.572434 kubelet[2154]: I0625 18:41:03.572414 2154 server.go:1232] "Started kubelet" Jun 25 18:41:03.573302 kubelet[2154]: I0625 18:41:03.572600 2154 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 18:41:03.573302 kubelet[2154]: I0625 18:41:03.572647 2154 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 25 18:41:03.573302 kubelet[2154]: I0625 18:41:03.573137 2154 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 18:41:03.573302 kubelet[2154]: E0625 18:41:03.573293 2154 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 25 18:41:03.573746 kubelet[2154]: E0625 18:41:03.573316 2154 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 18:41:03.573746 kubelet[2154]: I0625 18:41:03.573605 2154 server.go:462] "Adding debug handlers to kubelet server" Jun 25 18:41:03.575103 kubelet[2154]: I0625 18:41:03.574390 2154 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 18:41:03.575103 kubelet[2154]: I0625 18:41:03.574449 2154 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 18:41:03.575103 kubelet[2154]: E0625 18:41:03.574962 2154 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:41:03.575103 kubelet[2154]: I0625 18:41:03.574982 2154 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 18:41:03.575103 kubelet[2154]: I0625 18:41:03.575054 2154 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 18:41:03.576021 kubelet[2154]: W0625 18:41:03.575966 2154 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Jun 25 18:41:03.576021 kubelet[2154]: E0625 18:41:03.576018 2154 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Jun 25 18:41:03.578992 kubelet[2154]: E0625 18:41:03.578870 2154 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17dc5367ed0b943f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.June, 25, 18, 41, 3, 572382783, time.Local), LastTimestamp:time.Date(2024, time.June, 25, 18, 41, 3, 572382783, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.103:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.103:6443: connect: connection refused'(may retry after sleeping) Jun 25 18:41:03.584459 kubelet[2154]: E0625 18:41:03.584419 2154 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.103:6443: connect: connection refused" interval="200ms" Jun 25 18:41:03.593623 kubelet[2154]: I0625 18:41:03.593579 2154 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 18:41:03.595192 kubelet[2154]: I0625 18:41:03.595143 2154 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 25 18:41:03.595192 kubelet[2154]: I0625 18:41:03.595173 2154 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 18:41:03.595192 kubelet[2154]: I0625 18:41:03.595196 2154 kubelet.go:2303] "Starting kubelet main sync loop" Jun 25 18:41:03.595397 kubelet[2154]: E0625 18:41:03.595259 2154 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 18:41:03.602206 kubelet[2154]: W0625 18:41:03.602156 2154 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Jun 25 18:41:03.602206 kubelet[2154]: E0625 18:41:03.602201 2154 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Jun 25 18:41:03.613171 kubelet[2154]: I0625 18:41:03.613140 2154 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 18:41:03.613171 kubelet[2154]: I0625 18:41:03.613164 2154 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 18:41:03.613171 kubelet[2154]: I0625 18:41:03.613182 2154 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:41:03.676696 kubelet[2154]: I0625 18:41:03.676655 2154 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 18:41:03.677009 kubelet[2154]: E0625 18:41:03.676991 2154 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.103:6443/api/v1/nodes\": dial tcp 10.0.0.103:6443: connect: connection refused" node="localhost" Jun 25 18:41:03.696278 kubelet[2154]: E0625 18:41:03.696210 2154 
kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 25 18:41:03.785259 kubelet[2154]: E0625 18:41:03.785135 2154 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.103:6443: connect: connection refused" interval="400ms" Jun 25 18:41:03.878860 kubelet[2154]: I0625 18:41:03.878812 2154 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 18:41:03.879298 kubelet[2154]: E0625 18:41:03.879143 2154 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.103:6443/api/v1/nodes\": dial tcp 10.0.0.103:6443: connect: connection refused" node="localhost" Jun 25 18:41:03.897413 kubelet[2154]: E0625 18:41:03.897330 2154 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 25 18:41:04.186127 kubelet[2154]: E0625 18:41:04.185971 2154 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.103:6443: connect: connection refused" interval="800ms" Jun 25 18:41:04.281033 kubelet[2154]: I0625 18:41:04.280989 2154 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 18:41:04.281500 kubelet[2154]: E0625 18:41:04.281460 2154 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.103:6443/api/v1/nodes\": dial tcp 10.0.0.103:6443: connect: connection refused" node="localhost" Jun 25 18:41:04.297662 kubelet[2154]: E0625 18:41:04.297577 2154 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 25 18:41:04.357925 kubelet[2154]: I0625 18:41:04.357869 2154 policy_none.go:49] "None 
policy: Start" Jun 25 18:41:04.358880 kubelet[2154]: I0625 18:41:04.358858 2154 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 25 18:41:04.358965 kubelet[2154]: I0625 18:41:04.358896 2154 state_mem.go:35] "Initializing new in-memory state store" Jun 25 18:41:04.368025 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 25 18:41:04.382125 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 25 18:41:04.386201 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jun 25 18:41:04.393850 kubelet[2154]: I0625 18:41:04.393798 2154 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 18:41:04.394235 kubelet[2154]: I0625 18:41:04.394216 2154 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 18:41:04.394889 kubelet[2154]: E0625 18:41:04.394868 2154 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jun 25 18:41:04.478231 kubelet[2154]: W0625 18:41:04.478063 2154 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.103:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Jun 25 18:41:04.478231 kubelet[2154]: E0625 18:41:04.478131 2154 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.103:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Jun 25 18:41:04.610981 kubelet[2154]: W0625 18:41:04.610888 2154 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get 
"https://10.0.0.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Jun 25 18:41:04.610981 kubelet[2154]: E0625 18:41:04.610968 2154 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Jun 25 18:41:04.730259 kubelet[2154]: W0625 18:41:04.730082 2154 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Jun 25 18:41:04.730259 kubelet[2154]: E0625 18:41:04.730155 2154 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Jun 25 18:41:04.816228 kubelet[2154]: W0625 18:41:04.816155 2154 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Jun 25 18:41:04.816228 kubelet[2154]: E0625 18:41:04.816215 2154 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Jun 25 18:41:04.987468 kubelet[2154]: E0625 18:41:04.987327 2154 controller.go:146] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.103:6443: connect: connection refused" interval="1.6s" Jun 25 18:41:05.083283 kubelet[2154]: I0625 18:41:05.083246 2154 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 18:41:05.083653 kubelet[2154]: E0625 18:41:05.083634 2154 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.103:6443/api/v1/nodes\": dial tcp 10.0.0.103:6443: connect: connection refused" node="localhost" Jun 25 18:41:05.097940 kubelet[2154]: I0625 18:41:05.097881 2154 topology_manager.go:215] "Topology Admit Handler" podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jun 25 18:41:05.099358 kubelet[2154]: I0625 18:41:05.099334 2154 topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost" Jun 25 18:41:05.100268 kubelet[2154]: I0625 18:41:05.100249 2154 topology_manager.go:215] "Topology Admit Handler" podUID="526657c25c41ab1049673055ca11b72c" podNamespace="kube-system" podName="kube-apiserver-localhost" Jun 25 18:41:05.107957 systemd[1]: Created slice kubepods-burstable-pod9c3207d669e00aa24ded52617c0d65d0.slice - libcontainer container kubepods-burstable-pod9c3207d669e00aa24ded52617c0d65d0.slice. Jun 25 18:41:05.126836 systemd[1]: Created slice kubepods-burstable-podd27baad490d2d4f748c86b318d7d74ef.slice - libcontainer container kubepods-burstable-podd27baad490d2d4f748c86b318d7d74ef.slice. Jun 25 18:41:05.137979 systemd[1]: Created slice kubepods-burstable-pod526657c25c41ab1049673055ca11b72c.slice - libcontainer container kubepods-burstable-pod526657c25c41ab1049673055ca11b72c.slice. 
Jun 25 18:41:05.183608 kubelet[2154]: I0625 18:41:05.183540 2154 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:41:05.183608 kubelet[2154]: I0625 18:41:05.183600 2154 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:41:05.183608 kubelet[2154]: I0625 18:41:05.183630 2154 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost" Jun 25 18:41:05.183850 kubelet[2154]: I0625 18:41:05.183651 2154 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/526657c25c41ab1049673055ca11b72c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"526657c25c41ab1049673055ca11b72c\") " pod="kube-system/kube-apiserver-localhost" Jun 25 18:41:05.183850 kubelet[2154]: I0625 18:41:05.183670 2154 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/526657c25c41ab1049673055ca11b72c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"526657c25c41ab1049673055ca11b72c\") " pod="kube-system/kube-apiserver-localhost" Jun 25 18:41:05.183850 
kubelet[2154]: I0625 18:41:05.183689 2154 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/526657c25c41ab1049673055ca11b72c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"526657c25c41ab1049673055ca11b72c\") " pod="kube-system/kube-apiserver-localhost" Jun 25 18:41:05.183850 kubelet[2154]: I0625 18:41:05.183707 2154 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:41:05.183850 kubelet[2154]: I0625 18:41:05.183784 2154 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:41:05.184003 kubelet[2154]: I0625 18:41:05.183828 2154 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:41:05.425465 kubelet[2154]: E0625 18:41:05.425284 2154 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:41:05.426207 containerd[1443]: time="2024-06-25T18:41:05.426147653Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,}" Jun 25 18:41:05.436470 kubelet[2154]: E0625 18:41:05.436417 2154 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:41:05.437035 containerd[1443]: time="2024-06-25T18:41:05.436978781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,}" Jun 25 18:41:05.441174 kubelet[2154]: E0625 18:41:05.441142 2154 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:41:05.441520 containerd[1443]: time="2024-06-25T18:41:05.441472148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:526657c25c41ab1049673055ca11b72c,Namespace:kube-system,Attempt:0,}" Jun 25 18:41:05.608935 kubelet[2154]: E0625 18:41:05.608893 2154 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.103:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.103:6443: connect: connection refused Jun 25 18:41:06.008962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1736998737.mount: Deactivated successfully. 
Jun 25 18:41:06.018476 containerd[1443]: time="2024-06-25T18:41:06.018369693Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:41:06.019407 containerd[1443]: time="2024-06-25T18:41:06.019348608Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jun 25 18:41:06.020468 containerd[1443]: time="2024-06-25T18:41:06.020416470Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:41:06.021525 containerd[1443]: time="2024-06-25T18:41:06.021458128Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:41:06.022323 containerd[1443]: time="2024-06-25T18:41:06.022274518Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 18:41:06.023451 containerd[1443]: time="2024-06-25T18:41:06.023425843Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:41:06.024323 containerd[1443]: time="2024-06-25T18:41:06.024270561Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 18:41:06.028146 containerd[1443]: time="2024-06-25T18:41:06.028076577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:41:06.030407 
containerd[1443]: time="2024-06-25T18:41:06.030327702Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 593.193798ms" Jun 25 18:41:06.031081 containerd[1443]: time="2024-06-25T18:41:06.031023176Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 589.444998ms" Jun 25 18:41:06.031772 containerd[1443]: time="2024-06-25T18:41:06.031745778Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 605.449468ms" Jun 25 18:41:06.228386 containerd[1443]: time="2024-06-25T18:41:06.228211074Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:41:06.228386 containerd[1443]: time="2024-06-25T18:41:06.228311318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:41:06.229188 containerd[1443]: time="2024-06-25T18:41:06.228338695Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:41:06.229188 containerd[1443]: time="2024-06-25T18:41:06.228356378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:41:06.229390 containerd[1443]: time="2024-06-25T18:41:06.228206101Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:41:06.229453 containerd[1443]: time="2024-06-25T18:41:06.229412311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:41:06.229488 containerd[1443]: time="2024-06-25T18:41:06.229442624Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:41:06.229488 containerd[1443]: time="2024-06-25T18:41:06.229456789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:41:06.234395 containerd[1443]: time="2024-06-25T18:41:06.233866536Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:41:06.234395 containerd[1443]: time="2024-06-25T18:41:06.233956745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:41:06.234395 containerd[1443]: time="2024-06-25T18:41:06.233981857Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:41:06.234395 containerd[1443]: time="2024-06-25T18:41:06.234038344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:41:06.294895 systemd[1]: Started cri-containerd-346b273f78d559efa060e343b75a16263b2320955faeb5f4fe06c93ebef611e8.scope - libcontainer container 346b273f78d559efa060e343b75a16263b2320955faeb5f4fe06c93ebef611e8. 
Jun 25 18:41:06.302321 systemd[1]: Started cri-containerd-4be7cbc918ff66947f6b11d8fa1cf24ccb5f21bdf6a2ed0e783934595ac21955.scope - libcontainer container 4be7cbc918ff66947f6b11d8fa1cf24ccb5f21bdf6a2ed0e783934595ac21955.
Jun 25 18:41:06.306461 systemd[1]: Started cri-containerd-548a639387ca5400b9e7eaaf3d46af08341fbd60b19fc4c93cb9f5e87d7d45f2.scope - libcontainer container 548a639387ca5400b9e7eaaf3d46af08341fbd60b19fc4c93cb9f5e87d7d45f2.
Jun 25 18:41:06.362626 containerd[1443]: time="2024-06-25T18:41:06.362469321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"346b273f78d559efa060e343b75a16263b2320955faeb5f4fe06c93ebef611e8\""
Jun 25 18:41:06.364599 kubelet[2154]: E0625 18:41:06.364457 2154 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:06.367458 containerd[1443]: time="2024-06-25T18:41:06.367338957Z" level=info msg="CreateContainer within sandbox \"346b273f78d559efa060e343b75a16263b2320955faeb5f4fe06c93ebef611e8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jun 25 18:41:06.375013 containerd[1443]: time="2024-06-25T18:41:06.374954354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"4be7cbc918ff66947f6b11d8fa1cf24ccb5f21bdf6a2ed0e783934595ac21955\""
Jun 25 18:41:06.375252 containerd[1443]: time="2024-06-25T18:41:06.375146904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:526657c25c41ab1049673055ca11b72c,Namespace:kube-system,Attempt:0,} returns sandbox id \"548a639387ca5400b9e7eaaf3d46af08341fbd60b19fc4c93cb9f5e87d7d45f2\""
Jun 25 18:41:06.375992 kubelet[2154]: E0625 18:41:06.375969 2154 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:06.376050 kubelet[2154]: E0625 18:41:06.376038 2154 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:06.378784 containerd[1443]: time="2024-06-25T18:41:06.378730457Z" level=info msg="CreateContainer within sandbox \"4be7cbc918ff66947f6b11d8fa1cf24ccb5f21bdf6a2ed0e783934595ac21955\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jun 25 18:41:06.378987 containerd[1443]: time="2024-06-25T18:41:06.378728842Z" level=info msg="CreateContainer within sandbox \"548a639387ca5400b9e7eaaf3d46af08341fbd60b19fc4c93cb9f5e87d7d45f2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jun 25 18:41:06.393591 containerd[1443]: time="2024-06-25T18:41:06.393497201Z" level=info msg="CreateContainer within sandbox \"346b273f78d559efa060e343b75a16263b2320955faeb5f4fe06c93ebef611e8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"85c9a92bf4724577ad078a762922063406683b1e5693f02c9752131c2e70da6d\""
Jun 25 18:41:06.394457 containerd[1443]: time="2024-06-25T18:41:06.394413914Z" level=info msg="StartContainer for \"85c9a92bf4724577ad078a762922063406683b1e5693f02c9752131c2e70da6d\""
Jun 25 18:41:06.403897 containerd[1443]: time="2024-06-25T18:41:06.403822091Z" level=info msg="CreateContainer within sandbox \"4be7cbc918ff66947f6b11d8fa1cf24ccb5f21bdf6a2ed0e783934595ac21955\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a1acd5873538aff28718536e1ecda30081a7dbb9c697ea5c217adf00cd38574f\""
Jun 25 18:41:06.404551 containerd[1443]: time="2024-06-25T18:41:06.404486841Z" level=info msg="StartContainer for \"a1acd5873538aff28718536e1ecda30081a7dbb9c697ea5c217adf00cd38574f\""
Jun 25 18:41:06.410160 containerd[1443]: time="2024-06-25T18:41:06.410027743Z" level=info msg="CreateContainer within sandbox \"548a639387ca5400b9e7eaaf3d46af08341fbd60b19fc4c93cb9f5e87d7d45f2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8bb8cd209d77a4c4b58f18f9b354829f30f11bda87c07448d8688ac54eb94e26\""
Jun 25 18:41:06.410762 containerd[1443]: time="2024-06-25T18:41:06.410704031Z" level=info msg="StartContainer for \"8bb8cd209d77a4c4b58f18f9b354829f30f11bda87c07448d8688ac54eb94e26\""
Jun 25 18:41:06.429765 systemd[1]: Started cri-containerd-85c9a92bf4724577ad078a762922063406683b1e5693f02c9752131c2e70da6d.scope - libcontainer container 85c9a92bf4724577ad078a762922063406683b1e5693f02c9752131c2e70da6d.
Jun 25 18:41:06.445806 systemd[1]: Started cri-containerd-a1acd5873538aff28718536e1ecda30081a7dbb9c697ea5c217adf00cd38574f.scope - libcontainer container a1acd5873538aff28718536e1ecda30081a7dbb9c697ea5c217adf00cd38574f.
Jun 25 18:41:06.448164 kubelet[2154]: W0625 18:41:06.448093 2154 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused
Jun 25 18:41:06.448164 kubelet[2154]: E0625 18:41:06.448135 2154 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused
Jun 25 18:41:06.450350 systemd[1]: Started cri-containerd-8bb8cd209d77a4c4b58f18f9b354829f30f11bda87c07448d8688ac54eb94e26.scope - libcontainer container 8bb8cd209d77a4c4b58f18f9b354829f30f11bda87c07448d8688ac54eb94e26.
Jun 25 18:41:06.492420 containerd[1443]: time="2024-06-25T18:41:06.492259710Z" level=info msg="StartContainer for \"85c9a92bf4724577ad078a762922063406683b1e5693f02c9752131c2e70da6d\" returns successfully"
Jun 25 18:41:06.510535 kubelet[2154]: W0625 18:41:06.509791 2154 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused
Jun 25 18:41:06.510535 kubelet[2154]: E0625 18:41:06.509897 2154 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused
Jun 25 18:41:06.522638 containerd[1443]: time="2024-06-25T18:41:06.522458269Z" level=info msg="StartContainer for \"8bb8cd209d77a4c4b58f18f9b354829f30f11bda87c07448d8688ac54eb94e26\" returns successfully"
Jun 25 18:41:06.529841 containerd[1443]: time="2024-06-25T18:41:06.529762037Z" level=info msg="StartContainer for \"a1acd5873538aff28718536e1ecda30081a7dbb9c697ea5c217adf00cd38574f\" returns successfully"
Jun 25 18:41:06.588486 kubelet[2154]: E0625 18:41:06.588306 2154 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.103:6443: connect: connection refused" interval="3.2s"
Jun 25 18:41:06.613588 kubelet[2154]: E0625 18:41:06.613373 2154 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:06.618550 kubelet[2154]: E0625 18:41:06.618487 2154 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:06.619289 kubelet[2154]: E0625 18:41:06.619187 2154 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:06.686382 kubelet[2154]: I0625 18:41:06.685735 2154 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Jun 25 18:41:07.621440 kubelet[2154]: E0625 18:41:07.621389 2154 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:07.622141 kubelet[2154]: E0625 18:41:07.621862 2154 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:08.355491 kubelet[2154]: I0625 18:41:08.355414 2154 kubelet_node_status.go:73] "Successfully registered node" node="localhost"
Jun 25 18:41:08.388592 kubelet[2154]: E0625 18:41:08.388344 2154 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Jun 25 18:41:08.489351 kubelet[2154]: E0625 18:41:08.489298 2154 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Jun 25 18:41:08.589682 kubelet[2154]: E0625 18:41:08.589607 2154 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Jun 25 18:41:08.690348 kubelet[2154]: E0625 18:41:08.690182 2154 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Jun 25 18:41:08.790984 kubelet[2154]: E0625 18:41:08.790923 2154 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Jun 25 18:41:08.891665 kubelet[2154]: E0625 18:41:08.891611 2154 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Jun 25 18:41:08.992399 kubelet[2154]: E0625 18:41:08.992261 2154 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Jun 25 18:41:09.093376 kubelet[2154]: E0625 18:41:09.093319 2154 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Jun 25 18:41:09.193942 kubelet[2154]: E0625 18:41:09.193883 2154 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Jun 25 18:41:09.294546 kubelet[2154]: E0625 18:41:09.294401 2154 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Jun 25 18:41:09.394949 kubelet[2154]: E0625 18:41:09.394894 2154 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Jun 25 18:41:09.439978 kubelet[2154]: E0625 18:41:09.439948 2154 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:09.496062 kubelet[2154]: E0625 18:41:09.496026 2154 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Jun 25 18:41:09.597130 kubelet[2154]: E0625 18:41:09.596983 2154 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Jun 25 18:41:09.698043 kubelet[2154]: E0625 18:41:09.697995 2154 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Jun 25 18:41:09.798651 kubelet[2154]: E0625 18:41:09.798608 2154 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Jun 25 18:41:09.899113 kubelet[2154]: E0625 18:41:09.898987 2154 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Jun 25 18:41:10.562659 kubelet[2154]: I0625 18:41:10.562592 2154 apiserver.go:52] "Watching apiserver"
Jun 25 18:41:10.575936 kubelet[2154]: I0625 18:41:10.575907 2154 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jun 25 18:41:11.335051 systemd[1]: Reloading requested from client PID 2430 ('systemctl') (unit session-9.scope)...
Jun 25 18:41:11.335064 systemd[1]: Reloading...
Jun 25 18:41:11.416557 zram_generator::config[2468]: No configuration found.
Jun 25 18:41:11.556285 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 25 18:41:11.684446 systemd[1]: Reloading finished in 349 ms.
Jun 25 18:41:11.742608 kubelet[2154]: I0625 18:41:11.742521 2154 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jun 25 18:41:11.742700 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 25 18:41:11.760032 systemd[1]: kubelet.service: Deactivated successfully.
Jun 25 18:41:11.760424 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 25 18:41:11.760487 systemd[1]: kubelet.service: Consumed 1.292s CPU time, 114.5M memory peak, 0B memory swap peak.
Jun 25 18:41:11.772013 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 25 18:41:11.942342 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 25 18:41:11.948582 (kubelet)[2512]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jun 25 18:41:12.000689 kubelet[2512]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 25 18:41:12.000689 kubelet[2512]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jun 25 18:41:12.000689 kubelet[2512]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 25 18:41:12.001055 kubelet[2512]: I0625 18:41:12.000744 2512 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jun 25 18:41:12.007398 kubelet[2512]: I0625 18:41:12.007355 2512 server.go:467] "Kubelet version" kubeletVersion="v1.28.7"
Jun 25 18:41:12.007502 kubelet[2512]: I0625 18:41:12.007404 2512 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jun 25 18:41:12.007711 kubelet[2512]: I0625 18:41:12.007681 2512 server.go:895] "Client rotation is on, will bootstrap in background"
Jun 25 18:41:12.009456 kubelet[2512]: I0625 18:41:12.009421 2512 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jun 25 18:41:12.011214 kubelet[2512]: I0625 18:41:12.011186 2512 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jun 25 18:41:12.020468 kubelet[2512]: I0625 18:41:12.020397 2512 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jun 25 18:41:12.020776 kubelet[2512]: I0625 18:41:12.020751 2512 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jun 25 18:41:12.020968 kubelet[2512]: I0625 18:41:12.020940 2512 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jun 25 18:41:12.020968 kubelet[2512]: I0625 18:41:12.020965 2512 topology_manager.go:138] "Creating topology manager with none policy"
Jun 25 18:41:12.021072 kubelet[2512]: I0625 18:41:12.020974 2512 container_manager_linux.go:301] "Creating device plugin manager"
Jun 25 18:41:12.021072 kubelet[2512]: I0625 18:41:12.021013 2512 state_mem.go:36] "Initialized new in-memory state store"
Jun 25 18:41:12.021126 kubelet[2512]: I0625 18:41:12.021107 2512 kubelet.go:393] "Attempting to sync node with API server"
Jun 25 18:41:12.021126 kubelet[2512]: I0625 18:41:12.021120 2512 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Jun 25 18:41:12.021170 kubelet[2512]: I0625 18:41:12.021146 2512 kubelet.go:309] "Adding apiserver pod source"
Jun 25 18:41:12.021170 kubelet[2512]: I0625 18:41:12.021164 2512 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jun 25 18:41:12.021170 sudo[2528]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jun 25 18:41:12.021472 sudo[2528]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Jun 25 18:41:12.022574 kubelet[2512]: I0625 18:41:12.022107 2512 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1"
Jun 25 18:41:12.022685 kubelet[2512]: I0625 18:41:12.022660 2512 server.go:1232] "Started kubelet"
Jun 25 18:41:12.023249 kubelet[2512]: I0625 18:41:12.023007 2512 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jun 25 18:41:12.024845 kubelet[2512]: I0625 18:41:12.023981 2512 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jun 25 18:41:12.024845 kubelet[2512]: I0625 18:41:12.024086 2512 server.go:462] "Adding debug handlers to kubelet server"
Jun 25 18:41:12.025066 kubelet[2512]: I0625 18:41:12.025034 2512 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Jun 25 18:41:12.025273 kubelet[2512]: I0625 18:41:12.025245 2512 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jun 25 18:41:12.030386 kubelet[2512]: I0625 18:41:12.030356 2512 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jun 25 18:41:12.031044 kubelet[2512]: I0625 18:41:12.030491 2512 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jun 25 18:41:12.031044 kubelet[2512]: I0625 18:41:12.030648 2512 reconciler_new.go:29] "Reconciler: start to sync state"
Jun 25 18:41:12.031253 kubelet[2512]: E0625 18:41:12.031232 2512 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Jun 25 18:41:12.031304 kubelet[2512]: E0625 18:41:12.031263 2512 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jun 25 18:41:12.046190 kubelet[2512]: I0625 18:41:12.046017 2512 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jun 25 18:41:12.050139 kubelet[2512]: I0625 18:41:12.050103 2512 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jun 25 18:41:12.050139 kubelet[2512]: I0625 18:41:12.050139 2512 status_manager.go:217] "Starting to sync pod status with apiserver"
Jun 25 18:41:12.050290 kubelet[2512]: I0625 18:41:12.050177 2512 kubelet.go:2303] "Starting kubelet main sync loop"
Jun 25 18:41:12.050290 kubelet[2512]: E0625 18:41:12.050278 2512 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jun 25 18:41:12.103015 kubelet[2512]: I0625 18:41:12.102932 2512 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jun 25 18:41:12.103015 kubelet[2512]: I0625 18:41:12.102954 2512 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jun 25 18:41:12.103015 kubelet[2512]: I0625 18:41:12.102972 2512 state_mem.go:36] "Initialized new in-memory state store"
Jun 25 18:41:12.103201 kubelet[2512]: I0625 18:41:12.103119 2512 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jun 25 18:41:12.103201 kubelet[2512]: I0625 18:41:12.103138 2512 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jun 25 18:41:12.103201 kubelet[2512]: I0625 18:41:12.103145 2512 policy_none.go:49] "None policy: Start"
Jun 25 18:41:12.103878 kubelet[2512]: I0625 18:41:12.103861 2512 memory_manager.go:169] "Starting memorymanager" policy="None"
Jun 25 18:41:12.103953 kubelet[2512]: I0625 18:41:12.103882 2512 state_mem.go:35] "Initializing new in-memory state store"
Jun 25 18:41:12.104031 kubelet[2512]: I0625 18:41:12.104016 2512 state_mem.go:75] "Updated machine memory state"
Jun 25 18:41:12.108644 kubelet[2512]: I0625 18:41:12.108618 2512 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jun 25 18:41:12.109165 kubelet[2512]: I0625 18:41:12.108902 2512 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jun 25 18:41:12.134905 kubelet[2512]: I0625 18:41:12.134861 2512 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Jun 25 18:41:12.151306 kubelet[2512]: I0625 18:41:12.150751 2512 topology_manager.go:215] "Topology Admit Handler" podUID="526657c25c41ab1049673055ca11b72c" podNamespace="kube-system" podName="kube-apiserver-localhost"
Jun 25 18:41:12.151306 kubelet[2512]: I0625 18:41:12.150922 2512 kubelet_node_status.go:108] "Node was previously registered" node="localhost"
Jun 25 18:41:12.151306 kubelet[2512]: I0625 18:41:12.150942 2512 topology_manager.go:215] "Topology Admit Handler" podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Jun 25 18:41:12.151306 kubelet[2512]: I0625 18:41:12.150996 2512 topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost"
Jun 25 18:41:12.151306 kubelet[2512]: I0625 18:41:12.151014 2512 kubelet_node_status.go:73] "Successfully registered node" node="localhost"
Jun 25 18:41:12.232130 kubelet[2512]: I0625 18:41:12.231979 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost"
Jun 25 18:41:12.232130 kubelet[2512]: I0625 18:41:12.232038 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/526657c25c41ab1049673055ca11b72c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"526657c25c41ab1049673055ca11b72c\") " pod="kube-system/kube-apiserver-localhost"
Jun 25 18:41:12.232130 kubelet[2512]: I0625 18:41:12.232066 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost"
Jun 25 18:41:12.232130 kubelet[2512]: I0625 18:41:12.232088 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost"
Jun 25 18:41:12.232130 kubelet[2512]: I0625 18:41:12.232118 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost"
Jun 25 18:41:12.232666 kubelet[2512]: I0625 18:41:12.232144 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/526657c25c41ab1049673055ca11b72c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"526657c25c41ab1049673055ca11b72c\") " pod="kube-system/kube-apiserver-localhost"
Jun 25 18:41:12.232666 kubelet[2512]: I0625 18:41:12.232186 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/526657c25c41ab1049673055ca11b72c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"526657c25c41ab1049673055ca11b72c\") " pod="kube-system/kube-apiserver-localhost"
Jun 25 18:41:12.232666 kubelet[2512]: I0625 18:41:12.232238 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost"
Jun 25 18:41:12.232666 kubelet[2512]: I0625 18:41:12.232275 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost"
Jun 25 18:41:12.464912 kubelet[2512]: E0625 18:41:12.464853 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:12.465057 kubelet[2512]: E0625 18:41:12.464951 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:12.465363 kubelet[2512]: E0625 18:41:12.465340 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:12.641129 sudo[2528]: pam_unix(sudo:session): session closed for user root
Jun 25 18:41:13.024623 kubelet[2512]: I0625 18:41:13.021878 2512 apiserver.go:52] "Watching apiserver"
Jun 25 18:41:13.030982 kubelet[2512]: I0625 18:41:13.030927 2512 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jun 25 18:41:13.069648 kubelet[2512]: E0625 18:41:13.068961 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:13.069648 kubelet[2512]: E0625 18:41:13.069585 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:13.080303 kubelet[2512]: E0625 18:41:13.078536 2512 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jun 25 18:41:13.080303 kubelet[2512]: E0625 18:41:13.080292 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:13.087856 kubelet[2512]: I0625 18:41:13.087778 2512 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.087730689 podCreationTimestamp="2024-06-25 18:41:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:41:13.087539357 +0000 UTC m=+1.132441632" watchObservedRunningTime="2024-06-25 18:41:13.087730689 +0000 UTC m=+1.132632964"
Jun 25 18:41:13.102050 kubelet[2512]: I0625 18:41:13.101773 2512 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.101721422 podCreationTimestamp="2024-06-25 18:41:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:41:13.094960305 +0000 UTC m=+1.139862580" watchObservedRunningTime="2024-06-25 18:41:13.101721422 +0000 UTC m=+1.146623707"
Jun 25 18:41:13.111172 kubelet[2512]: I0625 18:41:13.111127 2512 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.111078451 podCreationTimestamp="2024-06-25 18:41:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:41:13.101898923 +0000 UTC m=+1.146801208" watchObservedRunningTime="2024-06-25 18:41:13.111078451 +0000 UTC m=+1.155980726"
Jun 25 18:41:14.070649 kubelet[2512]: E0625 18:41:14.070617 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:14.265849 sudo[1637]: pam_unix(sudo:session): session closed for user root
Jun 25 18:41:14.270832 sshd[1634]: pam_unix(sshd:session): session closed for user core
Jun 25 18:41:14.280205 systemd[1]: sshd@8-10.0.0.103:22-10.0.0.1:39384.service: Deactivated successfully.
Jun 25 18:41:14.282696 systemd[1]: session-9.scope: Deactivated successfully.
Jun 25 18:41:14.283050 systemd[1]: session-9.scope: Consumed 5.532s CPU time, 140.7M memory peak, 0B memory swap peak.
Jun 25 18:41:14.283612 systemd-logind[1428]: Session 9 logged out. Waiting for processes to exit.
Jun 25 18:41:14.284496 systemd-logind[1428]: Removed session 9.
Jun 25 18:41:16.276769 kubelet[2512]: E0625 18:41:16.276727 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:16.563385 kubelet[2512]: E0625 18:41:16.563197 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:16.746624 update_engine[1431]: I0625 18:41:16.746551 1431 update_attempter.cc:509] Updating boot flags...
Jun 25 18:41:17.075064 kubelet[2512]: E0625 18:41:17.075027 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:17.075220 kubelet[2512]: E0625 18:41:17.075135 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:17.520693 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2595)
Jun 25 18:41:17.579802 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2595)
Jun 25 18:41:17.605555 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2595)
Jun 25 18:41:17.782850 kubelet[2512]: E0625 18:41:17.782797 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:18.076385 kubelet[2512]: E0625 18:41:18.076275 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:25.435601 kubelet[2512]: I0625 18:41:25.435547 2512 topology_manager.go:215] "Topology Admit Handler" podUID="4455856f-b150-42dc-bbd6-cd10e9004b61" podNamespace="kube-system" podName="kube-proxy-nv7lr"
Jun 25 18:41:25.441827 kubelet[2512]: I0625 18:41:25.441782 2512 topology_manager.go:215] "Topology Admit Handler" podUID="4e488bb2-53dd-47c1-b0d1-f8f946f7269e" podNamespace="kube-system" podName="cilium-6vxwj"
Jun 25 18:41:25.453624 systemd[1]: Created slice kubepods-besteffort-pod4455856f_b150_42dc_bbd6_cd10e9004b61.slice - libcontainer container kubepods-besteffort-pod4455856f_b150_42dc_bbd6_cd10e9004b61.slice.
Jun 25 18:41:25.471137 systemd[1]: Created slice kubepods-burstable-pod4e488bb2_53dd_47c1_b0d1_f8f946f7269e.slice - libcontainer container kubepods-burstable-pod4e488bb2_53dd_47c1_b0d1_f8f946f7269e.slice.
Jun 25 18:41:25.622139 kubelet[2512]: I0625 18:41:25.622083 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-etc-cni-netd\") pod \"cilium-6vxwj\" (UID: \"4e488bb2-53dd-47c1-b0d1-f8f946f7269e\") " pod="kube-system/cilium-6vxwj"
Jun 25 18:41:25.622139 kubelet[2512]: I0625 18:41:25.622128 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-cilium-config-path\") pod \"cilium-6vxwj\" (UID: \"4e488bb2-53dd-47c1-b0d1-f8f946f7269e\") " pod="kube-system/cilium-6vxwj"
Jun 25 18:41:25.622139 kubelet[2512]: I0625 18:41:25.622149 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-host-proc-sys-net\") pod \"cilium-6vxwj\" (UID: \"4e488bb2-53dd-47c1-b0d1-f8f946f7269e\") " pod="kube-system/cilium-6vxwj"
Jun 25 18:41:25.622368 kubelet[2512]: I0625 18:41:25.622167 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-hubble-tls\") pod \"cilium-6vxwj\" (UID: \"4e488bb2-53dd-47c1-b0d1-f8f946f7269e\") " pod="kube-system/cilium-6vxwj"
Jun 25 18:41:25.622368 kubelet[2512]: I0625 18:41:25.622200 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rttfn\" (UniqueName: \"kubernetes.io/projected/4455856f-b150-42dc-bbd6-cd10e9004b61-kube-api-access-rttfn\") pod \"kube-proxy-nv7lr\" (UID: \"4455856f-b150-42dc-bbd6-cd10e9004b61\") " pod="kube-system/kube-proxy-nv7lr"
Jun 25 18:41:25.622368 kubelet[2512]: I0625 18:41:25.622220 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-hostproc\") pod \"cilium-6vxwj\" (UID: \"4e488bb2-53dd-47c1-b0d1-f8f946f7269e\") " pod="kube-system/cilium-6vxwj"
Jun 25 18:41:25.623732 kubelet[2512]: I0625 18:41:25.623699 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-lib-modules\") pod \"cilium-6vxwj\" (UID: \"4e488bb2-53dd-47c1-b0d1-f8f946f7269e\") " pod="kube-system/cilium-6vxwj"
Jun 25 18:41:25.623796 kubelet[2512]: I0625 18:41:25.623767 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4455856f-b150-42dc-bbd6-cd10e9004b61-xtables-lock\") pod \"kube-proxy-nv7lr\" (UID: \"4455856f-b150-42dc-bbd6-cd10e9004b61\") " pod="kube-system/kube-proxy-nv7lr"
Jun 25 18:41:25.623796 kubelet[2512]: I0625 18:41:25.623794 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvx2s\" (UniqueName: \"kubernetes.io/projected/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-kube-api-access-jvx2s\") pod \"cilium-6vxwj\" (UID: \"4e488bb2-53dd-47c1-b0d1-f8f946f7269e\") " pod="kube-system/cilium-6vxwj"
Jun 25 18:41:25.623848 kubelet[2512]: I0625 18:41:25.623819 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-xtables-lock\") pod \"cilium-6vxwj\" (UID: \"4e488bb2-53dd-47c1-b0d1-f8f946f7269e\") " pod="kube-system/cilium-6vxwj"
Jun 25 18:41:25.623870 kubelet[2512]: I0625 18:41:25.623850 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4455856f-b150-42dc-bbd6-cd10e9004b61-kube-proxy\") pod \"kube-proxy-nv7lr\" (UID: \"4455856f-b150-42dc-bbd6-cd10e9004b61\") " pod="kube-system/kube-proxy-nv7lr"
Jun 25 18:41:25.623941 kubelet[2512]: I0625 18:41:25.623920 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-cni-path\") pod \"cilium-6vxwj\" (UID: \"4e488bb2-53dd-47c1-b0d1-f8f946f7269e\") " pod="kube-system/cilium-6vxwj"
Jun 25 18:41:25.623972 kubelet[2512]: I0625 18:41:25.623963 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-host-proc-sys-kernel\") pod \"cilium-6vxwj\" (UID: \"4e488bb2-53dd-47c1-b0d1-f8f946f7269e\") " pod="kube-system/cilium-6vxwj"
Jun 25 18:41:25.624003 kubelet[2512]: I0625 18:41:25.623984 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-cilium-run\") pod \"cilium-6vxwj\" (UID: \"4e488bb2-53dd-47c1-b0d1-f8f946f7269e\") " pod="kube-system/cilium-6vxwj"
Jun 25 18:41:25.624003 kubelet[2512]: I0625 18:41:25.624002 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-cilium-cgroup\") pod \"cilium-6vxwj\" (UID: \"4e488bb2-53dd-47c1-b0d1-f8f946f7269e\") " pod="kube-system/cilium-6vxwj"
Jun 25 18:41:25.624065 kubelet[2512]: I0625 18:41:25.624022 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4455856f-b150-42dc-bbd6-cd10e9004b61-lib-modules\") pod \"kube-proxy-nv7lr\" (UID: \"4455856f-b150-42dc-bbd6-cd10e9004b61\") " pod="kube-system/kube-proxy-nv7lr"
Jun 25 18:41:25.624065 kubelet[2512]: I0625 18:41:25.624040 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-bpf-maps\") pod \"cilium-6vxwj\" (UID: \"4e488bb2-53dd-47c1-b0d1-f8f946f7269e\") " pod="kube-system/cilium-6vxwj"
Jun 25 18:41:25.624065 kubelet[2512]: I0625 18:41:25.624058 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-clustermesh-secrets\") pod \"cilium-6vxwj\" (UID: \"4e488bb2-53dd-47c1-b0d1-f8f946f7269e\") " pod="kube-system/cilium-6vxwj"
Jun 25 18:41:25.634550 kubelet[2512]: I0625 18:41:25.633699 2512 topology_manager.go:215] "Topology Admit Handler" podUID="13770004-3c48-4ed4-9d3c-2bfd60173d34" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-rvfsg"
Jun 25 18:41:25.642711 systemd[1]: Created slice kubepods-besteffort-pod13770004_3c48_4ed4_9d3c_2bfd60173d34.slice - libcontainer container kubepods-besteffort-pod13770004_3c48_4ed4_9d3c_2bfd60173d34.slice.
Jun 25 18:41:25.644030 kubelet[2512]: I0625 18:41:25.643981 2512 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jun 25 18:41:25.644377 containerd[1443]: time="2024-06-25T18:41:25.644331340Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jun 25 18:41:25.644825 kubelet[2512]: I0625 18:41:25.644489 2512 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jun 25 18:41:25.825158 kubelet[2512]: I0625 18:41:25.825020 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/13770004-3c48-4ed4-9d3c-2bfd60173d34-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-rvfsg\" (UID: \"13770004-3c48-4ed4-9d3c-2bfd60173d34\") " pod="kube-system/cilium-operator-6bc8ccdb58-rvfsg"
Jun 25 18:41:25.825158 kubelet[2512]: I0625 18:41:25.825063 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxn7z\" (UniqueName: \"kubernetes.io/projected/13770004-3c48-4ed4-9d3c-2bfd60173d34-kube-api-access-vxn7z\") pod \"cilium-operator-6bc8ccdb58-rvfsg\" (UID: \"13770004-3c48-4ed4-9d3c-2bfd60173d34\") " pod="kube-system/cilium-operator-6bc8ccdb58-rvfsg"
Jun 25 18:41:25.981837 kubelet[2512]: E0625 18:41:25.981797 2512 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jun 25 18:41:25.981837 kubelet[2512]: E0625 18:41:25.981828 2512 projected.go:198] Error preparing data for projected volume kube-api-access-vxn7z for pod kube-system/cilium-operator-6bc8ccdb58-rvfsg: configmap "kube-root-ca.crt" not found
Jun 25 18:41:25.982016 kubelet[2512]: E0625 18:41:25.981904 2512 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/13770004-3c48-4ed4-9d3c-2bfd60173d34-kube-api-access-vxn7z podName:13770004-3c48-4ed4-9d3c-2bfd60173d34 nodeName:}" failed. No retries permitted until 2024-06-25 18:41:26.481873002 +0000 UTC m=+14.526775277 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vxn7z" (UniqueName: "kubernetes.io/projected/13770004-3c48-4ed4-9d3c-2bfd60173d34-kube-api-access-vxn7z") pod "cilium-operator-6bc8ccdb58-rvfsg" (UID: "13770004-3c48-4ed4-9d3c-2bfd60173d34") : configmap "kube-root-ca.crt" not found
Jun 25 18:41:25.983548 kubelet[2512]: E0625 18:41:25.983436 2512 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jun 25 18:41:25.983548 kubelet[2512]: E0625 18:41:25.983457 2512 projected.go:198] Error preparing data for projected volume kube-api-access-jvx2s for pod kube-system/cilium-6vxwj: configmap "kube-root-ca.crt" not found
Jun 25 18:41:25.983548 kubelet[2512]: E0625 18:41:25.983526 2512 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-kube-api-access-jvx2s podName:4e488bb2-53dd-47c1-b0d1-f8f946f7269e nodeName:}" failed. No retries permitted until 2024-06-25 18:41:26.483493117 +0000 UTC m=+14.528395392 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jvx2s" (UniqueName: "kubernetes.io/projected/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-kube-api-access-jvx2s") pod "cilium-6vxwj" (UID: "4e488bb2-53dd-47c1-b0d1-f8f946f7269e") : configmap "kube-root-ca.crt" not found
Jun 25 18:41:25.984306 kubelet[2512]: E0625 18:41:25.984169 2512 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jun 25 18:41:25.984306 kubelet[2512]: E0625 18:41:25.984197 2512 projected.go:198] Error preparing data for projected volume kube-api-access-rttfn for pod kube-system/kube-proxy-nv7lr: configmap "kube-root-ca.crt" not found
Jun 25 18:41:25.984306 kubelet[2512]: E0625 18:41:25.984279 2512 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4455856f-b150-42dc-bbd6-cd10e9004b61-kube-api-access-rttfn podName:4455856f-b150-42dc-bbd6-cd10e9004b61 nodeName:}" failed. No retries permitted until 2024-06-25 18:41:26.484247343 +0000 UTC m=+14.529149698 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rttfn" (UniqueName: "kubernetes.io/projected/4455856f-b150-42dc-bbd6-cd10e9004b61-kube-api-access-rttfn") pod "kube-proxy-nv7lr" (UID: "4455856f-b150-42dc-bbd6-cd10e9004b61") : configmap "kube-root-ca.crt" not found
Jun 25 18:41:26.545651 kubelet[2512]: E0625 18:41:26.545608 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:26.546285 containerd[1443]: time="2024-06-25T18:41:26.546221821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-rvfsg,Uid:13770004-3c48-4ed4-9d3c-2bfd60173d34,Namespace:kube-system,Attempt:0,}"
Jun 25 18:41:26.576213 containerd[1443]: time="2024-06-25T18:41:26.575557960Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 25 18:41:26.576392 containerd[1443]: time="2024-06-25T18:41:26.576226340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:41:26.576392 containerd[1443]: time="2024-06-25T18:41:26.576256123Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 25 18:41:26.576392 containerd[1443]: time="2024-06-25T18:41:26.576278519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:41:26.604665 systemd[1]: Started cri-containerd-54b46d0c766d88913321a2f6c32760d52501930360fe0be00e0cb22ba9ea2350.scope - libcontainer container 54b46d0c766d88913321a2f6c32760d52501930360fe0be00e0cb22ba9ea2350.
Jun 25 18:41:26.641586 containerd[1443]: time="2024-06-25T18:41:26.641542920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-rvfsg,Uid:13770004-3c48-4ed4-9d3c-2bfd60173d34,Namespace:kube-system,Attempt:0,} returns sandbox id \"54b46d0c766d88913321a2f6c32760d52501930360fe0be00e0cb22ba9ea2350\""
Jun 25 18:41:26.642315 kubelet[2512]: E0625 18:41:26.642296 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:26.643137 containerd[1443]: time="2024-06-25T18:41:26.643096051Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jun 25 18:41:26.666960 kubelet[2512]: E0625 18:41:26.666928 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:26.667798 containerd[1443]: time="2024-06-25T18:41:26.667345328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nv7lr,Uid:4455856f-b150-42dc-bbd6-cd10e9004b61,Namespace:kube-system,Attempt:0,}"
Jun 25 18:41:26.680120 kubelet[2512]: E0625 18:41:26.680066 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:26.680587 containerd[1443]: time="2024-06-25T18:41:26.680539606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6vxwj,Uid:4e488bb2-53dd-47c1-b0d1-f8f946f7269e,Namespace:kube-system,Attempt:0,}"
Jun 25 18:41:26.693147 containerd[1443]: time="2024-06-25T18:41:26.693019368Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 25 18:41:26.693729 containerd[1443]: time="2024-06-25T18:41:26.693598332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:41:26.693729 containerd[1443]: time="2024-06-25T18:41:26.693635560Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 25 18:41:26.693729 containerd[1443]: time="2024-06-25T18:41:26.693654429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:41:26.706328 containerd[1443]: time="2024-06-25T18:41:26.706240252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 25 18:41:26.706328 containerd[1443]: time="2024-06-25T18:41:26.706294815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:41:26.706328 containerd[1443]: time="2024-06-25T18:41:26.706308684Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 25 18:41:26.706328 containerd[1443]: time="2024-06-25T18:41:26.706320870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:41:26.717702 systemd[1]: Started cri-containerd-44982b31d7e6359c1d0e3d8d3e8ac0714dfcb39d299eadef4714bc4d012cf56f.scope - libcontainer container 44982b31d7e6359c1d0e3d8d3e8ac0714dfcb39d299eadef4714bc4d012cf56f.
Jun 25 18:41:26.721268 systemd[1]: Started cri-containerd-35db8ccfc6e43062663eecc37b27540b81bc3e58bcc4ac68cacfbe5b7e4fec89.scope - libcontainer container 35db8ccfc6e43062663eecc37b27540b81bc3e58bcc4ac68cacfbe5b7e4fec89.
Jun 25 18:41:26.743906 containerd[1443]: time="2024-06-25T18:41:26.743858100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nv7lr,Uid:4455856f-b150-42dc-bbd6-cd10e9004b61,Namespace:kube-system,Attempt:0,} returns sandbox id \"44982b31d7e6359c1d0e3d8d3e8ac0714dfcb39d299eadef4714bc4d012cf56f\""
Jun 25 18:41:26.745154 kubelet[2512]: E0625 18:41:26.745132 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:26.748828 containerd[1443]: time="2024-06-25T18:41:26.748635879Z" level=info msg="CreateContainer within sandbox \"44982b31d7e6359c1d0e3d8d3e8ac0714dfcb39d299eadef4714bc4d012cf56f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jun 25 18:41:26.758183 containerd[1443]: time="2024-06-25T18:41:26.758144380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6vxwj,Uid:4e488bb2-53dd-47c1-b0d1-f8f946f7269e,Namespace:kube-system,Attempt:0,} returns sandbox id \"35db8ccfc6e43062663eecc37b27540b81bc3e58bcc4ac68cacfbe5b7e4fec89\""
Jun 25 18:41:26.759221 kubelet[2512]: E0625 18:41:26.759193 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:26.775176 containerd[1443]: time="2024-06-25T18:41:26.775123371Z" level=info msg="CreateContainer within sandbox \"44982b31d7e6359c1d0e3d8d3e8ac0714dfcb39d299eadef4714bc4d012cf56f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"824a473f24b42da8f01469c968fab654bd6f42e2da9ac937f0ff4e892637c212\""
Jun 25 18:41:26.775906 containerd[1443]: time="2024-06-25T18:41:26.775825092Z" level=info msg="StartContainer for \"824a473f24b42da8f01469c968fab654bd6f42e2da9ac937f0ff4e892637c212\""
Jun 25 18:41:26.805740 systemd[1]: Started cri-containerd-824a473f24b42da8f01469c968fab654bd6f42e2da9ac937f0ff4e892637c212.scope - libcontainer container 824a473f24b42da8f01469c968fab654bd6f42e2da9ac937f0ff4e892637c212.
Jun 25 18:41:26.840306 containerd[1443]: time="2024-06-25T18:41:26.840257191Z" level=info msg="StartContainer for \"824a473f24b42da8f01469c968fab654bd6f42e2da9ac937f0ff4e892637c212\" returns successfully"
Jun 25 18:41:27.091688 kubelet[2512]: E0625 18:41:27.091486 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:27.828249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount150749437.mount: Deactivated successfully.
Jun 25 18:41:29.360529 containerd[1443]: time="2024-06-25T18:41:29.360442776Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:41:29.361443 containerd[1443]: time="2024-06-25T18:41:29.361335802Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907185"
Jun 25 18:41:29.370118 containerd[1443]: time="2024-06-25T18:41:29.370074875Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:41:29.371559 containerd[1443]: time="2024-06-25T18:41:29.371523644Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.728381608s"
Jun 25 18:41:29.371559 containerd[1443]: time="2024-06-25T18:41:29.371555870Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jun 25 18:41:29.378485 containerd[1443]: time="2024-06-25T18:41:29.378459477Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jun 25 18:41:29.379755 containerd[1443]: time="2024-06-25T18:41:29.379710601Z" level=info msg="CreateContainer within sandbox \"54b46d0c766d88913321a2f6c32760d52501930360fe0be00e0cb22ba9ea2350\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jun 25 18:41:29.715134 containerd[1443]: time="2024-06-25T18:41:29.715067776Z" level=info msg="CreateContainer within sandbox \"54b46d0c766d88913321a2f6c32760d52501930360fe0be00e0cb22ba9ea2350\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"62a57a0d62349147a71652c2e100048ce117cd35146a637cda6f492fb035e03f\""
Jun 25 18:41:29.715735 containerd[1443]: time="2024-06-25T18:41:29.715632858Z" level=info msg="StartContainer for \"62a57a0d62349147a71652c2e100048ce117cd35146a637cda6f492fb035e03f\""
Jun 25 18:41:29.739697 systemd[1]: run-containerd-runc-k8s.io-62a57a0d62349147a71652c2e100048ce117cd35146a637cda6f492fb035e03f-runc.CiGpgf.mount: Deactivated successfully.
Jun 25 18:41:29.747634 systemd[1]: Started cri-containerd-62a57a0d62349147a71652c2e100048ce117cd35146a637cda6f492fb035e03f.scope - libcontainer container 62a57a0d62349147a71652c2e100048ce117cd35146a637cda6f492fb035e03f.
Jun 25 18:41:29.771645 containerd[1443]: time="2024-06-25T18:41:29.771603121Z" level=info msg="StartContainer for \"62a57a0d62349147a71652c2e100048ce117cd35146a637cda6f492fb035e03f\" returns successfully"
Jun 25 18:41:30.098262 kubelet[2512]: E0625 18:41:30.097947 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:30.226051 kubelet[2512]: I0625 18:41:30.225992 2512 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-rvfsg" podStartSLOduration=2.452572007 podCreationTimestamp="2024-06-25 18:41:25 +0000 UTC" firstStartedPulling="2024-06-25 18:41:26.64273517 +0000 UTC m=+14.687637435" lastFinishedPulling="2024-06-25 18:41:29.378252282 +0000 UTC m=+17.423154557" observedRunningTime="2024-06-25 18:41:30.187625078 +0000 UTC m=+18.232527363" watchObservedRunningTime="2024-06-25 18:41:30.188089129 +0000 UTC m=+18.232991424"
Jun 25 18:41:30.226276 kubelet[2512]: I0625 18:41:30.226260 2512 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-nv7lr" podStartSLOduration=5.22621514 podCreationTimestamp="2024-06-25 18:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:41:27.09846898 +0000 UTC m=+15.143371255" watchObservedRunningTime="2024-06-25 18:41:30.22621514 +0000 UTC m=+18.271117415"
Jun 25 18:41:31.099231 kubelet[2512]: E0625 18:41:31.099169 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:36.131180 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3564719045.mount: Deactivated successfully.
Jun 25 18:41:38.950218 containerd[1443]: time="2024-06-25T18:41:38.950157056Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:41:38.952430 containerd[1443]: time="2024-06-25T18:41:38.952355128Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735363"
Jun 25 18:41:38.953357 containerd[1443]: time="2024-06-25T18:41:38.953096487Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:41:38.955066 containerd[1443]: time="2024-06-25T18:41:38.955028184Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.576530398s"
Jun 25 18:41:38.955134 containerd[1443]: time="2024-06-25T18:41:38.955073185Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jun 25 18:41:38.958879 containerd[1443]: time="2024-06-25T18:41:38.958841018Z" level=info msg="CreateContainer within sandbox \"35db8ccfc6e43062663eecc37b27540b81bc3e58bcc4ac68cacfbe5b7e4fec89\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jun 25 18:41:38.970834 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1380664239.mount: Deactivated successfully.
Jun 25 18:41:38.972860 containerd[1443]: time="2024-06-25T18:41:38.972802935Z" level=info msg="CreateContainer within sandbox \"35db8ccfc6e43062663eecc37b27540b81bc3e58bcc4ac68cacfbe5b7e4fec89\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"68520eca74cdb05c02f70e0d89aa3eac1b3331f0ec9c3522c8e906e339d282d8\""
Jun 25 18:41:38.973499 containerd[1443]: time="2024-06-25T18:41:38.973339342Z" level=info msg="StartContainer for \"68520eca74cdb05c02f70e0d89aa3eac1b3331f0ec9c3522c8e906e339d282d8\""
Jun 25 18:41:39.014694 systemd[1]: Started cri-containerd-68520eca74cdb05c02f70e0d89aa3eac1b3331f0ec9c3522c8e906e339d282d8.scope - libcontainer container 68520eca74cdb05c02f70e0d89aa3eac1b3331f0ec9c3522c8e906e339d282d8.
Jun 25 18:41:39.045799 containerd[1443]: time="2024-06-25T18:41:39.045745242Z" level=info msg="StartContainer for \"68520eca74cdb05c02f70e0d89aa3eac1b3331f0ec9c3522c8e906e339d282d8\" returns successfully"
Jun 25 18:41:39.059285 systemd[1]: cri-containerd-68520eca74cdb05c02f70e0d89aa3eac1b3331f0ec9c3522c8e906e339d282d8.scope: Deactivated successfully.
Jun 25 18:41:39.117927 kubelet[2512]: E0625 18:41:39.117893 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:39.518377 containerd[1443]: time="2024-06-25T18:41:39.518290538Z" level=info msg="shim disconnected" id=68520eca74cdb05c02f70e0d89aa3eac1b3331f0ec9c3522c8e906e339d282d8 namespace=k8s.io
Jun 25 18:41:39.518377 containerd[1443]: time="2024-06-25T18:41:39.518357281Z" level=warning msg="cleaning up after shim disconnected" id=68520eca74cdb05c02f70e0d89aa3eac1b3331f0ec9c3522c8e906e339d282d8 namespace=k8s.io
Jun 25 18:41:39.518377 containerd[1443]: time="2024-06-25T18:41:39.518369756Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 25 18:41:39.968775 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68520eca74cdb05c02f70e0d89aa3eac1b3331f0ec9c3522c8e906e339d282d8-rootfs.mount: Deactivated successfully.
Jun 25 18:41:40.120801 kubelet[2512]: E0625 18:41:40.120772 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:40.122533 containerd[1443]: time="2024-06-25T18:41:40.122477158Z" level=info msg="CreateContainer within sandbox \"35db8ccfc6e43062663eecc37b27540b81bc3e58bcc4ac68cacfbe5b7e4fec89\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jun 25 18:41:40.148833 containerd[1443]: time="2024-06-25T18:41:40.148768165Z" level=info msg="CreateContainer within sandbox \"35db8ccfc6e43062663eecc37b27540b81bc3e58bcc4ac68cacfbe5b7e4fec89\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"99eb8b5761f90b721332e900c6662df5a8c13fcc5815694f1c86aa95a2016cb8\""
Jun 25 18:41:40.149713 containerd[1443]: time="2024-06-25T18:41:40.149574989Z" level=info msg="StartContainer for \"99eb8b5761f90b721332e900c6662df5a8c13fcc5815694f1c86aa95a2016cb8\""
Jun 25 18:41:40.185806 systemd[1]: Started cri-containerd-99eb8b5761f90b721332e900c6662df5a8c13fcc5815694f1c86aa95a2016cb8.scope - libcontainer container 99eb8b5761f90b721332e900c6662df5a8c13fcc5815694f1c86aa95a2016cb8.
Jun 25 18:41:40.213800 containerd[1443]: time="2024-06-25T18:41:40.213743154Z" level=info msg="StartContainer for \"99eb8b5761f90b721332e900c6662df5a8c13fcc5815694f1c86aa95a2016cb8\" returns successfully"
Jun 25 18:41:40.225692 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jun 25 18:41:40.226197 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jun 25 18:41:40.226300 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jun 25 18:41:40.233998 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 25 18:41:40.234259 systemd[1]: cri-containerd-99eb8b5761f90b721332e900c6662df5a8c13fcc5815694f1c86aa95a2016cb8.scope: Deactivated successfully.
Jun 25 18:41:40.255198 containerd[1443]: time="2024-06-25T18:41:40.255127005Z" level=info msg="shim disconnected" id=99eb8b5761f90b721332e900c6662df5a8c13fcc5815694f1c86aa95a2016cb8 namespace=k8s.io
Jun 25 18:41:40.255198 containerd[1443]: time="2024-06-25T18:41:40.255184940Z" level=warning msg="cleaning up after shim disconnected" id=99eb8b5761f90b721332e900c6662df5a8c13fcc5815694f1c86aa95a2016cb8 namespace=k8s.io
Jun 25 18:41:40.255198 containerd[1443]: time="2024-06-25T18:41:40.255194870Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 25 18:41:40.262211 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 25 18:41:40.806943 systemd[1]: Started sshd@9-10.0.0.103:22-10.0.0.1:46528.service - OpenSSH per-connection server daemon (10.0.0.1:46528).
Jun 25 18:41:40.845497 sshd[3105]: Accepted publickey for core from 10.0.0.1 port 46528 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA
Jun 25 18:41:40.847305 sshd[3105]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:41:40.852022 systemd-logind[1428]: New session 10 of user core.
Jun 25 18:41:40.864786 systemd[1]: Started session-10.scope - Session 10 of User core.
Jun 25 18:41:40.968821 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99eb8b5761f90b721332e900c6662df5a8c13fcc5815694f1c86aa95a2016cb8-rootfs.mount: Deactivated successfully.
Jun 25 18:41:41.008271 sshd[3105]: pam_unix(sshd:session): session closed for user core
Jun 25 18:41:41.012596 systemd[1]: sshd@9-10.0.0.103:22-10.0.0.1:46528.service: Deactivated successfully.
Jun 25 18:41:41.015317 systemd[1]: session-10.scope: Deactivated successfully.
Jun 25 18:41:41.016075 systemd-logind[1428]: Session 10 logged out. Waiting for processes to exit.
Jun 25 18:41:41.016941 systemd-logind[1428]: Removed session 10.
Jun 25 18:41:41.123967 kubelet[2512]: E0625 18:41:41.123844 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:41.125799 containerd[1443]: time="2024-06-25T18:41:41.125760732Z" level=info msg="CreateContainer within sandbox \"35db8ccfc6e43062663eecc37b27540b81bc3e58bcc4ac68cacfbe5b7e4fec89\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jun 25 18:41:41.144955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2346548540.mount: Deactivated successfully.
Jun 25 18:41:41.147699 containerd[1443]: time="2024-06-25T18:41:41.147662927Z" level=info msg="CreateContainer within sandbox \"35db8ccfc6e43062663eecc37b27540b81bc3e58bcc4ac68cacfbe5b7e4fec89\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fe8b535a31c3d846c8dacf2d5b7f3800835ab8bf9910ea808e40e0187fb2ed39\""
Jun 25 18:41:41.148153 containerd[1443]: time="2024-06-25T18:41:41.148133136Z" level=info msg="StartContainer for \"fe8b535a31c3d846c8dacf2d5b7f3800835ab8bf9910ea808e40e0187fb2ed39\""
Jun 25 18:41:41.179654 systemd[1]: Started cri-containerd-fe8b535a31c3d846c8dacf2d5b7f3800835ab8bf9910ea808e40e0187fb2ed39.scope - libcontainer container fe8b535a31c3d846c8dacf2d5b7f3800835ab8bf9910ea808e40e0187fb2ed39.
Jun 25 18:41:41.211574 systemd[1]: cri-containerd-fe8b535a31c3d846c8dacf2d5b7f3800835ab8bf9910ea808e40e0187fb2ed39.scope: Deactivated successfully.
Jun 25 18:41:41.212622 containerd[1443]: time="2024-06-25T18:41:41.212574406Z" level=info msg="StartContainer for \"fe8b535a31c3d846c8dacf2d5b7f3800835ab8bf9910ea808e40e0187fb2ed39\" returns successfully"
Jun 25 18:41:41.256211 containerd[1443]: time="2024-06-25T18:41:41.256147105Z" level=info msg="shim disconnected" id=fe8b535a31c3d846c8dacf2d5b7f3800835ab8bf9910ea808e40e0187fb2ed39 namespace=k8s.io
Jun 25 18:41:41.256211 containerd[1443]: time="2024-06-25T18:41:41.256204750Z" level=warning msg="cleaning up after shim disconnected" id=fe8b535a31c3d846c8dacf2d5b7f3800835ab8bf9910ea808e40e0187fb2ed39 namespace=k8s.io
Jun 25 18:41:41.256211 containerd[1443]: time="2024-06-25T18:41:41.256213658Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 25 18:41:41.968371 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe8b535a31c3d846c8dacf2d5b7f3800835ab8bf9910ea808e40e0187fb2ed39-rootfs.mount: Deactivated successfully.
Jun 25 18:41:42.127245 kubelet[2512]: E0625 18:41:42.127217 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:42.129270 containerd[1443]: time="2024-06-25T18:41:42.129230625Z" level=info msg="CreateContainer within sandbox \"35db8ccfc6e43062663eecc37b27540b81bc3e58bcc4ac68cacfbe5b7e4fec89\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jun 25 18:41:42.148069 containerd[1443]: time="2024-06-25T18:41:42.148025607Z" level=info msg="CreateContainer within sandbox \"35db8ccfc6e43062663eecc37b27540b81bc3e58bcc4ac68cacfbe5b7e4fec89\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"74699f1da147cddcf9f8405e9648f3736754c3d0fa046273d7e2fdca9129908b\""
Jun 25 18:41:42.148629 containerd[1443]: time="2024-06-25T18:41:42.148586976Z" level=info msg="StartContainer for \"74699f1da147cddcf9f8405e9648f3736754c3d0fa046273d7e2fdca9129908b\""
Jun 25 18:41:42.177600 systemd[1]: Started cri-containerd-74699f1da147cddcf9f8405e9648f3736754c3d0fa046273d7e2fdca9129908b.scope - libcontainer container 74699f1da147cddcf9f8405e9648f3736754c3d0fa046273d7e2fdca9129908b.
Jun 25 18:41:42.204154 systemd[1]: cri-containerd-74699f1da147cddcf9f8405e9648f3736754c3d0fa046273d7e2fdca9129908b.scope: Deactivated successfully.
Jun 25 18:41:42.206951 containerd[1443]: time="2024-06-25T18:41:42.206765656Z" level=info msg="StartContainer for \"74699f1da147cddcf9f8405e9648f3736754c3d0fa046273d7e2fdca9129908b\" returns successfully"
Jun 25 18:41:42.232782 containerd[1443]: time="2024-06-25T18:41:42.232645480Z" level=info msg="shim disconnected" id=74699f1da147cddcf9f8405e9648f3736754c3d0fa046273d7e2fdca9129908b namespace=k8s.io
Jun 25 18:41:42.232782 containerd[1443]: time="2024-06-25T18:41:42.232696742Z" level=warning msg="cleaning up after shim disconnected" id=74699f1da147cddcf9f8405e9648f3736754c3d0fa046273d7e2fdca9129908b namespace=k8s.io
Jun 25 18:41:42.232782 containerd[1443]: time="2024-06-25T18:41:42.232705510Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 25 18:41:42.968465 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-74699f1da147cddcf9f8405e9648f3736754c3d0fa046273d7e2fdca9129908b-rootfs.mount: Deactivated successfully.
Jun 25 18:41:43.131479 kubelet[2512]: E0625 18:41:43.131413 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:43.133705 containerd[1443]: time="2024-06-25T18:41:43.133652224Z" level=info msg="CreateContainer within sandbox \"35db8ccfc6e43062663eecc37b27540b81bc3e58bcc4ac68cacfbe5b7e4fec89\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jun 25 18:41:43.202673 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2306854110.mount: Deactivated successfully.
Jun 25 18:41:43.205732 containerd[1443]: time="2024-06-25T18:41:43.205667237Z" level=info msg="CreateContainer within sandbox \"35db8ccfc6e43062663eecc37b27540b81bc3e58bcc4ac68cacfbe5b7e4fec89\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d52a7e418f0d96213c3d109a91dfe5cc972d14020b8f8460e23a6405dfe60ee8\""
Jun 25 18:41:43.206238 containerd[1443]: time="2024-06-25T18:41:43.206194136Z" level=info msg="StartContainer for \"d52a7e418f0d96213c3d109a91dfe5cc972d14020b8f8460e23a6405dfe60ee8\""
Jun 25 18:41:43.236693 systemd[1]: Started cri-containerd-d52a7e418f0d96213c3d109a91dfe5cc972d14020b8f8460e23a6405dfe60ee8.scope - libcontainer container d52a7e418f0d96213c3d109a91dfe5cc972d14020b8f8460e23a6405dfe60ee8.
Jun 25 18:41:43.271104 containerd[1443]: time="2024-06-25T18:41:43.271045731Z" level=info msg="StartContainer for \"d52a7e418f0d96213c3d109a91dfe5cc972d14020b8f8460e23a6405dfe60ee8\" returns successfully"
Jun 25 18:41:43.355990 kubelet[2512]: I0625 18:41:43.355940 2512 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Jun 25 18:41:43.374994 kubelet[2512]: I0625 18:41:43.374929 2512 topology_manager.go:215] "Topology Admit Handler" podUID="9ec7f7e8-3099-48fc-9055-67f39fab0558" podNamespace="kube-system" podName="coredns-5dd5756b68-chwcw"
Jun 25 18:41:43.377492 kubelet[2512]: I0625 18:41:43.376713 2512 topology_manager.go:215] "Topology Admit Handler" podUID="76469421-732e-44eb-9d9d-6066bd4df8c0" podNamespace="kube-system" podName="coredns-5dd5756b68-vznnr"
Jun 25 18:41:43.388328 systemd[1]: Created slice kubepods-burstable-pod9ec7f7e8_3099_48fc_9055_67f39fab0558.slice - libcontainer container kubepods-burstable-pod9ec7f7e8_3099_48fc_9055_67f39fab0558.slice.
Jun 25 18:41:43.400871 systemd[1]: Created slice kubepods-burstable-pod76469421_732e_44eb_9d9d_6066bd4df8c0.slice - libcontainer container kubepods-burstable-pod76469421_732e_44eb_9d9d_6066bd4df8c0.slice.
Jun 25 18:41:43.540778 kubelet[2512]: I0625 18:41:43.540580 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9v5c\" (UniqueName: \"kubernetes.io/projected/76469421-732e-44eb-9d9d-6066bd4df8c0-kube-api-access-l9v5c\") pod \"coredns-5dd5756b68-vznnr\" (UID: \"76469421-732e-44eb-9d9d-6066bd4df8c0\") " pod="kube-system/coredns-5dd5756b68-vznnr"
Jun 25 18:41:43.540778 kubelet[2512]: I0625 18:41:43.540629 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kfl4\" (UniqueName: \"kubernetes.io/projected/9ec7f7e8-3099-48fc-9055-67f39fab0558-kube-api-access-4kfl4\") pod \"coredns-5dd5756b68-chwcw\" (UID: \"9ec7f7e8-3099-48fc-9055-67f39fab0558\") " pod="kube-system/coredns-5dd5756b68-chwcw"
Jun 25 18:41:43.540778 kubelet[2512]: I0625 18:41:43.540651 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76469421-732e-44eb-9d9d-6066bd4df8c0-config-volume\") pod \"coredns-5dd5756b68-vznnr\" (UID: \"76469421-732e-44eb-9d9d-6066bd4df8c0\") " pod="kube-system/coredns-5dd5756b68-vznnr"
Jun 25 18:41:43.540778 kubelet[2512]: I0625 18:41:43.540671 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ec7f7e8-3099-48fc-9055-67f39fab0558-config-volume\") pod \"coredns-5dd5756b68-chwcw\" (UID: \"9ec7f7e8-3099-48fc-9055-67f39fab0558\") " pod="kube-system/coredns-5dd5756b68-chwcw"
Jun 25 18:41:43.698619 kubelet[2512]: E0625 18:41:43.698591 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:43.705985 kubelet[2512]: E0625 18:41:43.705926 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:43.712988 containerd[1443]: time="2024-06-25T18:41:43.712939738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-chwcw,Uid:9ec7f7e8-3099-48fc-9055-67f39fab0558,Namespace:kube-system,Attempt:0,}"
Jun 25 18:41:43.713295 containerd[1443]: time="2024-06-25T18:41:43.713016200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-vznnr,Uid:76469421-732e-44eb-9d9d-6066bd4df8c0,Namespace:kube-system,Attempt:0,}"
Jun 25 18:41:44.136100 kubelet[2512]: E0625 18:41:44.136063 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:44.149248 kubelet[2512]: I0625 18:41:44.149046 2512 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-6vxwj" podStartSLOduration=6.953820594 podCreationTimestamp="2024-06-25 18:41:25 +0000 UTC" firstStartedPulling="2024-06-25 18:41:26.760197651 +0000 UTC m=+14.805099926" lastFinishedPulling="2024-06-25 18:41:38.955374209 +0000 UTC m=+27.000276484" observedRunningTime="2024-06-25 18:41:44.148540173 +0000 UTC m=+32.193442448" watchObservedRunningTime="2024-06-25 18:41:44.148997152 +0000 UTC m=+32.193899427"
Jun 25 18:41:45.138082 kubelet[2512]: E0625 18:41:45.138047 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:45.445983 systemd-networkd[1368]: cilium_host: Link UP
Jun 25 18:41:45.446224 systemd-networkd[1368]: cilium_net: Link UP
Jun 25 18:41:45.446476 systemd-networkd[1368]: cilium_net: Gained carrier
Jun 25 18:41:45.446755 systemd-networkd[1368]: cilium_host: Gained carrier
Jun 25 18:41:45.446945 systemd-networkd[1368]: cilium_net: Gained IPv6LL
Jun 25 18:41:45.447175 systemd-networkd[1368]: cilium_host: Gained IPv6LL
Jun 25 18:41:45.552993 systemd-networkd[1368]: cilium_vxlan: Link UP
Jun 25 18:41:45.553008 systemd-networkd[1368]: cilium_vxlan: Gained carrier
Jun 25 18:41:45.759547 kernel: NET: Registered PF_ALG protocol family
Jun 25 18:41:46.024892 systemd[1]: Started sshd@10-10.0.0.103:22-10.0.0.1:43576.service - OpenSSH per-connection server daemon (10.0.0.1:43576).
Jun 25 18:41:46.064823 sshd[3536]: Accepted publickey for core from 10.0.0.1 port 43576 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA
Jun 25 18:41:46.066430 sshd[3536]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:41:46.071309 systemd-logind[1428]: New session 11 of user core.
Jun 25 18:41:46.081680 systemd[1]: Started session-11.scope - Session 11 of User core.
Jun 25 18:41:46.139974 kubelet[2512]: E0625 18:41:46.139941 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:46.263763 sshd[3536]: pam_unix(sshd:session): session closed for user core
Jun 25 18:41:46.268193 systemd[1]: sshd@10-10.0.0.103:22-10.0.0.1:43576.service: Deactivated successfully.
Jun 25 18:41:46.270375 systemd[1]: session-11.scope: Deactivated successfully.
Jun 25 18:41:46.271187 systemd-logind[1428]: Session 11 logged out. Waiting for processes to exit.
Jun 25 18:41:46.272263 systemd-logind[1428]: Removed session 11.
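[Editor's note: every entry in this journal excerpt follows the same shape — a syslog-style timestamp, a unit name with its PID in brackets, then a free-form message. A minimal, hypothetical parser for entries in exactly this format (the regex and field names are my own, not part of any journald API; `kernel:` lines without a bracketed PID are deliberately out of scope):]

```python
import re

# Matches e.g. "Jun 25 18:41:46.272263 systemd-logind[1428]: Removed session 11."
LINE_RE = re.compile(
    r"^(?P<ts>\w{3} \d{1,2} \d{2}:\d{2}:\d{2}\.\d+) "  # syslog-style timestamp
    r"(?P<unit>[\w.-]+)\[(?P<pid>\d+)\]: "             # unit name and PID
    r"(?P<msg>.*)$"                                    # free-form message
)

def parse(line: str) -> dict:
    """Return {ts, unit, pid, msg} for a bracketed-PID entry, else {}."""
    m = LINE_RE.match(line)
    return m.groupdict() if m else {}

entry = parse("Jun 25 18:41:46.272263 systemd-logind[1428]: Removed session 11.")
print(entry["unit"], entry["pid"], entry["msg"])
```

A parser like this is enough to, for example, count how often each unit logged during the boot captured above.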
Jun 25 18:41:46.407354 systemd-networkd[1368]: lxc_health: Link UP
Jun 25 18:41:46.418466 systemd-networkd[1368]: lxc_health: Gained carrier
Jun 25 18:41:46.923635 systemd-networkd[1368]: lxc98bfaee1600b: Link UP
Jun 25 18:41:46.930529 systemd-networkd[1368]: lxcea39c26771a6: Link UP
Jun 25 18:41:46.938539 kernel: eth0: renamed from tmp733f1
Jun 25 18:41:46.945557 kernel: eth0: renamed from tmp428a5
Jun 25 18:41:46.951331 systemd-networkd[1368]: lxcea39c26771a6: Gained carrier
Jun 25 18:41:46.952130 systemd-networkd[1368]: lxc98bfaee1600b: Gained carrier
Jun 25 18:41:47.141722 kubelet[2512]: E0625 18:41:47.141690 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:47.369665 systemd-networkd[1368]: cilium_vxlan: Gained IPv6LL
Jun 25 18:41:47.625745 systemd-networkd[1368]: lxc_health: Gained IPv6LL
Jun 25 18:41:48.142868 kubelet[2512]: E0625 18:41:48.142742 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:48.459672 systemd-networkd[1368]: lxc98bfaee1600b: Gained IPv6LL
Jun 25 18:41:48.649768 systemd-networkd[1368]: lxcea39c26771a6: Gained IPv6LL
Jun 25 18:41:49.144314 kubelet[2512]: E0625 18:41:49.144281 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:50.576038 containerd[1443]: time="2024-06-25T18:41:50.575647768Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 25 18:41:50.576038 containerd[1443]: time="2024-06-25T18:41:50.575702606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:41:50.576038 containerd[1443]: time="2024-06-25T18:41:50.575717195Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 25 18:41:50.576038 containerd[1443]: time="2024-06-25T18:41:50.575726623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:41:50.576779 containerd[1443]: time="2024-06-25T18:41:50.575980154Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 25 18:41:50.576779 containerd[1443]: time="2024-06-25T18:41:50.576071975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:41:50.576779 containerd[1443]: time="2024-06-25T18:41:50.576092897Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 25 18:41:50.576779 containerd[1443]: time="2024-06-25T18:41:50.576105742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:41:50.608772 systemd[1]: Started cri-containerd-428a5209c75ee7c7ae838d4cb3c1c73271df59dab69b31d3982ec76a1e70b7ef.scope - libcontainer container 428a5209c75ee7c7ae838d4cb3c1c73271df59dab69b31d3982ec76a1e70b7ef.
Jun 25 18:41:50.610798 systemd[1]: Started cri-containerd-733f1b2d2142ac39e6a6480162093c8a6ca6c17494d79347a329a924c481bf3a.scope - libcontainer container 733f1b2d2142ac39e6a6480162093c8a6ca6c17494d79347a329a924c481bf3a.
Jun 25 18:41:50.624615 systemd-resolved[1317]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jun 25 18:41:50.626653 systemd-resolved[1317]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jun 25 18:41:50.653439 containerd[1443]: time="2024-06-25T18:41:50.653394200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-chwcw,Uid:9ec7f7e8-3099-48fc-9055-67f39fab0558,Namespace:kube-system,Attempt:0,} returns sandbox id \"428a5209c75ee7c7ae838d4cb3c1c73271df59dab69b31d3982ec76a1e70b7ef\""
Jun 25 18:41:50.654896 kubelet[2512]: E0625 18:41:50.654665 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:50.657604 containerd[1443]: time="2024-06-25T18:41:50.657154953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-vznnr,Uid:76469421-732e-44eb-9d9d-6066bd4df8c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"733f1b2d2142ac39e6a6480162093c8a6ca6c17494d79347a329a924c481bf3a\""
Jun 25 18:41:50.658686 kubelet[2512]: E0625 18:41:50.657811 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:50.660874 containerd[1443]: time="2024-06-25T18:41:50.660827601Z" level=info msg="CreateContainer within sandbox \"428a5209c75ee7c7ae838d4cb3c1c73271df59dab69b31d3982ec76a1e70b7ef\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jun 25 18:41:50.661041 containerd[1443]: time="2024-06-25T18:41:50.661012205Z" level=info msg="CreateContainer within sandbox \"733f1b2d2142ac39e6a6480162093c8a6ca6c17494d79347a329a924c481bf3a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jun 25 18:41:50.744722 containerd[1443]: time="2024-06-25T18:41:50.744671268Z" level=info msg="CreateContainer within sandbox \"428a5209c75ee7c7ae838d4cb3c1c73271df59dab69b31d3982ec76a1e70b7ef\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7297f32a1e1e951409ab72855c8c6004a1d2171aaef9d8e9771da7e2cf6ff2a9\""
Jun 25 18:41:50.745323 containerd[1443]: time="2024-06-25T18:41:50.745285350Z" level=info msg="StartContainer for \"7297f32a1e1e951409ab72855c8c6004a1d2171aaef9d8e9771da7e2cf6ff2a9\""
Jun 25 18:41:50.745813 containerd[1443]: time="2024-06-25T18:41:50.745761179Z" level=info msg="CreateContainer within sandbox \"733f1b2d2142ac39e6a6480162093c8a6ca6c17494d79347a329a924c481bf3a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1ece81aec2b7a8c21b1173bf078c8d8d15608ca9f642d30b7058dde490a368e4\""
Jun 25 18:41:50.755060 containerd[1443]: time="2024-06-25T18:41:50.755010505Z" level=info msg="StartContainer for \"1ece81aec2b7a8c21b1173bf078c8d8d15608ca9f642d30b7058dde490a368e4\""
Jun 25 18:41:50.777671 systemd[1]: Started cri-containerd-7297f32a1e1e951409ab72855c8c6004a1d2171aaef9d8e9771da7e2cf6ff2a9.scope - libcontainer container 7297f32a1e1e951409ab72855c8c6004a1d2171aaef9d8e9771da7e2cf6ff2a9.
Jun 25 18:41:50.793801 systemd[1]: Started cri-containerd-1ece81aec2b7a8c21b1173bf078c8d8d15608ca9f642d30b7058dde490a368e4.scope - libcontainer container 1ece81aec2b7a8c21b1173bf078c8d8d15608ca9f642d30b7058dde490a368e4.
Jun 25 18:41:50.816380 containerd[1443]: time="2024-06-25T18:41:50.816321007Z" level=info msg="StartContainer for \"7297f32a1e1e951409ab72855c8c6004a1d2171aaef9d8e9771da7e2cf6ff2a9\" returns successfully"
Jun 25 18:41:50.825733 containerd[1443]: time="2024-06-25T18:41:50.825693185Z" level=info msg="StartContainer for \"1ece81aec2b7a8c21b1173bf078c8d8d15608ca9f642d30b7058dde490a368e4\" returns successfully"
Jun 25 18:41:51.149134 kubelet[2512]: E0625 18:41:51.149084 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:51.153584 kubelet[2512]: E0625 18:41:51.153188 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:51.177556 kubelet[2512]: I0625 18:41:51.177482 2512 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-chwcw" podStartSLOduration=26.177432834 podCreationTimestamp="2024-06-25 18:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:41:51.17686294 +0000 UTC m=+39.221765235" watchObservedRunningTime="2024-06-25 18:41:51.177432834 +0000 UTC m=+39.222335109"
Jun 25 18:41:51.178867 kubelet[2512]: I0625 18:41:51.178819 2512 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-vznnr" podStartSLOduration=26.178764169 podCreationTimestamp="2024-06-25 18:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:41:51.1645553 +0000 UTC m=+39.209457575" watchObservedRunningTime="2024-06-25 18:41:51.178764169 +0000 UTC m=+39.223666444"
Jun 25 18:41:51.284914 systemd[1]: Started sshd@11-10.0.0.103:22-10.0.0.1:43588.service - OpenSSH per-connection server daemon (10.0.0.1:43588).
Jun 25 18:41:51.341244 sshd[3937]: Accepted publickey for core from 10.0.0.1 port 43588 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA
Jun 25 18:41:51.343035 sshd[3937]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:41:51.347664 systemd-logind[1428]: New session 12 of user core.
Jun 25 18:41:51.361698 systemd[1]: Started session-12.scope - Session 12 of User core.
Jun 25 18:41:51.557217 sshd[3937]: pam_unix(sshd:session): session closed for user core
Jun 25 18:41:51.562610 systemd[1]: sshd@11-10.0.0.103:22-10.0.0.1:43588.service: Deactivated successfully.
Jun 25 18:41:51.564931 systemd[1]: session-12.scope: Deactivated successfully.
Jun 25 18:41:51.565670 systemd-logind[1428]: Session 12 logged out. Waiting for processes to exit.
Jun 25 18:41:51.566621 systemd-logind[1428]: Removed session 12.
Jun 25 18:41:52.154703 kubelet[2512]: E0625 18:41:52.154317 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:52.154703 kubelet[2512]: E0625 18:41:52.154533 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:53.155934 kubelet[2512]: E0625 18:41:53.155891 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:53.156300 kubelet[2512]: E0625 18:41:53.156027 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:41:56.570794 systemd[1]: Started sshd@12-10.0.0.103:22-10.0.0.1:57772.service - OpenSSH per-connection server daemon (10.0.0.1:57772).
Jun 25 18:41:56.612174 sshd[3961]: Accepted publickey for core from 10.0.0.1 port 57772 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA
Jun 25 18:41:56.614222 sshd[3961]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:41:56.618556 systemd-logind[1428]: New session 13 of user core.
Jun 25 18:41:56.631645 systemd[1]: Started session-13.scope - Session 13 of User core.
Jun 25 18:41:56.761937 sshd[3961]: pam_unix(sshd:session): session closed for user core
Jun 25 18:41:56.773467 systemd[1]: sshd@12-10.0.0.103:22-10.0.0.1:57772.service: Deactivated successfully.
Jun 25 18:41:56.775367 systemd[1]: session-13.scope: Deactivated successfully.
Jun 25 18:41:56.777302 systemd-logind[1428]: Session 13 logged out. Waiting for processes to exit.
Jun 25 18:41:56.794988 systemd[1]: Started sshd@13-10.0.0.103:22-10.0.0.1:57780.service - OpenSSH per-connection server daemon (10.0.0.1:57780).
Jun 25 18:41:56.796075 systemd-logind[1428]: Removed session 13.
Jun 25 18:41:56.827973 sshd[3977]: Accepted publickey for core from 10.0.0.1 port 57780 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA
Jun 25 18:41:56.829529 sshd[3977]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:41:56.833595 systemd-logind[1428]: New session 14 of user core.
Jun 25 18:41:56.846772 systemd[1]: Started session-14.scope - Session 14 of User core.
Jun 25 18:41:57.542396 sshd[3977]: pam_unix(sshd:session): session closed for user core
Jun 25 18:41:57.554992 systemd[1]: sshd@13-10.0.0.103:22-10.0.0.1:57780.service: Deactivated successfully.
Jun 25 18:41:57.558464 systemd[1]: session-14.scope: Deactivated successfully.
Jun 25 18:41:57.561471 systemd-logind[1428]: Session 14 logged out. Waiting for processes to exit.
Jun 25 18:41:57.567843 systemd[1]: Started sshd@14-10.0.0.103:22-10.0.0.1:57786.service - OpenSSH per-connection server daemon (10.0.0.1:57786).
Jun 25 18:41:57.568595 systemd-logind[1428]: Removed session 14.
Jun 25 18:41:57.604385 sshd[3991]: Accepted publickey for core from 10.0.0.1 port 57786 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA
Jun 25 18:41:57.605987 sshd[3991]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:41:57.609898 systemd-logind[1428]: New session 15 of user core.
Jun 25 18:41:57.619627 systemd[1]: Started session-15.scope - Session 15 of User core.
Jun 25 18:41:57.733374 sshd[3991]: pam_unix(sshd:session): session closed for user core
Jun 25 18:41:57.737594 systemd[1]: sshd@14-10.0.0.103:22-10.0.0.1:57786.service: Deactivated successfully.
Jun 25 18:41:57.739729 systemd[1]: session-15.scope: Deactivated successfully.
Jun 25 18:41:57.740481 systemd-logind[1428]: Session 15 logged out. Waiting for processes to exit.
Jun 25 18:41:57.741409 systemd-logind[1428]: Removed session 15.
Jun 25 18:42:02.745777 systemd[1]: Started sshd@15-10.0.0.103:22-10.0.0.1:57794.service - OpenSSH per-connection server daemon (10.0.0.1:57794).
Jun 25 18:42:02.786028 sshd[4005]: Accepted publickey for core from 10.0.0.1 port 57794 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA
Jun 25 18:42:02.787809 sshd[4005]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:42:02.791951 systemd-logind[1428]: New session 16 of user core.
Jun 25 18:42:02.800660 systemd[1]: Started session-16.scope - Session 16 of User core.
Jun 25 18:42:02.910262 sshd[4005]: pam_unix(sshd:session): session closed for user core
Jun 25 18:42:02.914993 systemd[1]: sshd@15-10.0.0.103:22-10.0.0.1:57794.service: Deactivated successfully.
Jun 25 18:42:02.917074 systemd[1]: session-16.scope: Deactivated successfully.
Jun 25 18:42:02.917884 systemd-logind[1428]: Session 16 logged out. Waiting for processes to exit.
Jun 25 18:42:02.919117 systemd-logind[1428]: Removed session 16.
Jun 25 18:42:07.923179 systemd[1]: Started sshd@16-10.0.0.103:22-10.0.0.1:55612.service - OpenSSH per-connection server daemon (10.0.0.1:55612).
Jun 25 18:42:07.965809 sshd[4019]: Accepted publickey for core from 10.0.0.1 port 55612 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA
Jun 25 18:42:07.967655 sshd[4019]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:42:07.972102 systemd-logind[1428]: New session 17 of user core.
Jun 25 18:42:07.984770 systemd[1]: Started session-17.scope - Session 17 of User core.
Jun 25 18:42:08.094833 sshd[4019]: pam_unix(sshd:session): session closed for user core
Jun 25 18:42:08.099322 systemd[1]: sshd@16-10.0.0.103:22-10.0.0.1:55612.service: Deactivated successfully.
Jun 25 18:42:08.101480 systemd[1]: session-17.scope: Deactivated successfully.
Jun 25 18:42:08.102227 systemd-logind[1428]: Session 17 logged out. Waiting for processes to exit.
Jun 25 18:42:08.103146 systemd-logind[1428]: Removed session 17.
Jun 25 18:42:13.106277 systemd[1]: Started sshd@17-10.0.0.103:22-10.0.0.1:55618.service - OpenSSH per-connection server daemon (10.0.0.1:55618).
Jun 25 18:42:13.144492 sshd[4036]: Accepted publickey for core from 10.0.0.1 port 55618 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA
Jun 25 18:42:13.146417 sshd[4036]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:42:13.151220 systemd-logind[1428]: New session 18 of user core.
Jun 25 18:42:13.160662 systemd[1]: Started session-18.scope - Session 18 of User core.
Jun 25 18:42:13.279812 sshd[4036]: pam_unix(sshd:session): session closed for user core
Jun 25 18:42:13.290037 systemd[1]: sshd@17-10.0.0.103:22-10.0.0.1:55618.service: Deactivated successfully.
Jun 25 18:42:13.292619 systemd[1]: session-18.scope: Deactivated successfully.
Jun 25 18:42:13.294498 systemd-logind[1428]: Session 18 logged out. Waiting for processes to exit.
Jun 25 18:42:13.304876 systemd[1]: Started sshd@18-10.0.0.103:22-10.0.0.1:55622.service - OpenSSH per-connection server daemon (10.0.0.1:55622).
Jun 25 18:42:13.306252 systemd-logind[1428]: Removed session 18.
Jun 25 18:42:13.340498 sshd[4051]: Accepted publickey for core from 10.0.0.1 port 55622 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA
Jun 25 18:42:13.342241 sshd[4051]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:42:13.347122 systemd-logind[1428]: New session 19 of user core.
Jun 25 18:42:13.359712 systemd[1]: Started session-19.scope - Session 19 of User core.
Jun 25 18:42:13.603681 sshd[4051]: pam_unix(sshd:session): session closed for user core
Jun 25 18:42:13.612393 systemd[1]: sshd@18-10.0.0.103:22-10.0.0.1:55622.service: Deactivated successfully.
Jun 25 18:42:13.614828 systemd[1]: session-19.scope: Deactivated successfully.
Jun 25 18:42:13.616477 systemd-logind[1428]: Session 19 logged out. Waiting for processes to exit.
Jun 25 18:42:13.618137 systemd[1]: Started sshd@19-10.0.0.103:22-10.0.0.1:55638.service - OpenSSH per-connection server daemon (10.0.0.1:55638).
Jun 25 18:42:13.619101 systemd-logind[1428]: Removed session 19.
Jun 25 18:42:13.660712 sshd[4063]: Accepted publickey for core from 10.0.0.1 port 55638 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA
Jun 25 18:42:13.662305 sshd[4063]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:42:13.666561 systemd-logind[1428]: New session 20 of user core.
Jun 25 18:42:13.675633 systemd[1]: Started session-20.scope - Session 20 of User core.
Jun 25 18:42:14.545789 sshd[4063]: pam_unix(sshd:session): session closed for user core
Jun 25 18:42:14.556226 systemd[1]: sshd@19-10.0.0.103:22-10.0.0.1:55638.service: Deactivated successfully.
Jun 25 18:42:14.559083 systemd[1]: session-20.scope: Deactivated successfully.
Jun 25 18:42:14.561330 systemd-logind[1428]: Session 20 logged out. Waiting for processes to exit.
Jun 25 18:42:14.571972 systemd[1]: Started sshd@20-10.0.0.103:22-10.0.0.1:55648.service - OpenSSH per-connection server daemon (10.0.0.1:55648).
Jun 25 18:42:14.573603 systemd-logind[1428]: Removed session 20.
Jun 25 18:42:14.607097 sshd[4088]: Accepted publickey for core from 10.0.0.1 port 55648 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA
Jun 25 18:42:14.608708 sshd[4088]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:42:14.612858 systemd-logind[1428]: New session 21 of user core.
Jun 25 18:42:14.617672 systemd[1]: Started session-21.scope - Session 21 of User core.
Jun 25 18:42:14.915135 sshd[4088]: pam_unix(sshd:session): session closed for user core
Jun 25 18:42:14.924343 systemd[1]: sshd@20-10.0.0.103:22-10.0.0.1:55648.service: Deactivated successfully.
Jun 25 18:42:14.927999 systemd[1]: session-21.scope: Deactivated successfully.
Jun 25 18:42:14.930976 systemd-logind[1428]: Session 21 logged out. Waiting for processes to exit.
Jun 25 18:42:14.943881 systemd[1]: Started sshd@21-10.0.0.103:22-10.0.0.1:55654.service - OpenSSH per-connection server daemon (10.0.0.1:55654).
Jun 25 18:42:14.945793 systemd-logind[1428]: Removed session 21.
Jun 25 18:42:14.990366 sshd[4100]: Accepted publickey for core from 10.0.0.1 port 55654 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA
Jun 25 18:42:14.991918 sshd[4100]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:42:14.996675 systemd-logind[1428]: New session 22 of user core.
Jun 25 18:42:15.003697 systemd[1]: Started session-22.scope - Session 22 of User core.
Jun 25 18:42:15.111029 sshd[4100]: pam_unix(sshd:session): session closed for user core
Jun 25 18:42:15.115324 systemd[1]: sshd@21-10.0.0.103:22-10.0.0.1:55654.service: Deactivated successfully.
Jun 25 18:42:15.117438 systemd[1]: session-22.scope: Deactivated successfully.
Jun 25 18:42:15.118249 systemd-logind[1428]: Session 22 logged out. Waiting for processes to exit. Jun 25 18:42:15.119182 systemd-logind[1428]: Removed session 22. Jun 25 18:42:19.051691 kubelet[2512]: E0625 18:42:19.051653 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:42:20.121292 systemd[1]: Started sshd@22-10.0.0.103:22-10.0.0.1:49760.service - OpenSSH per-connection server daemon (10.0.0.1:49760). Jun 25 18:42:20.158472 sshd[4115]: Accepted publickey for core from 10.0.0.1 port 49760 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:42:20.159911 sshd[4115]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:42:20.163852 systemd-logind[1428]: New session 23 of user core. Jun 25 18:42:20.174644 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 25 18:42:20.276474 sshd[4115]: pam_unix(sshd:session): session closed for user core Jun 25 18:42:20.280180 systemd[1]: sshd@22-10.0.0.103:22-10.0.0.1:49760.service: Deactivated successfully. Jun 25 18:42:20.282271 systemd[1]: session-23.scope: Deactivated successfully. Jun 25 18:42:20.282925 systemd-logind[1428]: Session 23 logged out. Waiting for processes to exit. Jun 25 18:42:20.283905 systemd-logind[1428]: Removed session 23. Jun 25 18:42:25.293811 systemd[1]: Started sshd@23-10.0.0.103:22-10.0.0.1:49768.service - OpenSSH per-connection server daemon (10.0.0.1:49768). Jun 25 18:42:25.335648 sshd[4132]: Accepted publickey for core from 10.0.0.1 port 49768 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:42:25.337189 sshd[4132]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:42:25.341970 systemd-logind[1428]: New session 24 of user core. Jun 25 18:42:25.352641 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jun 25 18:42:25.466218 sshd[4132]: pam_unix(sshd:session): session closed for user core Jun 25 18:42:25.471038 systemd[1]: sshd@23-10.0.0.103:22-10.0.0.1:49768.service: Deactivated successfully. Jun 25 18:42:25.473753 systemd[1]: session-24.scope: Deactivated successfully. Jun 25 18:42:25.474646 systemd-logind[1428]: Session 24 logged out. Waiting for processes to exit. Jun 25 18:42:25.475699 systemd-logind[1428]: Removed session 24. Jun 25 18:42:30.477552 systemd[1]: Started sshd@24-10.0.0.103:22-10.0.0.1:59928.service - OpenSSH per-connection server daemon (10.0.0.1:59928). Jun 25 18:42:30.516548 sshd[4148]: Accepted publickey for core from 10.0.0.1 port 59928 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:42:30.518191 sshd[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:42:30.522168 systemd-logind[1428]: New session 25 of user core. Jun 25 18:42:30.530696 systemd[1]: Started session-25.scope - Session 25 of User core. Jun 25 18:42:30.632888 sshd[4148]: pam_unix(sshd:session): session closed for user core Jun 25 18:42:30.637873 systemd[1]: sshd@24-10.0.0.103:22-10.0.0.1:59928.service: Deactivated successfully. Jun 25 18:42:30.640633 systemd[1]: session-25.scope: Deactivated successfully. Jun 25 18:42:30.641405 systemd-logind[1428]: Session 25 logged out. Waiting for processes to exit. Jun 25 18:42:30.642358 systemd-logind[1428]: Removed session 25. Jun 25 18:42:33.051470 kubelet[2512]: E0625 18:42:33.051402 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:42:35.644612 systemd[1]: Started sshd@25-10.0.0.103:22-10.0.0.1:59944.service - OpenSSH per-connection server daemon (10.0.0.1:59944). 
Jun 25 18:42:35.683201 sshd[4162]: Accepted publickey for core from 10.0.0.1 port 59944 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:42:35.684840 sshd[4162]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:42:35.688822 systemd-logind[1428]: New session 26 of user core. Jun 25 18:42:35.697651 systemd[1]: Started session-26.scope - Session 26 of User core. Jun 25 18:42:35.799040 sshd[4162]: pam_unix(sshd:session): session closed for user core Jun 25 18:42:35.808210 systemd[1]: sshd@25-10.0.0.103:22-10.0.0.1:59944.service: Deactivated successfully. Jun 25 18:42:35.810014 systemd[1]: session-26.scope: Deactivated successfully. Jun 25 18:42:35.811739 systemd-logind[1428]: Session 26 logged out. Waiting for processes to exit. Jun 25 18:42:35.821775 systemd[1]: Started sshd@26-10.0.0.103:22-10.0.0.1:59952.service - OpenSSH per-connection server daemon (10.0.0.1:59952). Jun 25 18:42:35.822733 systemd-logind[1428]: Removed session 26. Jun 25 18:42:35.855643 sshd[4176]: Accepted publickey for core from 10.0.0.1 port 59952 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:42:35.856944 sshd[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:42:35.861348 systemd-logind[1428]: New session 27 of user core. Jun 25 18:42:35.866773 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jun 25 18:42:37.051225 kubelet[2512]: E0625 18:42:37.051187 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:42:37.201317 containerd[1443]: time="2024-06-25T18:42:37.201257574Z" level=info msg="StopContainer for \"62a57a0d62349147a71652c2e100048ce117cd35146a637cda6f492fb035e03f\" with timeout 30 (s)" Jun 25 18:42:37.214882 containerd[1443]: time="2024-06-25T18:42:37.214839454Z" level=info msg="Stop container \"62a57a0d62349147a71652c2e100048ce117cd35146a637cda6f492fb035e03f\" with signal terminated" Jun 25 18:42:37.234497 systemd[1]: run-containerd-runc-k8s.io-d52a7e418f0d96213c3d109a91dfe5cc972d14020b8f8460e23a6405dfe60ee8-runc.6W0TDE.mount: Deactivated successfully. Jun 25 18:42:37.235247 systemd[1]: cri-containerd-62a57a0d62349147a71652c2e100048ce117cd35146a637cda6f492fb035e03f.scope: Deactivated successfully. Jun 25 18:42:37.257285 containerd[1443]: time="2024-06-25T18:42:37.257218272Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 18:42:37.257929 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-62a57a0d62349147a71652c2e100048ce117cd35146a637cda6f492fb035e03f-rootfs.mount: Deactivated successfully. 
Jun 25 18:42:37.258872 containerd[1443]: time="2024-06-25T18:42:37.258849608Z" level=info msg="StopContainer for \"d52a7e418f0d96213c3d109a91dfe5cc972d14020b8f8460e23a6405dfe60ee8\" with timeout 2 (s)" Jun 25 18:42:37.259128 containerd[1443]: time="2024-06-25T18:42:37.259106996Z" level=info msg="Stop container \"d52a7e418f0d96213c3d109a91dfe5cc972d14020b8f8460e23a6405dfe60ee8\" with signal terminated" Jun 25 18:42:37.266326 systemd-networkd[1368]: lxc_health: Link DOWN Jun 25 18:42:37.266337 systemd-networkd[1368]: lxc_health: Lost carrier Jun 25 18:42:37.270168 containerd[1443]: time="2024-06-25T18:42:37.270110286Z" level=info msg="shim disconnected" id=62a57a0d62349147a71652c2e100048ce117cd35146a637cda6f492fb035e03f namespace=k8s.io Jun 25 18:42:37.270283 containerd[1443]: time="2024-06-25T18:42:37.270171171Z" level=warning msg="cleaning up after shim disconnected" id=62a57a0d62349147a71652c2e100048ce117cd35146a637cda6f492fb035e03f namespace=k8s.io Jun 25 18:42:37.270283 containerd[1443]: time="2024-06-25T18:42:37.270182542Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:42:37.292974 systemd[1]: cri-containerd-d52a7e418f0d96213c3d109a91dfe5cc972d14020b8f8460e23a6405dfe60ee8.scope: Deactivated successfully. Jun 25 18:42:37.293687 systemd[1]: cri-containerd-d52a7e418f0d96213c3d109a91dfe5cc972d14020b8f8460e23a6405dfe60ee8.scope: Consumed 7.156s CPU time. 
Jun 25 18:42:37.295795 containerd[1443]: time="2024-06-25T18:42:37.295760151Z" level=info msg="StopContainer for \"62a57a0d62349147a71652c2e100048ce117cd35146a637cda6f492fb035e03f\" returns successfully" Jun 25 18:42:37.300538 containerd[1443]: time="2024-06-25T18:42:37.300468289Z" level=info msg="StopPodSandbox for \"54b46d0c766d88913321a2f6c32760d52501930360fe0be00e0cb22ba9ea2350\"" Jun 25 18:42:37.303977 containerd[1443]: time="2024-06-25T18:42:37.300542630Z" level=info msg="Container to stop \"62a57a0d62349147a71652c2e100048ce117cd35146a637cda6f492fb035e03f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:42:37.306616 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-54b46d0c766d88913321a2f6c32760d52501930360fe0be00e0cb22ba9ea2350-shm.mount: Deactivated successfully. Jun 25 18:42:37.313531 systemd[1]: cri-containerd-54b46d0c766d88913321a2f6c32760d52501930360fe0be00e0cb22ba9ea2350.scope: Deactivated successfully. Jun 25 18:42:37.316658 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d52a7e418f0d96213c3d109a91dfe5cc972d14020b8f8460e23a6405dfe60ee8-rootfs.mount: Deactivated successfully. 
Jun 25 18:42:37.323534 containerd[1443]: time="2024-06-25T18:42:37.323466706Z" level=info msg="shim disconnected" id=d52a7e418f0d96213c3d109a91dfe5cc972d14020b8f8460e23a6405dfe60ee8 namespace=k8s.io Jun 25 18:42:37.323534 containerd[1443]: time="2024-06-25T18:42:37.323531719Z" level=warning msg="cleaning up after shim disconnected" id=d52a7e418f0d96213c3d109a91dfe5cc972d14020b8f8460e23a6405dfe60ee8 namespace=k8s.io Jun 25 18:42:37.323739 containerd[1443]: time="2024-06-25T18:42:37.323542530Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:42:37.338402 containerd[1443]: time="2024-06-25T18:42:37.338321293Z" level=info msg="shim disconnected" id=54b46d0c766d88913321a2f6c32760d52501930360fe0be00e0cb22ba9ea2350 namespace=k8s.io Jun 25 18:42:37.338402 containerd[1443]: time="2024-06-25T18:42:37.338377670Z" level=warning msg="cleaning up after shim disconnected" id=54b46d0c766d88913321a2f6c32760d52501930360fe0be00e0cb22ba9ea2350 namespace=k8s.io Jun 25 18:42:37.338402 containerd[1443]: time="2024-06-25T18:42:37.338388892Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:42:37.343969 containerd[1443]: time="2024-06-25T18:42:37.343929255Z" level=info msg="StopContainer for \"d52a7e418f0d96213c3d109a91dfe5cc972d14020b8f8460e23a6405dfe60ee8\" returns successfully" Jun 25 18:42:37.344437 containerd[1443]: time="2024-06-25T18:42:37.344410596Z" level=info msg="StopPodSandbox for \"35db8ccfc6e43062663eecc37b27540b81bc3e58bcc4ac68cacfbe5b7e4fec89\"" Jun 25 18:42:37.344525 containerd[1443]: time="2024-06-25T18:42:37.344456222Z" level=info msg="Container to stop \"99eb8b5761f90b721332e900c6662df5a8c13fcc5815694f1c86aa95a2016cb8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:42:37.344567 containerd[1443]: time="2024-06-25T18:42:37.344501869Z" level=info msg="Container to stop \"d52a7e418f0d96213c3d109a91dfe5cc972d14020b8f8460e23a6405dfe60ee8\" must be in running or unknown state, current state 
\"CONTAINER_EXITED\"" Jun 25 18:42:37.344567 containerd[1443]: time="2024-06-25T18:42:37.344535802Z" level=info msg="Container to stop \"68520eca74cdb05c02f70e0d89aa3eac1b3331f0ec9c3522c8e906e339d282d8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:42:37.344567 containerd[1443]: time="2024-06-25T18:42:37.344548978Z" level=info msg="Container to stop \"fe8b535a31c3d846c8dacf2d5b7f3800835ab8bf9910ea808e40e0187fb2ed39\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:42:37.344567 containerd[1443]: time="2024-06-25T18:42:37.344562333Z" level=info msg="Container to stop \"74699f1da147cddcf9f8405e9648f3736754c3d0fa046273d7e2fdca9129908b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:42:37.351183 systemd[1]: cri-containerd-35db8ccfc6e43062663eecc37b27540b81bc3e58bcc4ac68cacfbe5b7e4fec89.scope: Deactivated successfully. Jun 25 18:42:37.352470 containerd[1443]: time="2024-06-25T18:42:37.352436963Z" level=info msg="TearDown network for sandbox \"54b46d0c766d88913321a2f6c32760d52501930360fe0be00e0cb22ba9ea2350\" successfully" Jun 25 18:42:37.352725 containerd[1443]: time="2024-06-25T18:42:37.352572440Z" level=info msg="StopPodSandbox for \"54b46d0c766d88913321a2f6c32760d52501930360fe0be00e0cb22ba9ea2350\" returns successfully" Jun 25 18:42:37.375417 containerd[1443]: time="2024-06-25T18:42:37.375326283Z" level=info msg="shim disconnected" id=35db8ccfc6e43062663eecc37b27540b81bc3e58bcc4ac68cacfbe5b7e4fec89 namespace=k8s.io Jun 25 18:42:37.375417 containerd[1443]: time="2024-06-25T18:42:37.375380165Z" level=warning msg="cleaning up after shim disconnected" id=35db8ccfc6e43062663eecc37b27540b81bc3e58bcc4ac68cacfbe5b7e4fec89 namespace=k8s.io Jun 25 18:42:37.375417 containerd[1443]: time="2024-06-25T18:42:37.375390084Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:42:37.388800 containerd[1443]: time="2024-06-25T18:42:37.388733392Z" level=info 
msg="TearDown network for sandbox \"35db8ccfc6e43062663eecc37b27540b81bc3e58bcc4ac68cacfbe5b7e4fec89\" successfully" Jun 25 18:42:37.388800 containerd[1443]: time="2024-06-25T18:42:37.388772046Z" level=info msg="StopPodSandbox for \"35db8ccfc6e43062663eecc37b27540b81bc3e58bcc4ac68cacfbe5b7e4fec89\" returns successfully" Jun 25 18:42:37.537297 kubelet[2512]: I0625 18:42:37.537243 2512 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-hostproc\") pod \"4e488bb2-53dd-47c1-b0d1-f8f946f7269e\" (UID: \"4e488bb2-53dd-47c1-b0d1-f8f946f7269e\") " Jun 25 18:42:37.537297 kubelet[2512]: I0625 18:42:37.537301 2512 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvx2s\" (UniqueName: \"kubernetes.io/projected/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-kube-api-access-jvx2s\") pod \"4e488bb2-53dd-47c1-b0d1-f8f946f7269e\" (UID: \"4e488bb2-53dd-47c1-b0d1-f8f946f7269e\") " Jun 25 18:42:37.537936 kubelet[2512]: I0625 18:42:37.537325 2512 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-cilium-cgroup\") pod \"4e488bb2-53dd-47c1-b0d1-f8f946f7269e\" (UID: \"4e488bb2-53dd-47c1-b0d1-f8f946f7269e\") " Jun 25 18:42:37.537936 kubelet[2512]: I0625 18:42:37.537317 2512 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-hostproc" (OuterVolumeSpecName: "hostproc") pod "4e488bb2-53dd-47c1-b0d1-f8f946f7269e" (UID: "4e488bb2-53dd-47c1-b0d1-f8f946f7269e"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:42:37.537936 kubelet[2512]: I0625 18:42:37.537353 2512 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxn7z\" (UniqueName: \"kubernetes.io/projected/13770004-3c48-4ed4-9d3c-2bfd60173d34-kube-api-access-vxn7z\") pod \"13770004-3c48-4ed4-9d3c-2bfd60173d34\" (UID: \"13770004-3c48-4ed4-9d3c-2bfd60173d34\") " Jun 25 18:42:37.537936 kubelet[2512]: I0625 18:42:37.537375 2512 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-lib-modules\") pod \"4e488bb2-53dd-47c1-b0d1-f8f946f7269e\" (UID: \"4e488bb2-53dd-47c1-b0d1-f8f946f7269e\") " Jun 25 18:42:37.537936 kubelet[2512]: I0625 18:42:37.537375 2512 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4e488bb2-53dd-47c1-b0d1-f8f946f7269e" (UID: "4e488bb2-53dd-47c1-b0d1-f8f946f7269e"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:42:37.537936 kubelet[2512]: I0625 18:42:37.537396 2512 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-cilium-run\") pod \"4e488bb2-53dd-47c1-b0d1-f8f946f7269e\" (UID: \"4e488bb2-53dd-47c1-b0d1-f8f946f7269e\") " Jun 25 18:42:37.538135 kubelet[2512]: I0625 18:42:37.537411 2512 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-host-proc-sys-kernel\") pod \"4e488bb2-53dd-47c1-b0d1-f8f946f7269e\" (UID: \"4e488bb2-53dd-47c1-b0d1-f8f946f7269e\") " Jun 25 18:42:37.538135 kubelet[2512]: I0625 18:42:37.537438 2512 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/13770004-3c48-4ed4-9d3c-2bfd60173d34-cilium-config-path\") pod \"13770004-3c48-4ed4-9d3c-2bfd60173d34\" (UID: \"13770004-3c48-4ed4-9d3c-2bfd60173d34\") " Jun 25 18:42:37.538135 kubelet[2512]: I0625 18:42:37.537445 2512 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4e488bb2-53dd-47c1-b0d1-f8f946f7269e" (UID: "4e488bb2-53dd-47c1-b0d1-f8f946f7269e"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:42:37.538135 kubelet[2512]: I0625 18:42:37.537466 2512 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-etc-cni-netd\") pod \"4e488bb2-53dd-47c1-b0d1-f8f946f7269e\" (UID: \"4e488bb2-53dd-47c1-b0d1-f8f946f7269e\") " Jun 25 18:42:37.538135 kubelet[2512]: I0625 18:42:37.537470 2512 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4e488bb2-53dd-47c1-b0d1-f8f946f7269e" (UID: "4e488bb2-53dd-47c1-b0d1-f8f946f7269e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:42:37.538308 kubelet[2512]: I0625 18:42:37.537492 2512 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-hubble-tls\") pod \"4e488bb2-53dd-47c1-b0d1-f8f946f7269e\" (UID: \"4e488bb2-53dd-47c1-b0d1-f8f946f7269e\") " Jun 25 18:42:37.538308 kubelet[2512]: I0625 18:42:37.537532 2512 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-cni-path\") pod \"4e488bb2-53dd-47c1-b0d1-f8f946f7269e\" (UID: \"4e488bb2-53dd-47c1-b0d1-f8f946f7269e\") " Jun 25 18:42:37.538308 kubelet[2512]: I0625 18:42:37.537555 2512 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-host-proc-sys-net\") pod \"4e488bb2-53dd-47c1-b0d1-f8f946f7269e\" (UID: \"4e488bb2-53dd-47c1-b0d1-f8f946f7269e\") " Jun 25 18:42:37.538308 kubelet[2512]: I0625 18:42:37.537577 2512 reconciler_common.go:172] "operationExecutor.UnmountVolume 
started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-bpf-maps\") pod \"4e488bb2-53dd-47c1-b0d1-f8f946f7269e\" (UID: \"4e488bb2-53dd-47c1-b0d1-f8f946f7269e\") " Jun 25 18:42:37.538308 kubelet[2512]: I0625 18:42:37.537608 2512 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-cilium-config-path\") pod \"4e488bb2-53dd-47c1-b0d1-f8f946f7269e\" (UID: \"4e488bb2-53dd-47c1-b0d1-f8f946f7269e\") " Jun 25 18:42:37.538308 kubelet[2512]: I0625 18:42:37.537633 2512 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-xtables-lock\") pod \"4e488bb2-53dd-47c1-b0d1-f8f946f7269e\" (UID: \"4e488bb2-53dd-47c1-b0d1-f8f946f7269e\") " Jun 25 18:42:37.538498 kubelet[2512]: I0625 18:42:37.537659 2512 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-clustermesh-secrets\") pod \"4e488bb2-53dd-47c1-b0d1-f8f946f7269e\" (UID: \"4e488bb2-53dd-47c1-b0d1-f8f946f7269e\") " Jun 25 18:42:37.538498 kubelet[2512]: I0625 18:42:37.537690 2512 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-hostproc\") on node \"localhost\" DevicePath \"\"" Jun 25 18:42:37.538498 kubelet[2512]: I0625 18:42:37.537706 2512 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jun 25 18:42:37.538498 kubelet[2512]: I0625 18:42:37.537731 2512 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-cilium-run\") on node \"localhost\" DevicePath \"\"" Jun 25 18:42:37.538498 kubelet[2512]: I0625 18:42:37.537754 2512 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-lib-modules\") on node \"localhost\" DevicePath \"\"" Jun 25 18:42:37.540582 kubelet[2512]: I0625 18:42:37.540555 2512 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4e488bb2-53dd-47c1-b0d1-f8f946f7269e" (UID: "4e488bb2-53dd-47c1-b0d1-f8f946f7269e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:42:37.541142 kubelet[2512]: I0625 18:42:37.541076 2512 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4e488bb2-53dd-47c1-b0d1-f8f946f7269e" (UID: "4e488bb2-53dd-47c1-b0d1-f8f946f7269e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:42:37.542237 kubelet[2512]: I0625 18:42:37.541342 2512 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4e488bb2-53dd-47c1-b0d1-f8f946f7269e" (UID: "4e488bb2-53dd-47c1-b0d1-f8f946f7269e"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:42:37.542237 kubelet[2512]: I0625 18:42:37.541374 2512 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-cni-path" (OuterVolumeSpecName: "cni-path") pod "4e488bb2-53dd-47c1-b0d1-f8f946f7269e" (UID: "4e488bb2-53dd-47c1-b0d1-f8f946f7269e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:42:37.542237 kubelet[2512]: I0625 18:42:37.541407 2512 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4e488bb2-53dd-47c1-b0d1-f8f946f7269e" (UID: "4e488bb2-53dd-47c1-b0d1-f8f946f7269e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:42:37.542237 kubelet[2512]: I0625 18:42:37.541714 2512 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13770004-3c48-4ed4-9d3c-2bfd60173d34-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "13770004-3c48-4ed4-9d3c-2bfd60173d34" (UID: "13770004-3c48-4ed4-9d3c-2bfd60173d34"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 25 18:42:37.542237 kubelet[2512]: I0625 18:42:37.541799 2512 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13770004-3c48-4ed4-9d3c-2bfd60173d34-kube-api-access-vxn7z" (OuterVolumeSpecName: "kube-api-access-vxn7z") pod "13770004-3c48-4ed4-9d3c-2bfd60173d34" (UID: "13770004-3c48-4ed4-9d3c-2bfd60173d34"). InnerVolumeSpecName "kube-api-access-vxn7z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 25 18:42:37.542530 kubelet[2512]: I0625 18:42:37.541840 2512 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4e488bb2-53dd-47c1-b0d1-f8f946f7269e" (UID: "4e488bb2-53dd-47c1-b0d1-f8f946f7269e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:42:37.542530 kubelet[2512]: I0625 18:42:37.542329 2512 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4e488bb2-53dd-47c1-b0d1-f8f946f7269e" (UID: "4e488bb2-53dd-47c1-b0d1-f8f946f7269e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 25 18:42:37.543261 kubelet[2512]: I0625 18:42:37.543210 2512 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-kube-api-access-jvx2s" (OuterVolumeSpecName: "kube-api-access-jvx2s") pod "4e488bb2-53dd-47c1-b0d1-f8f946f7269e" (UID: "4e488bb2-53dd-47c1-b0d1-f8f946f7269e"). InnerVolumeSpecName "kube-api-access-jvx2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 25 18:42:37.543777 kubelet[2512]: I0625 18:42:37.543736 2512 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4e488bb2-53dd-47c1-b0d1-f8f946f7269e" (UID: "4e488bb2-53dd-47c1-b0d1-f8f946f7269e"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 25 18:42:37.545332 kubelet[2512]: I0625 18:42:37.545310 2512 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4e488bb2-53dd-47c1-b0d1-f8f946f7269e" (UID: "4e488bb2-53dd-47c1-b0d1-f8f946f7269e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 25 18:42:37.638180 kubelet[2512]: I0625 18:42:37.638019 2512 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-jvx2s\" (UniqueName: \"kubernetes.io/projected/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-kube-api-access-jvx2s\") on node \"localhost\" DevicePath \"\"" Jun 25 18:42:37.638180 kubelet[2512]: I0625 18:42:37.638066 2512 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jun 25 18:42:37.638180 kubelet[2512]: I0625 18:42:37.638080 2512 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-vxn7z\" (UniqueName: \"kubernetes.io/projected/13770004-3c48-4ed4-9d3c-2bfd60173d34-kube-api-access-vxn7z\") on node \"localhost\" DevicePath \"\"" Jun 25 18:42:37.638180 kubelet[2512]: I0625 18:42:37.638097 2512 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/13770004-3c48-4ed4-9d3c-2bfd60173d34-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jun 25 18:42:37.638180 kubelet[2512]: I0625 18:42:37.638110 2512 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jun 25 18:42:37.638180 kubelet[2512]: I0625 18:42:37.638123 2512 reconciler_common.go:300] 
"Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jun 25 18:42:37.638180 kubelet[2512]: I0625 18:42:37.638136 2512 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-cni-path\") on node \"localhost\" DevicePath \"\"" Jun 25 18:42:37.638180 kubelet[2512]: I0625 18:42:37.638148 2512 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jun 25 18:42:37.638530 kubelet[2512]: I0625 18:42:37.638160 2512 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jun 25 18:42:37.638530 kubelet[2512]: I0625 18:42:37.638172 2512 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jun 25 18:42:37.638530 kubelet[2512]: I0625 18:42:37.638184 2512 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jun 25 18:42:37.638530 kubelet[2512]: I0625 18:42:37.638196 2512 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4e488bb2-53dd-47c1-b0d1-f8f946f7269e-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jun 25 18:42:38.057657 systemd[1]: Removed slice kubepods-besteffort-pod13770004_3c48_4ed4_9d3c_2bfd60173d34.slice - libcontainer container kubepods-besteffort-pod13770004_3c48_4ed4_9d3c_2bfd60173d34.slice. 
Jun 25 18:42:38.058631 systemd[1]: Removed slice kubepods-burstable-pod4e488bb2_53dd_47c1_b0d1_f8f946f7269e.slice - libcontainer container kubepods-burstable-pod4e488bb2_53dd_47c1_b0d1_f8f946f7269e.slice.
Jun 25 18:42:38.058707 systemd[1]: kubepods-burstable-pod4e488bb2_53dd_47c1_b0d1_f8f946f7269e.slice: Consumed 7.264s CPU time.
Jun 25 18:42:38.229554 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35db8ccfc6e43062663eecc37b27540b81bc3e58bcc4ac68cacfbe5b7e4fec89-rootfs.mount: Deactivated successfully.
Jun 25 18:42:38.229652 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-35db8ccfc6e43062663eecc37b27540b81bc3e58bcc4ac68cacfbe5b7e4fec89-shm.mount: Deactivated successfully.
Jun 25 18:42:38.229729 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54b46d0c766d88913321a2f6c32760d52501930360fe0be00e0cb22ba9ea2350-rootfs.mount: Deactivated successfully.
Jun 25 18:42:38.229801 systemd[1]: var-lib-kubelet-pods-13770004\x2d3c48\x2d4ed4\x2d9d3c\x2d2bfd60173d34-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvxn7z.mount: Deactivated successfully.
Jun 25 18:42:38.229878 systemd[1]: var-lib-kubelet-pods-4e488bb2\x2d53dd\x2d47c1\x2db0d1\x2df8f946f7269e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djvx2s.mount: Deactivated successfully.
Jun 25 18:42:38.229952 systemd[1]: var-lib-kubelet-pods-4e488bb2\x2d53dd\x2d47c1\x2db0d1\x2df8f946f7269e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jun 25 18:42:38.230029 systemd[1]: var-lib-kubelet-pods-4e488bb2\x2d53dd\x2d47c1\x2db0d1\x2df8f946f7269e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jun 25 18:42:38.259075 kubelet[2512]: I0625 18:42:38.259044 2512 scope.go:117] "RemoveContainer" containerID="62a57a0d62349147a71652c2e100048ce117cd35146a637cda6f492fb035e03f"
Jun 25 18:42:38.260411 containerd[1443]: time="2024-06-25T18:42:38.260094461Z" level=info msg="RemoveContainer for \"62a57a0d62349147a71652c2e100048ce117cd35146a637cda6f492fb035e03f\""
Jun 25 18:42:38.315525 containerd[1443]: time="2024-06-25T18:42:38.315403596Z" level=info msg="RemoveContainer for \"62a57a0d62349147a71652c2e100048ce117cd35146a637cda6f492fb035e03f\" returns successfully"
Jun 25 18:42:38.315693 kubelet[2512]: I0625 18:42:38.315667 2512 scope.go:117] "RemoveContainer" containerID="62a57a0d62349147a71652c2e100048ce117cd35146a637cda6f492fb035e03f"
Jun 25 18:42:38.315904 containerd[1443]: time="2024-06-25T18:42:38.315867365Z" level=error msg="ContainerStatus for \"62a57a0d62349147a71652c2e100048ce117cd35146a637cda6f492fb035e03f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"62a57a0d62349147a71652c2e100048ce117cd35146a637cda6f492fb035e03f\": not found"
Jun 25 18:42:38.323083 kubelet[2512]: E0625 18:42:38.323056 2512 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"62a57a0d62349147a71652c2e100048ce117cd35146a637cda6f492fb035e03f\": not found" containerID="62a57a0d62349147a71652c2e100048ce117cd35146a637cda6f492fb035e03f"
Jun 25 18:42:38.323151 kubelet[2512]: I0625 18:42:38.323141 2512 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"62a57a0d62349147a71652c2e100048ce117cd35146a637cda6f492fb035e03f"} err="failed to get container status \"62a57a0d62349147a71652c2e100048ce117cd35146a637cda6f492fb035e03f\": rpc error: code = NotFound desc = an error occurred when try to find container \"62a57a0d62349147a71652c2e100048ce117cd35146a637cda6f492fb035e03f\": not found"
Jun 25 18:42:38.323175 kubelet[2512]: I0625 18:42:38.323155 2512 scope.go:117] "RemoveContainer" containerID="d52a7e418f0d96213c3d109a91dfe5cc972d14020b8f8460e23a6405dfe60ee8"
Jun 25 18:42:38.324208 containerd[1443]: time="2024-06-25T18:42:38.324185464Z" level=info msg="RemoveContainer for \"d52a7e418f0d96213c3d109a91dfe5cc972d14020b8f8460e23a6405dfe60ee8\""
Jun 25 18:42:38.373678 containerd[1443]: time="2024-06-25T18:42:38.373653519Z" level=info msg="RemoveContainer for \"d52a7e418f0d96213c3d109a91dfe5cc972d14020b8f8460e23a6405dfe60ee8\" returns successfully"
Jun 25 18:42:38.373813 kubelet[2512]: I0625 18:42:38.373785 2512 scope.go:117] "RemoveContainer" containerID="74699f1da147cddcf9f8405e9648f3736754c3d0fa046273d7e2fdca9129908b"
Jun 25 18:42:38.374454 containerd[1443]: time="2024-06-25T18:42:38.374434206Z" level=info msg="RemoveContainer for \"74699f1da147cddcf9f8405e9648f3736754c3d0fa046273d7e2fdca9129908b\""
Jun 25 18:42:38.426589 containerd[1443]: time="2024-06-25T18:42:38.426563170Z" level=info msg="RemoveContainer for \"74699f1da147cddcf9f8405e9648f3736754c3d0fa046273d7e2fdca9129908b\" returns successfully"
Jun 25 18:42:38.426935 kubelet[2512]: I0625 18:42:38.426852 2512 scope.go:117] "RemoveContainer" containerID="fe8b535a31c3d846c8dacf2d5b7f3800835ab8bf9910ea808e40e0187fb2ed39"
Jun 25 18:42:38.428157 containerd[1443]: time="2024-06-25T18:42:38.428126360Z" level=info msg="RemoveContainer for \"fe8b535a31c3d846c8dacf2d5b7f3800835ab8bf9910ea808e40e0187fb2ed39\""
Jun 25 18:42:38.529659 containerd[1443]: time="2024-06-25T18:42:38.529615523Z" level=info msg="RemoveContainer for \"fe8b535a31c3d846c8dacf2d5b7f3800835ab8bf9910ea808e40e0187fb2ed39\" returns successfully"
Jun 25 18:42:38.529891 kubelet[2512]: I0625 18:42:38.529847 2512 scope.go:117] "RemoveContainer" containerID="99eb8b5761f90b721332e900c6662df5a8c13fcc5815694f1c86aa95a2016cb8"
Jun 25 18:42:38.530736 containerd[1443]: time="2024-06-25T18:42:38.530712080Z" level=info msg="RemoveContainer for \"99eb8b5761f90b721332e900c6662df5a8c13fcc5815694f1c86aa95a2016cb8\""
Jun 25 18:42:38.613686 containerd[1443]: time="2024-06-25T18:42:38.613557172Z" level=info msg="RemoveContainer for \"99eb8b5761f90b721332e900c6662df5a8c13fcc5815694f1c86aa95a2016cb8\" returns successfully"
Jun 25 18:42:38.613894 kubelet[2512]: I0625 18:42:38.613851 2512 scope.go:117] "RemoveContainer" containerID="68520eca74cdb05c02f70e0d89aa3eac1b3331f0ec9c3522c8e906e339d282d8"
Jun 25 18:42:38.615007 containerd[1443]: time="2024-06-25T18:42:38.614972070Z" level=info msg="RemoveContainer for \"68520eca74cdb05c02f70e0d89aa3eac1b3331f0ec9c3522c8e906e339d282d8\""
Jun 25 18:42:38.618391 containerd[1443]: time="2024-06-25T18:42:38.618337664Z" level=info msg="RemoveContainer for \"68520eca74cdb05c02f70e0d89aa3eac1b3331f0ec9c3522c8e906e339d282d8\" returns successfully"
Jun 25 18:42:38.618519 kubelet[2512]: I0625 18:42:38.618487 2512 scope.go:117] "RemoveContainer" containerID="d52a7e418f0d96213c3d109a91dfe5cc972d14020b8f8460e23a6405dfe60ee8"
Jun 25 18:42:38.618695 containerd[1443]: time="2024-06-25T18:42:38.618655755Z" level=error msg="ContainerStatus for \"d52a7e418f0d96213c3d109a91dfe5cc972d14020b8f8460e23a6405dfe60ee8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d52a7e418f0d96213c3d109a91dfe5cc972d14020b8f8460e23a6405dfe60ee8\": not found"
Jun 25 18:42:38.618836 kubelet[2512]: E0625 18:42:38.618814 2512 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d52a7e418f0d96213c3d109a91dfe5cc972d14020b8f8460e23a6405dfe60ee8\": not found" containerID="d52a7e418f0d96213c3d109a91dfe5cc972d14020b8f8460e23a6405dfe60ee8"
Jun 25 18:42:38.618876 kubelet[2512]: I0625 18:42:38.618858 2512 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d52a7e418f0d96213c3d109a91dfe5cc972d14020b8f8460e23a6405dfe60ee8"} err="failed to get container status \"d52a7e418f0d96213c3d109a91dfe5cc972d14020b8f8460e23a6405dfe60ee8\": rpc error: code = NotFound desc = an error occurred when try to find container \"d52a7e418f0d96213c3d109a91dfe5cc972d14020b8f8460e23a6405dfe60ee8\": not found"
Jun 25 18:42:38.618876 kubelet[2512]: I0625 18:42:38.618869 2512 scope.go:117] "RemoveContainer" containerID="74699f1da147cddcf9f8405e9648f3736754c3d0fa046273d7e2fdca9129908b"
Jun 25 18:42:38.619046 containerd[1443]: time="2024-06-25T18:42:38.619015547Z" level=error msg="ContainerStatus for \"74699f1da147cddcf9f8405e9648f3736754c3d0fa046273d7e2fdca9129908b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"74699f1da147cddcf9f8405e9648f3736754c3d0fa046273d7e2fdca9129908b\": not found"
Jun 25 18:42:38.619269 kubelet[2512]: E0625 18:42:38.619233 2512 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"74699f1da147cddcf9f8405e9648f3736754c3d0fa046273d7e2fdca9129908b\": not found" containerID="74699f1da147cddcf9f8405e9648f3736754c3d0fa046273d7e2fdca9129908b"
Jun 25 18:42:38.619328 kubelet[2512]: I0625 18:42:38.619297 2512 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"74699f1da147cddcf9f8405e9648f3736754c3d0fa046273d7e2fdca9129908b"} err="failed to get container status \"74699f1da147cddcf9f8405e9648f3736754c3d0fa046273d7e2fdca9129908b\": rpc error: code = NotFound desc = an error occurred when try to find container \"74699f1da147cddcf9f8405e9648f3736754c3d0fa046273d7e2fdca9129908b\": not found"
Jun 25 18:42:38.619362 kubelet[2512]: I0625 18:42:38.619331 2512 scope.go:117] "RemoveContainer" containerID="fe8b535a31c3d846c8dacf2d5b7f3800835ab8bf9910ea808e40e0187fb2ed39"
Jun 25 18:42:38.619552 containerd[1443]: time="2024-06-25T18:42:38.619519881Z" level=error msg="ContainerStatus for \"fe8b535a31c3d846c8dacf2d5b7f3800835ab8bf9910ea808e40e0187fb2ed39\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fe8b535a31c3d846c8dacf2d5b7f3800835ab8bf9910ea808e40e0187fb2ed39\": not found"
Jun 25 18:42:38.619713 kubelet[2512]: E0625 18:42:38.619690 2512 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fe8b535a31c3d846c8dacf2d5b7f3800835ab8bf9910ea808e40e0187fb2ed39\": not found" containerID="fe8b535a31c3d846c8dacf2d5b7f3800835ab8bf9910ea808e40e0187fb2ed39"
Jun 25 18:42:38.619778 kubelet[2512]: I0625 18:42:38.619734 2512 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fe8b535a31c3d846c8dacf2d5b7f3800835ab8bf9910ea808e40e0187fb2ed39"} err="failed to get container status \"fe8b535a31c3d846c8dacf2d5b7f3800835ab8bf9910ea808e40e0187fb2ed39\": rpc error: code = NotFound desc = an error occurred when try to find container \"fe8b535a31c3d846c8dacf2d5b7f3800835ab8bf9910ea808e40e0187fb2ed39\": not found"
Jun 25 18:42:38.619778 kubelet[2512]: I0625 18:42:38.619751 2512 scope.go:117] "RemoveContainer" containerID="99eb8b5761f90b721332e900c6662df5a8c13fcc5815694f1c86aa95a2016cb8"
Jun 25 18:42:38.619979 containerd[1443]: time="2024-06-25T18:42:38.619943063Z" level=error msg="ContainerStatus for \"99eb8b5761f90b721332e900c6662df5a8c13fcc5815694f1c86aa95a2016cb8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"99eb8b5761f90b721332e900c6662df5a8c13fcc5815694f1c86aa95a2016cb8\": not found"
Jun 25 18:42:38.620138 kubelet[2512]: E0625 18:42:38.620118 2512 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"99eb8b5761f90b721332e900c6662df5a8c13fcc5815694f1c86aa95a2016cb8\": not found" containerID="99eb8b5761f90b721332e900c6662df5a8c13fcc5815694f1c86aa95a2016cb8"
Jun 25 18:42:38.620191 kubelet[2512]: I0625 18:42:38.620145 2512 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"99eb8b5761f90b721332e900c6662df5a8c13fcc5815694f1c86aa95a2016cb8"} err="failed to get container status \"99eb8b5761f90b721332e900c6662df5a8c13fcc5815694f1c86aa95a2016cb8\": rpc error: code = NotFound desc = an error occurred when try to find container \"99eb8b5761f90b721332e900c6662df5a8c13fcc5815694f1c86aa95a2016cb8\": not found"
Jun 25 18:42:38.620191 kubelet[2512]: I0625 18:42:38.620156 2512 scope.go:117] "RemoveContainer" containerID="68520eca74cdb05c02f70e0d89aa3eac1b3331f0ec9c3522c8e906e339d282d8"
Jun 25 18:42:38.620343 containerd[1443]: time="2024-06-25T18:42:38.620299178Z" level=error msg="ContainerStatus for \"68520eca74cdb05c02f70e0d89aa3eac1b3331f0ec9c3522c8e906e339d282d8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"68520eca74cdb05c02f70e0d89aa3eac1b3331f0ec9c3522c8e906e339d282d8\": not found"
Jun 25 18:42:38.620442 kubelet[2512]: E0625 18:42:38.620424 2512 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"68520eca74cdb05c02f70e0d89aa3eac1b3331f0ec9c3522c8e906e339d282d8\": not found" containerID="68520eca74cdb05c02f70e0d89aa3eac1b3331f0ec9c3522c8e906e339d282d8"
Jun 25 18:42:38.620483 kubelet[2512]: I0625 18:42:38.620454 2512 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"68520eca74cdb05c02f70e0d89aa3eac1b3331f0ec9c3522c8e906e339d282d8"} err="failed to get container status \"68520eca74cdb05c02f70e0d89aa3eac1b3331f0ec9c3522c8e906e339d282d8\": rpc error: code = NotFound desc = an error occurred when try to find container \"68520eca74cdb05c02f70e0d89aa3eac1b3331f0ec9c3522c8e906e339d282d8\": not found"
Jun 25 18:42:39.176376 sshd[4176]: pam_unix(sshd:session): session closed for user core
Jun 25 18:42:39.184277 systemd[1]: sshd@26-10.0.0.103:22-10.0.0.1:59952.service: Deactivated successfully.
Jun 25 18:42:39.186256 systemd[1]: session-27.scope: Deactivated successfully.
Jun 25 18:42:39.188023 systemd-logind[1428]: Session 27 logged out. Waiting for processes to exit.
Jun 25 18:42:39.193798 systemd[1]: Started sshd@27-10.0.0.103:22-10.0.0.1:44312.service - OpenSSH per-connection server daemon (10.0.0.1:44312).
Jun 25 18:42:39.195004 systemd-logind[1428]: Removed session 27.
Jun 25 18:42:39.228706 sshd[4338]: Accepted publickey for core from 10.0.0.1 port 44312 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA
Jun 25 18:42:39.230345 sshd[4338]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:42:39.234605 systemd-logind[1428]: New session 28 of user core.
Jun 25 18:42:39.244748 systemd[1]: Started session-28.scope - Session 28 of User core.
Jun 25 18:42:39.741184 sshd[4338]: pam_unix(sshd:session): session closed for user core
Jun 25 18:42:39.750936 kubelet[2512]: I0625 18:42:39.750865 2512 topology_manager.go:215] "Topology Admit Handler" podUID="5a2ff707-7c2b-4512-89fb-ad85ef86816d" podNamespace="kube-system" podName="cilium-8fgpl"
Jun 25 18:42:39.752852 kubelet[2512]: E0625 18:42:39.752821 2512 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="13770004-3c48-4ed4-9d3c-2bfd60173d34" containerName="cilium-operator"
Jun 25 18:42:39.752906 kubelet[2512]: E0625 18:42:39.752857 2512 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4e488bb2-53dd-47c1-b0d1-f8f946f7269e" containerName="apply-sysctl-overwrites"
Jun 25 18:42:39.752906 kubelet[2512]: E0625 18:42:39.752869 2512 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4e488bb2-53dd-47c1-b0d1-f8f946f7269e" containerName="mount-cgroup"
Jun 25 18:42:39.752906 kubelet[2512]: E0625 18:42:39.752880 2512 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4e488bb2-53dd-47c1-b0d1-f8f946f7269e" containerName="mount-bpf-fs"
Jun 25 18:42:39.752906 kubelet[2512]: E0625 18:42:39.752889 2512 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4e488bb2-53dd-47c1-b0d1-f8f946f7269e" containerName="clean-cilium-state"
Jun 25 18:42:39.752906 kubelet[2512]: E0625 18:42:39.752897 2512 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4e488bb2-53dd-47c1-b0d1-f8f946f7269e" containerName="cilium-agent"
Jun 25 18:42:39.753025 kubelet[2512]: I0625 18:42:39.752931 2512 memory_manager.go:346] "RemoveStaleState removing state" podUID="13770004-3c48-4ed4-9d3c-2bfd60173d34" containerName="cilium-operator"
Jun 25 18:42:39.753025 kubelet[2512]: I0625 18:42:39.752943 2512 memory_manager.go:346] "RemoveStaleState removing state" podUID="4e488bb2-53dd-47c1-b0d1-f8f946f7269e" containerName="cilium-agent"
Jun 25 18:42:39.756100 systemd[1]: sshd@27-10.0.0.103:22-10.0.0.1:44312.service: Deactivated successfully.
Jun 25 18:42:39.758122 systemd[1]: session-28.scope: Deactivated successfully.
Jun 25 18:42:39.766563 systemd-logind[1428]: Session 28 logged out. Waiting for processes to exit.
Jun 25 18:42:39.777152 systemd[1]: Started sshd@28-10.0.0.103:22-10.0.0.1:44328.service - OpenSSH per-connection server daemon (10.0.0.1:44328).
Jun 25 18:42:39.778664 systemd-logind[1428]: Removed session 28.
Jun 25 18:42:39.785295 systemd[1]: Created slice kubepods-burstable-pod5a2ff707_7c2b_4512_89fb_ad85ef86816d.slice - libcontainer container kubepods-burstable-pod5a2ff707_7c2b_4512_89fb_ad85ef86816d.slice.
Jun 25 18:42:39.813424 sshd[4351]: Accepted publickey for core from 10.0.0.1 port 44328 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA
Jun 25 18:42:39.814984 sshd[4351]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:42:39.819023 systemd-logind[1428]: New session 29 of user core.
Jun 25 18:42:39.823645 systemd[1]: Started session-29.scope - Session 29 of User core.
Jun 25 18:42:39.875132 sshd[4351]: pam_unix(sshd:session): session closed for user core
Jun 25 18:42:39.884432 systemd[1]: sshd@28-10.0.0.103:22-10.0.0.1:44328.service: Deactivated successfully.
Jun 25 18:42:39.886265 systemd[1]: session-29.scope: Deactivated successfully.
Jun 25 18:42:39.887969 systemd-logind[1428]: Session 29 logged out. Waiting for processes to exit.
Jun 25 18:42:39.898939 systemd[1]: Started sshd@29-10.0.0.103:22-10.0.0.1:44342.service - OpenSSH per-connection server daemon (10.0.0.1:44342).
Jun 25 18:42:39.899886 systemd-logind[1428]: Removed session 29.
Jun 25 18:42:39.931075 sshd[4361]: Accepted publickey for core from 10.0.0.1 port 44342 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA
Jun 25 18:42:39.932459 sshd[4361]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:42:39.936393 systemd-logind[1428]: New session 30 of user core.
Jun 25 18:42:39.944620 systemd[1]: Started session-30.scope - Session 30 of User core.
Jun 25 18:42:39.951343 kubelet[2512]: I0625 18:42:39.951316 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptcls\" (UniqueName: \"kubernetes.io/projected/5a2ff707-7c2b-4512-89fb-ad85ef86816d-kube-api-access-ptcls\") pod \"cilium-8fgpl\" (UID: \"5a2ff707-7c2b-4512-89fb-ad85ef86816d\") " pod="kube-system/cilium-8fgpl"
Jun 25 18:42:39.951343 kubelet[2512]: I0625 18:42:39.951358 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5a2ff707-7c2b-4512-89fb-ad85ef86816d-bpf-maps\") pod \"cilium-8fgpl\" (UID: \"5a2ff707-7c2b-4512-89fb-ad85ef86816d\") " pod="kube-system/cilium-8fgpl"
Jun 25 18:42:39.951440 kubelet[2512]: I0625 18:42:39.951379 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5a2ff707-7c2b-4512-89fb-ad85ef86816d-cilium-ipsec-secrets\") pod \"cilium-8fgpl\" (UID: \"5a2ff707-7c2b-4512-89fb-ad85ef86816d\") " pod="kube-system/cilium-8fgpl"
Jun 25 18:42:39.951440 kubelet[2512]: I0625 18:42:39.951401 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5a2ff707-7c2b-4512-89fb-ad85ef86816d-host-proc-sys-kernel\") pod \"cilium-8fgpl\" (UID: \"5a2ff707-7c2b-4512-89fb-ad85ef86816d\") " pod="kube-system/cilium-8fgpl"
Jun 25 18:42:39.951550 kubelet[2512]: I0625 18:42:39.951488 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5a2ff707-7c2b-4512-89fb-ad85ef86816d-hostproc\") pod \"cilium-8fgpl\" (UID: \"5a2ff707-7c2b-4512-89fb-ad85ef86816d\") " pod="kube-system/cilium-8fgpl"
Jun 25 18:42:39.951690 kubelet[2512]: I0625 18:42:39.951586 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5a2ff707-7c2b-4512-89fb-ad85ef86816d-xtables-lock\") pod \"cilium-8fgpl\" (UID: \"5a2ff707-7c2b-4512-89fb-ad85ef86816d\") " pod="kube-system/cilium-8fgpl"
Jun 25 18:42:39.951690 kubelet[2512]: I0625 18:42:39.951656 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5a2ff707-7c2b-4512-89fb-ad85ef86816d-cilium-cgroup\") pod \"cilium-8fgpl\" (UID: \"5a2ff707-7c2b-4512-89fb-ad85ef86816d\") " pod="kube-system/cilium-8fgpl"
Jun 25 18:42:39.951741 kubelet[2512]: I0625 18:42:39.951698 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5a2ff707-7c2b-4512-89fb-ad85ef86816d-etc-cni-netd\") pod \"cilium-8fgpl\" (UID: \"5a2ff707-7c2b-4512-89fb-ad85ef86816d\") " pod="kube-system/cilium-8fgpl"
Jun 25 18:42:39.951741 kubelet[2512]: I0625 18:42:39.951734 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5a2ff707-7c2b-4512-89fb-ad85ef86816d-cilium-config-path\") pod \"cilium-8fgpl\" (UID: \"5a2ff707-7c2b-4512-89fb-ad85ef86816d\") " pod="kube-system/cilium-8fgpl"
Jun 25 18:42:39.951784 kubelet[2512]: I0625 18:42:39.951762 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5a2ff707-7c2b-4512-89fb-ad85ef86816d-hubble-tls\") pod \"cilium-8fgpl\" (UID: \"5a2ff707-7c2b-4512-89fb-ad85ef86816d\") " pod="kube-system/cilium-8fgpl"
Jun 25 18:42:39.951784 kubelet[2512]: I0625 18:42:39.951780 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5a2ff707-7c2b-4512-89fb-ad85ef86816d-host-proc-sys-net\") pod \"cilium-8fgpl\" (UID: \"5a2ff707-7c2b-4512-89fb-ad85ef86816d\") " pod="kube-system/cilium-8fgpl"
Jun 25 18:42:39.951834 kubelet[2512]: I0625 18:42:39.951823 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5a2ff707-7c2b-4512-89fb-ad85ef86816d-cni-path\") pod \"cilium-8fgpl\" (UID: \"5a2ff707-7c2b-4512-89fb-ad85ef86816d\") " pod="kube-system/cilium-8fgpl"
Jun 25 18:42:39.951866 kubelet[2512]: I0625 18:42:39.951846 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5a2ff707-7c2b-4512-89fb-ad85ef86816d-cilium-run\") pod \"cilium-8fgpl\" (UID: \"5a2ff707-7c2b-4512-89fb-ad85ef86816d\") " pod="kube-system/cilium-8fgpl"
Jun 25 18:42:39.951866 kubelet[2512]: I0625 18:42:39.951863 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a2ff707-7c2b-4512-89fb-ad85ef86816d-lib-modules\") pod \"cilium-8fgpl\" (UID: \"5a2ff707-7c2b-4512-89fb-ad85ef86816d\") " pod="kube-system/cilium-8fgpl"
Jun 25 18:42:39.951913 kubelet[2512]: I0625 18:42:39.951885 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5a2ff707-7c2b-4512-89fb-ad85ef86816d-clustermesh-secrets\") pod \"cilium-8fgpl\" (UID: \"5a2ff707-7c2b-4512-89fb-ad85ef86816d\") " pod="kube-system/cilium-8fgpl"
Jun 25 18:42:40.054628 kubelet[2512]: I0625 18:42:40.054379 2512 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="13770004-3c48-4ed4-9d3c-2bfd60173d34" path="/var/lib/kubelet/pods/13770004-3c48-4ed4-9d3c-2bfd60173d34/volumes"
Jun 25 18:42:40.055653 kubelet[2512]: I0625 18:42:40.055453 2512 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="4e488bb2-53dd-47c1-b0d1-f8f946f7269e" path="/var/lib/kubelet/pods/4e488bb2-53dd-47c1-b0d1-f8f946f7269e/volumes"
Jun 25 18:42:40.090284 kubelet[2512]: E0625 18:42:40.090204 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:42:40.090725 containerd[1443]: time="2024-06-25T18:42:40.090686949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8fgpl,Uid:5a2ff707-7c2b-4512-89fb-ad85ef86816d,Namespace:kube-system,Attempt:0,}"
Jun 25 18:42:40.111909 containerd[1443]: time="2024-06-25T18:42:40.111756553Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 25 18:42:40.112016 containerd[1443]: time="2024-06-25T18:42:40.111937335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:42:40.112016 containerd[1443]: time="2024-06-25T18:42:40.111978093Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 25 18:42:40.112016 containerd[1443]: time="2024-06-25T18:42:40.111995126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:42:40.134649 systemd[1]: Started cri-containerd-d69b03f9ce3969e163648cded69f42fbff33ce6530011b5f5b202d7ca700e2ce.scope - libcontainer container d69b03f9ce3969e163648cded69f42fbff33ce6530011b5f5b202d7ca700e2ce.
Jun 25 18:42:40.161663 containerd[1443]: time="2024-06-25T18:42:40.161607642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8fgpl,Uid:5a2ff707-7c2b-4512-89fb-ad85ef86816d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d69b03f9ce3969e163648cded69f42fbff33ce6530011b5f5b202d7ca700e2ce\""
Jun 25 18:42:40.162475 kubelet[2512]: E0625 18:42:40.162447 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:42:40.165229 containerd[1443]: time="2024-06-25T18:42:40.165177080Z" level=info msg="CreateContainer within sandbox \"d69b03f9ce3969e163648cded69f42fbff33ce6530011b5f5b202d7ca700e2ce\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jun 25 18:42:40.179103 containerd[1443]: time="2024-06-25T18:42:40.179038589Z" level=info msg="CreateContainer within sandbox \"d69b03f9ce3969e163648cded69f42fbff33ce6530011b5f5b202d7ca700e2ce\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5b8749a5d7a0819cc8c9e5ffec91bb6b3ed80a1f113da3b0cbb7a6831b03210f\""
Jun 25 18:42:40.179690 containerd[1443]: time="2024-06-25T18:42:40.179646171Z" level=info msg="StartContainer for \"5b8749a5d7a0819cc8c9e5ffec91bb6b3ed80a1f113da3b0cbb7a6831b03210f\""
Jun 25 18:42:40.208655 systemd[1]: Started cri-containerd-5b8749a5d7a0819cc8c9e5ffec91bb6b3ed80a1f113da3b0cbb7a6831b03210f.scope - libcontainer container 5b8749a5d7a0819cc8c9e5ffec91bb6b3ed80a1f113da3b0cbb7a6831b03210f.
Jun 25 18:42:40.234077 containerd[1443]: time="2024-06-25T18:42:40.234024633Z" level=info msg="StartContainer for \"5b8749a5d7a0819cc8c9e5ffec91bb6b3ed80a1f113da3b0cbb7a6831b03210f\" returns successfully"
Jun 25 18:42:40.245114 systemd[1]: cri-containerd-5b8749a5d7a0819cc8c9e5ffec91bb6b3ed80a1f113da3b0cbb7a6831b03210f.scope: Deactivated successfully.
Jun 25 18:42:40.269619 kubelet[2512]: E0625 18:42:40.269577 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:42:40.285689 containerd[1443]: time="2024-06-25T18:42:40.285613988Z" level=info msg="shim disconnected" id=5b8749a5d7a0819cc8c9e5ffec91bb6b3ed80a1f113da3b0cbb7a6831b03210f namespace=k8s.io
Jun 25 18:42:40.285689 containerd[1443]: time="2024-06-25T18:42:40.285682357Z" level=warning msg="cleaning up after shim disconnected" id=5b8749a5d7a0819cc8c9e5ffec91bb6b3ed80a1f113da3b0cbb7a6831b03210f namespace=k8s.io
Jun 25 18:42:40.285689 containerd[1443]: time="2024-06-25T18:42:40.285694971Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 25 18:42:41.271294 kubelet[2512]: E0625 18:42:41.271266 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:42:41.272782 containerd[1443]: time="2024-06-25T18:42:41.272737661Z" level=info msg="CreateContainer within sandbox \"d69b03f9ce3969e163648cded69f42fbff33ce6530011b5f5b202d7ca700e2ce\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jun 25 18:42:41.305411 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2340319575.mount: Deactivated successfully.
Jun 25 18:42:41.346533 containerd[1443]: time="2024-06-25T18:42:41.346447907Z" level=info msg="CreateContainer within sandbox \"d69b03f9ce3969e163648cded69f42fbff33ce6530011b5f5b202d7ca700e2ce\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c65fe63dd95f8f1d2bab55cea8125f169596f3c1f4cc95ae202d65ddc3c31253\""
Jun 25 18:42:41.347182 containerd[1443]: time="2024-06-25T18:42:41.347100445Z" level=info msg="StartContainer for \"c65fe63dd95f8f1d2bab55cea8125f169596f3c1f4cc95ae202d65ddc3c31253\""
Jun 25 18:42:41.377631 systemd[1]: Started cri-containerd-c65fe63dd95f8f1d2bab55cea8125f169596f3c1f4cc95ae202d65ddc3c31253.scope - libcontainer container c65fe63dd95f8f1d2bab55cea8125f169596f3c1f4cc95ae202d65ddc3c31253.
Jun 25 18:42:41.409332 systemd[1]: cri-containerd-c65fe63dd95f8f1d2bab55cea8125f169596f3c1f4cc95ae202d65ddc3c31253.scope: Deactivated successfully.
Jun 25 18:42:41.438580 containerd[1443]: time="2024-06-25T18:42:41.438534364Z" level=info msg="StartContainer for \"c65fe63dd95f8f1d2bab55cea8125f169596f3c1f4cc95ae202d65ddc3c31253\" returns successfully"
Jun 25 18:42:41.486746 containerd[1443]: time="2024-06-25T18:42:41.486682920Z" level=info msg="shim disconnected" id=c65fe63dd95f8f1d2bab55cea8125f169596f3c1f4cc95ae202d65ddc3c31253 namespace=k8s.io
Jun 25 18:42:41.486746 containerd[1443]: time="2024-06-25T18:42:41.486737343Z" level=warning msg="cleaning up after shim disconnected" id=c65fe63dd95f8f1d2bab55cea8125f169596f3c1f4cc95ae202d65ddc3c31253 namespace=k8s.io
Jun 25 18:42:41.486746 containerd[1443]: time="2024-06-25T18:42:41.486747983Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 25 18:42:42.051980 kubelet[2512]: E0625 18:42:42.051937 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:42:42.060181 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c65fe63dd95f8f1d2bab55cea8125f169596f3c1f4cc95ae202d65ddc3c31253-rootfs.mount: Deactivated successfully.
Jun 25 18:42:42.130225 kubelet[2512]: E0625 18:42:42.130183 2512 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jun 25 18:42:42.277443 kubelet[2512]: E0625 18:42:42.277406 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:42:42.291658 containerd[1443]: time="2024-06-25T18:42:42.291599499Z" level=info msg="CreateContainer within sandbox \"d69b03f9ce3969e163648cded69f42fbff33ce6530011b5f5b202d7ca700e2ce\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jun 25 18:42:42.382904 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3062737577.mount: Deactivated successfully.
Jun 25 18:42:42.384429 containerd[1443]: time="2024-06-25T18:42:42.384383360Z" level=info msg="CreateContainer within sandbox \"d69b03f9ce3969e163648cded69f42fbff33ce6530011b5f5b202d7ca700e2ce\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9bfa94b235342f3f84377595fd0bc80a4c91fe3a5ad4360f2c4e7b85e0bc95fd\""
Jun 25 18:42:42.385052 containerd[1443]: time="2024-06-25T18:42:42.384944546Z" level=info msg="StartContainer for \"9bfa94b235342f3f84377595fd0bc80a4c91fe3a5ad4360f2c4e7b85e0bc95fd\""
Jun 25 18:42:42.418688 systemd[1]: Started cri-containerd-9bfa94b235342f3f84377595fd0bc80a4c91fe3a5ad4360f2c4e7b85e0bc95fd.scope - libcontainer container 9bfa94b235342f3f84377595fd0bc80a4c91fe3a5ad4360f2c4e7b85e0bc95fd.
Jun 25 18:42:42.447997 containerd[1443]: time="2024-06-25T18:42:42.447933113Z" level=info msg="StartContainer for \"9bfa94b235342f3f84377595fd0bc80a4c91fe3a5ad4360f2c4e7b85e0bc95fd\" returns successfully" Jun 25 18:42:42.448917 systemd[1]: cri-containerd-9bfa94b235342f3f84377595fd0bc80a4c91fe3a5ad4360f2c4e7b85e0bc95fd.scope: Deactivated successfully. Jun 25 18:42:42.475739 containerd[1443]: time="2024-06-25T18:42:42.475663568Z" level=info msg="shim disconnected" id=9bfa94b235342f3f84377595fd0bc80a4c91fe3a5ad4360f2c4e7b85e0bc95fd namespace=k8s.io Jun 25 18:42:42.475739 containerd[1443]: time="2024-06-25T18:42:42.475719153Z" level=warning msg="cleaning up after shim disconnected" id=9bfa94b235342f3f84377595fd0bc80a4c91fe3a5ad4360f2c4e7b85e0bc95fd namespace=k8s.io Jun 25 18:42:42.475739 containerd[1443]: time="2024-06-25T18:42:42.475728642Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:42:43.059816 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9bfa94b235342f3f84377595fd0bc80a4c91fe3a5ad4360f2c4e7b85e0bc95fd-rootfs.mount: Deactivated successfully. 
Jun 25 18:42:43.281032 kubelet[2512]: E0625 18:42:43.280994 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:42:43.283148 containerd[1443]: time="2024-06-25T18:42:43.283098679Z" level=info msg="CreateContainer within sandbox \"d69b03f9ce3969e163648cded69f42fbff33ce6530011b5f5b202d7ca700e2ce\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 25 18:42:43.302654 containerd[1443]: time="2024-06-25T18:42:43.302600265Z" level=info msg="CreateContainer within sandbox \"d69b03f9ce3969e163648cded69f42fbff33ce6530011b5f5b202d7ca700e2ce\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5b5ac39d8ff378960f2c7f89618f907f1c887b5becc94c4acfbae70bfbe6350a\"" Jun 25 18:42:43.303272 containerd[1443]: time="2024-06-25T18:42:43.303227506Z" level=info msg="StartContainer for \"5b5ac39d8ff378960f2c7f89618f907f1c887b5becc94c4acfbae70bfbe6350a\"" Jun 25 18:42:43.341692 systemd[1]: Started cri-containerd-5b5ac39d8ff378960f2c7f89618f907f1c887b5becc94c4acfbae70bfbe6350a.scope - libcontainer container 5b5ac39d8ff378960f2c7f89618f907f1c887b5becc94c4acfbae70bfbe6350a. Jun 25 18:42:43.364991 systemd[1]: cri-containerd-5b5ac39d8ff378960f2c7f89618f907f1c887b5becc94c4acfbae70bfbe6350a.scope: Deactivated successfully. 
Jun 25 18:42:43.366949 containerd[1443]: time="2024-06-25T18:42:43.366907445Z" level=info msg="StartContainer for \"5b5ac39d8ff378960f2c7f89618f907f1c887b5becc94c4acfbae70bfbe6350a\" returns successfully" Jun 25 18:42:43.389350 containerd[1443]: time="2024-06-25T18:42:43.389287869Z" level=info msg="shim disconnected" id=5b5ac39d8ff378960f2c7f89618f907f1c887b5becc94c4acfbae70bfbe6350a namespace=k8s.io Jun 25 18:42:43.389350 containerd[1443]: time="2024-06-25T18:42:43.389346531Z" level=warning msg="cleaning up after shim disconnected" id=5b5ac39d8ff378960f2c7f89618f907f1c887b5becc94c4acfbae70bfbe6350a namespace=k8s.io Jun 25 18:42:43.389590 containerd[1443]: time="2024-06-25T18:42:43.389357231Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:42:44.059694 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b5ac39d8ff378960f2c7f89618f907f1c887b5becc94c4acfbae70bfbe6350a-rootfs.mount: Deactivated successfully. Jun 25 18:42:44.210733 kubelet[2512]: I0625 18:42:44.210685 2512 setters.go:552] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-06-25T18:42:44Z","lastTransitionTime":"2024-06-25T18:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jun 25 18:42:44.285194 kubelet[2512]: E0625 18:42:44.285162 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:42:44.288052 containerd[1443]: time="2024-06-25T18:42:44.287998765Z" level=info msg="CreateContainer within sandbox \"d69b03f9ce3969e163648cded69f42fbff33ce6530011b5f5b202d7ca700e2ce\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 25 18:42:44.304594 containerd[1443]: time="2024-06-25T18:42:44.304542912Z" level=info msg="CreateContainer 
within sandbox \"d69b03f9ce3969e163648cded69f42fbff33ce6530011b5f5b202d7ca700e2ce\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3d3bed9be8dfec6a5d375c31a9ffed3f053575cc4c1460da5c24018a6f9d8b4f\"" Jun 25 18:42:44.305060 containerd[1443]: time="2024-06-25T18:42:44.305002125Z" level=info msg="StartContainer for \"3d3bed9be8dfec6a5d375c31a9ffed3f053575cc4c1460da5c24018a6f9d8b4f\"" Jun 25 18:42:44.337682 systemd[1]: Started cri-containerd-3d3bed9be8dfec6a5d375c31a9ffed3f053575cc4c1460da5c24018a6f9d8b4f.scope - libcontainer container 3d3bed9be8dfec6a5d375c31a9ffed3f053575cc4c1460da5c24018a6f9d8b4f. Jun 25 18:42:44.368559 containerd[1443]: time="2024-06-25T18:42:44.368472772Z" level=info msg="StartContainer for \"3d3bed9be8dfec6a5d375c31a9ffed3f053575cc4c1460da5c24018a6f9d8b4f\" returns successfully" Jun 25 18:42:44.824549 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jun 25 18:42:45.289274 kubelet[2512]: E0625 18:42:45.289137 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:42:46.290587 kubelet[2512]: E0625 18:42:46.290561 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:42:47.886004 systemd-networkd[1368]: lxc_health: Link UP Jun 25 18:42:47.891990 systemd-networkd[1368]: lxc_health: Gained carrier Jun 25 18:42:48.094064 kubelet[2512]: E0625 18:42:48.094017 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:42:48.112540 kubelet[2512]: I0625 18:42:48.111128 2512 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-8fgpl" podStartSLOduration=9.111095384 
podCreationTimestamp="2024-06-25 18:42:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:42:45.301016445 +0000 UTC m=+93.345918730" watchObservedRunningTime="2024-06-25 18:42:48.111095384 +0000 UTC m=+96.155997659" Jun 25 18:42:48.297562 kubelet[2512]: E0625 18:42:48.297349 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:42:48.422872 kubelet[2512]: E0625 18:42:48.422839 2512 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:35218->127.0.0.1:37935: write tcp 127.0.0.1:35218->127.0.0.1:37935: write: broken pipe Jun 25 18:42:49.301411 kubelet[2512]: E0625 18:42:49.301364 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:42:49.452603 systemd-networkd[1368]: lxc_health: Gained IPv6LL Jun 25 18:42:54.755987 sshd[4361]: pam_unix(sshd:session): session closed for user core Jun 25 18:42:54.759834 systemd[1]: sshd@29-10.0.0.103:22-10.0.0.1:44342.service: Deactivated successfully. Jun 25 18:42:54.761960 systemd[1]: session-30.scope: Deactivated successfully. Jun 25 18:42:54.764186 systemd-logind[1428]: Session 30 logged out. Waiting for processes to exit. Jun 25 18:42:54.765410 systemd-logind[1428]: Removed session 30. Jun 25 18:42:55.051900 kubelet[2512]: E0625 18:42:55.051758 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"