Jun 25 18:43:48.904648 kernel: Linux version 6.6.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Tue Jun 25 17:21:28 -00 2024 Jun 25 18:43:48.904669 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8 Jun 25 18:43:48.904680 kernel: BIOS-provided physical RAM map: Jun 25 18:43:48.904687 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jun 25 18:43:48.904693 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jun 25 18:43:48.904699 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jun 25 18:43:48.904707 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdcfff] usable Jun 25 18:43:48.904715 kernel: BIOS-e820: [mem 0x000000009cfdd000-0x000000009cffffff] reserved Jun 25 18:43:48.904721 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jun 25 18:43:48.904729 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jun 25 18:43:48.904736 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jun 25 18:43:48.904742 kernel: NX (Execute Disable) protection: active Jun 25 18:43:48.904748 kernel: APIC: Static calls initialized Jun 25 18:43:48.904754 kernel: SMBIOS 2.8 present. Jun 25 18:43:48.904762 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jun 25 18:43:48.904771 kernel: Hypervisor detected: KVM Jun 25 18:43:48.904778 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jun 25 18:43:48.904784 kernel: kvm-clock: using sched offset of 2276312899 cycles Jun 25 18:43:48.904791 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jun 25 18:43:48.904798 kernel: tsc: Detected 2794.750 MHz processor Jun 25 18:43:48.904805 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 25 18:43:48.904813 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 25 18:43:48.904819 kernel: last_pfn = 0x9cfdd max_arch_pfn = 0x400000000 Jun 25 18:43:48.904826 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jun 25 18:43:48.904836 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 25 18:43:48.904843 kernel: Using GB pages for direct mapping Jun 25 18:43:48.904849 kernel: ACPI: Early table checksum verification disabled Jun 25 18:43:48.904856 kernel: ACPI: RSDP 0x00000000000F59C0 000014 (v00 BOCHS ) Jun 25 18:43:48.904863 kernel: ACPI: RSDT 0x000000009CFE1BDD 000034 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 18:43:48.904870 kernel: ACPI: FACP 0x000000009CFE1A79 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 18:43:48.904877 kernel: ACPI: DSDT 0x000000009CFE0040 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 18:43:48.904884 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jun 25 18:43:48.904891 kernel: ACPI: APIC 0x000000009CFE1AED 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 18:43:48.904900 kernel: ACPI: HPET 0x000000009CFE1B7D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 18:43:48.904907 kernel: ACPI: WAET 0x000000009CFE1BB5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 
18:43:48.904914 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe1a79-0x9cfe1aec] Jun 25 18:43:48.904921 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe1a78] Jun 25 18:43:48.904927 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jun 25 18:43:48.904934 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe1aed-0x9cfe1b7c] Jun 25 18:43:48.904941 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe1b7d-0x9cfe1bb4] Jun 25 18:43:48.904969 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe1bb5-0x9cfe1bdc] Jun 25 18:43:48.904979 kernel: No NUMA configuration found Jun 25 18:43:48.904986 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdcfff] Jun 25 18:43:48.904993 kernel: NODE_DATA(0) allocated [mem 0x9cfd7000-0x9cfdcfff] Jun 25 18:43:48.905000 kernel: Zone ranges: Jun 25 18:43:48.905008 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 25 18:43:48.905015 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdcfff] Jun 25 18:43:48.905025 kernel: Normal empty Jun 25 18:43:48.905032 kernel: Movable zone start for each node Jun 25 18:43:48.905039 kernel: Early memory node ranges Jun 25 18:43:48.905046 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jun 25 18:43:48.905053 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdcfff] Jun 25 18:43:48.905061 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdcfff] Jun 25 18:43:48.905068 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 25 18:43:48.905075 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jun 25 18:43:48.905082 kernel: On node 0, zone DMA32: 12323 pages in unavailable ranges Jun 25 18:43:48.905091 kernel: ACPI: PM-Timer IO Port: 0x608 Jun 25 18:43:48.905099 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jun 25 18:43:48.905106 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jun 25 18:43:48.905113 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jun 25 18:43:48.905120 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jun 25 18:43:48.905127 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jun 25 18:43:48.905141 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jun 25 18:43:48.905148 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jun 25 18:43:48.905156 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 25 18:43:48.905163 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jun 25 18:43:48.905173 kernel: TSC deadline timer available Jun 25 18:43:48.905180 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jun 25 18:43:48.905187 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jun 25 18:43:48.905194 kernel: kvm-guest: KVM setup pv remote TLB flush Jun 25 18:43:48.905201 kernel: kvm-guest: setup PV sched yield Jun 25 18:43:48.905208 kernel: [mem 0x9d000000-0xfeffbfff] available for PCI devices Jun 25 18:43:48.905215 kernel: Booting paravirtualized kernel on KVM Jun 25 18:43:48.905223 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 25 18:43:48.905230 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jun 25 18:43:48.905240 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u524288 Jun 25 18:43:48.905247 kernel: pcpu-alloc: s196904 r8192 d32472 u524288 alloc=1*2097152 Jun 25 18:43:48.905254 kernel: pcpu-alloc: [0] 0 1 2 3 Jun 25 18:43:48.905261 
kernel: kvm-guest: PV spinlocks enabled Jun 25 18:43:48.905268 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jun 25 18:43:48.905276 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8 Jun 25 18:43:48.905284 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 25 18:43:48.905291 kernel: random: crng init done Jun 25 18:43:48.905301 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jun 25 18:43:48.905308 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 25 18:43:48.905315 kernel: Fallback order for Node 0: 0 Jun 25 18:43:48.905323 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632733 Jun 25 18:43:48.905330 kernel: Policy zone: DMA32 Jun 25 18:43:48.905337 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 25 18:43:48.905344 kernel: Memory: 2428448K/2571756K available (12288K kernel code, 2302K rwdata, 22636K rodata, 49384K init, 1964K bss, 143048K reserved, 0K cma-reserved) Jun 25 18:43:48.905352 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jun 25 18:43:48.905359 kernel: ftrace: allocating 37650 entries in 148 pages Jun 25 18:43:48.905368 kernel: ftrace: allocated 148 pages with 3 groups Jun 25 18:43:48.905376 kernel: Dynamic Preempt: voluntary Jun 25 18:43:48.905383 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 25 18:43:48.905390 kernel: rcu: RCU event tracing is enabled. Jun 25 18:43:48.905398 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jun 25 18:43:48.905405 kernel: Trampoline variant of Tasks RCU enabled. Jun 25 18:43:48.905412 kernel: Rude variant of Tasks RCU enabled. Jun 25 18:43:48.905420 kernel: Tracing variant of Tasks RCU enabled. Jun 25 18:43:48.905427 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 25 18:43:48.905436 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jun 25 18:43:48.905444 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jun 25 18:43:48.905451 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 25 18:43:48.905458 kernel: Console: colour VGA+ 80x25 Jun 25 18:43:48.905465 kernel: printk: console [ttyS0] enabled Jun 25 18:43:48.905472 kernel: ACPI: Core revision 20230628 Jun 25 18:43:48.905479 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jun 25 18:43:48.905487 kernel: APIC: Switch to symmetric I/O mode setup Jun 25 18:43:48.905494 kernel: x2apic enabled Jun 25 18:43:48.905503 kernel: APIC: Switched APIC routing to: physical x2apic Jun 25 18:43:48.905510 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jun 25 18:43:48.905518 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jun 25 18:43:48.905525 kernel: kvm-guest: setup PV IPIs Jun 25 18:43:48.905532 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jun 25 18:43:48.905539 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jun 25 18:43:48.905546 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Jun 25 18:43:48.905554 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jun 25 18:43:48.905571 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jun 25 18:43:48.905578 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jun 25 18:43:48.905586 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 25 18:43:48.905601 kernel: Spectre V2 : Mitigation: Retpolines Jun 25 18:43:48.905613 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jun 25 18:43:48.905620 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jun 25 18:43:48.905628 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jun 25 18:43:48.905642 kernel: RETBleed: Mitigation: untrained return thunk Jun 25 18:43:48.905657 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jun 25 18:43:48.905679 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jun 25 18:43:48.905693 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jun 25 18:43:48.905702 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jun 25 18:43:48.905716 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jun 25 18:43:48.905730 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jun 25 18:43:48.905738 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jun 25 18:43:48.905746 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jun 25 18:43:48.905753 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jun 25 18:43:48.905763 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jun 25 18:43:48.905771 kernel: Freeing SMP alternatives memory: 32K Jun 25 18:43:48.905778 kernel: pid_max: default: 32768 minimum: 301 Jun 25 18:43:48.905785 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Jun 25 18:43:48.905793 kernel: SELinux: Initializing. Jun 25 18:43:48.905800 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 25 18:43:48.905808 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 25 18:43:48.905816 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jun 25 18:43:48.905823 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Jun 25 18:43:48.905834 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Jun 25 18:43:48.905841 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Jun 25 18:43:48.905849 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jun 25 18:43:48.905856 kernel: ... version: 0 Jun 25 18:43:48.905864 kernel: ... bit width: 48 Jun 25 18:43:48.905871 kernel: ... generic registers: 6 Jun 25 18:43:48.905879 kernel: ... value mask: 0000ffffffffffff Jun 25 18:43:48.905886 kernel: ... max period: 00007fffffffffff Jun 25 18:43:48.905894 kernel: ... fixed-purpose events: 0 Jun 25 18:43:48.905904 kernel: ... event mask: 000000000000003f Jun 25 18:43:48.905911 kernel: signal: max sigframe size: 1776 Jun 25 18:43:48.905918 kernel: rcu: Hierarchical SRCU implementation. Jun 25 18:43:48.905926 kernel: rcu: Max phase no-delay instances is 400. 
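The BIOS-e820 map near the top of this boot is what the later zone setup and "Memory: 2428448K/2571756K available" accounting are derived from, and it can be totalled straight from the log text. A minimal sketch in Python; the regex and the sample lines are assumptions based on the format printed above, not part of any Flatcar tooling:

import re

# Matches dmesg lines like:
#   BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\S+)")

def usable_bytes(dmesg_text: str) -> int:
    """Sum the sizes of all e820 ranges whose type is 'usable'."""
    total = 0
    for start, end, kind in E820_RE.findall(dmesg_text):
        if kind == "usable":
            total += int(end, 16) - int(start, 16) + 1  # ranges are inclusive
    return total

sample = """\
BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
BIOS-e820: [mem 0x0000000000100000-0x000000009cfdcfff] usable
"""
print(usable_bytes(sample) // 1024, "KiB usable")  # roughly the ~2.5 GB guest above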
Jun 25 18:43:48.905934 kernel: smp: Bringing up secondary CPUs ... Jun 25 18:43:48.905941 kernel: smpboot: x86: Booting SMP configuration: Jun 25 18:43:48.905961 kernel: .... node #0, CPUs: #1 #2 #3 Jun 25 18:43:48.905968 kernel: smp: Brought up 1 node, 4 CPUs Jun 25 18:43:48.905976 kernel: smpboot: Max logical packages: 1 Jun 25 18:43:48.905986 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Jun 25 18:43:48.905993 kernel: devtmpfs: initialized Jun 25 18:43:48.906001 kernel: x86/mm: Memory block size: 128MB Jun 25 18:43:48.906009 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 25 18:43:48.906016 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jun 25 18:43:48.906024 kernel: pinctrl core: initialized pinctrl subsystem Jun 25 18:43:48.906031 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 25 18:43:48.906039 kernel: audit: initializing netlink subsys (disabled) Jun 25 18:43:48.906046 kernel: audit: type=2000 audit(1719341028.184:1): state=initialized audit_enabled=0 res=1 Jun 25 18:43:48.906056 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 25 18:43:48.906064 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 25 18:43:48.906071 kernel: cpuidle: using governor menu Jun 25 18:43:48.906079 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 25 18:43:48.906086 kernel: dca service started, version 1.12.1 Jun 25 18:43:48.906094 kernel: PCI: Using configuration type 1 for base access Jun 25 18:43:48.906101 kernel: PCI: Using configuration type 1 for extended access Jun 25 18:43:48.906109 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jun 25 18:43:48.906117 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 25 18:43:48.906126 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jun 25 18:43:48.906140 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 25 18:43:48.906148 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 25 18:43:48.906156 kernel: ACPI: Added _OSI(Module Device) Jun 25 18:43:48.906163 kernel: ACPI: Added _OSI(Processor Device) Jun 25 18:43:48.906171 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jun 25 18:43:48.906178 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 25 18:43:48.906186 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 25 18:43:48.906193 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jun 25 18:43:48.906204 kernel: ACPI: Interpreter enabled Jun 25 18:43:48.906211 kernel: ACPI: PM: (supports S0 S3 S5) Jun 25 18:43:48.906218 kernel: ACPI: Using IOAPIC for interrupt routing Jun 25 18:43:48.906226 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 25 18:43:48.906234 kernel: PCI: Using E820 reservations for host bridge windows Jun 25 18:43:48.906241 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jun 25 18:43:48.906249 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jun 25 18:43:48.906420 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jun 25 18:43:48.906436 kernel: acpiphp: Slot [3] registered Jun 25 18:43:48.906444 kernel: acpiphp: Slot [4] registered Jun 25 18:43:48.906451 kernel: acpiphp: Slot [5] registered Jun 25 18:43:48.906458 kernel: acpiphp: Slot [6] registered Jun 25 18:43:48.906466 kernel: acpiphp: Slot [7] registered Jun 25 
18:43:48.906473 kernel: acpiphp: Slot [8] registered Jun 25 18:43:48.906480 kernel: acpiphp: Slot [9] registered Jun 25 18:43:48.906488 kernel: acpiphp: Slot [10] registered Jun 25 18:43:48.906495 kernel: acpiphp: Slot [11] registered Jun 25 18:43:48.906503 kernel: acpiphp: Slot [12] registered Jun 25 18:43:48.906513 kernel: acpiphp: Slot [13] registered Jun 25 18:43:48.906520 kernel: acpiphp: Slot [14] registered Jun 25 18:43:48.906527 kernel: acpiphp: Slot [15] registered Jun 25 18:43:48.906535 kernel: acpiphp: Slot [16] registered Jun 25 18:43:48.906542 kernel: acpiphp: Slot [17] registered Jun 25 18:43:48.906549 kernel: acpiphp: Slot [18] registered Jun 25 18:43:48.906557 kernel: acpiphp: Slot [19] registered Jun 25 18:43:48.906564 kernel: acpiphp: Slot [20] registered Jun 25 18:43:48.906572 kernel: acpiphp: Slot [21] registered Jun 25 18:43:48.906581 kernel: acpiphp: Slot [22] registered Jun 25 18:43:48.906589 kernel: acpiphp: Slot [23] registered Jun 25 18:43:48.906596 kernel: acpiphp: Slot [24] registered Jun 25 18:43:48.906604 kernel: acpiphp: Slot [25] registered Jun 25 18:43:48.906611 kernel: acpiphp: Slot [26] registered Jun 25 18:43:48.906618 kernel: acpiphp: Slot [27] registered Jun 25 18:43:48.906625 kernel: acpiphp: Slot [28] registered Jun 25 18:43:48.906633 kernel: acpiphp: Slot [29] registered Jun 25 18:43:48.906640 kernel: acpiphp: Slot [30] registered Jun 25 18:43:48.906647 kernel: acpiphp: Slot [31] registered Jun 25 18:43:48.906658 kernel: PCI host bridge to bus 0000:00 Jun 25 18:43:48.906790 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jun 25 18:43:48.906991 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jun 25 18:43:48.907121 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jun 25 18:43:48.907269 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window] Jun 25 18:43:48.907397 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Jun 25 18:43:48.907507 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jun 25 18:43:48.907652 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jun 25 18:43:48.907784 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jun 25 18:43:48.907912 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jun 25 18:43:48.908056 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf] Jun 25 18:43:48.908185 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jun 25 18:43:48.908305 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jun 25 18:43:48.908430 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jun 25 18:43:48.908550 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jun 25 18:43:48.908723 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jun 25 18:43:48.908874 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jun 25 18:43:48.909029 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jun 25 18:43:48.909166 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000 Jun 25 18:43:48.909292 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Jun 25 18:43:48.909411 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Jun 25 18:43:48.909559 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Jun 25 18:43:48.909732 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jun 25 18:43:48.909876 
kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00 Jun 25 18:43:48.910012 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc09f] Jun 25 18:43:48.910142 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Jun 25 18:43:48.910269 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Jun 25 18:43:48.910397 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Jun 25 18:43:48.910516 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Jun 25 18:43:48.910635 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Jun 25 18:43:48.910754 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Jun 25 18:43:48.910883 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000 Jun 25 18:43:48.911075 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0a0-0xc0bf] Jun 25 18:43:48.911212 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Jun 25 18:43:48.911330 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Jun 25 18:43:48.911447 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Jun 25 18:43:48.911457 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jun 25 18:43:48.911465 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jun 25 18:43:48.911473 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jun 25 18:43:48.911480 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jun 25 18:43:48.911488 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jun 25 18:43:48.911496 kernel: iommu: Default domain type: Translated Jun 25 18:43:48.911507 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 25 18:43:48.911515 kernel: PCI: Using ACPI for IRQ routing Jun 25 18:43:48.911522 kernel: PCI: pci_cache_line_size set to 64 bytes Jun 25 18:43:48.911530 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jun 25 18:43:48.911537 kernel: e820: reserve RAM buffer [mem 0x9cfdd000-0x9fffffff] Jun 25 18:43:48.911655 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jun 25 18:43:48.911777 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jun 25 18:43:48.911893 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jun 25 18:43:48.911907 kernel: vgaarb: loaded Jun 25 18:43:48.911915 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jun 25 18:43:48.911922 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jun 25 18:43:48.911930 kernel: clocksource: Switched to clocksource kvm-clock Jun 25 18:43:48.911937 kernel: VFS: Disk quotas dquot_6.6.0 Jun 25 18:43:48.911959 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 25 18:43:48.911971 kernel: pnp: PnP ACPI init Jun 25 18:43:48.912101 kernel: pnp 00:02: [dma 2] Jun 25 18:43:48.912116 kernel: pnp: PnP ACPI: found 6 devices Jun 25 18:43:48.912124 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jun 25 18:43:48.912139 kernel: NET: Registered PF_INET protocol family Jun 25 18:43:48.912147 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jun 25 18:43:48.912155 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jun 25 18:43:48.912162 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 25 18:43:48.912170 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 25 18:43:48.912178 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 
bytes, linear) Jun 25 18:43:48.912186 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jun 25 18:43:48.912196 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 25 18:43:48.912204 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 25 18:43:48.912211 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 25 18:43:48.912219 kernel: NET: Registered PF_XDP protocol family Jun 25 18:43:48.912329 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jun 25 18:43:48.912437 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jun 25 18:43:48.912546 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jun 25 18:43:48.912655 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window] Jun 25 18:43:48.912763 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Jun 25 18:43:48.912896 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jun 25 18:43:48.913067 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jun 25 18:43:48.913079 kernel: PCI: CLS 0 bytes, default 64 Jun 25 18:43:48.913086 kernel: Initialise system trusted keyrings Jun 25 18:43:48.913094 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jun 25 18:43:48.913102 kernel: Key type asymmetric registered Jun 25 18:43:48.913109 kernel: Asymmetric key parser 'x509' registered Jun 25 18:43:48.913117 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jun 25 18:43:48.913128 kernel: io scheduler mq-deadline registered Jun 25 18:43:48.913143 kernel: io scheduler kyber registered Jun 25 18:43:48.913151 kernel: io scheduler bfq registered Jun 25 18:43:48.913159 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 25 18:43:48.913167 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jun 25 18:43:48.913174 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Jun 25 18:43:48.913182 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jun 25 18:43:48.913190 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 25 18:43:48.913197 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 25 18:43:48.913208 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jun 25 18:43:48.913215 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jun 25 18:43:48.913223 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jun 25 18:43:48.913347 kernel: rtc_cmos 00:05: RTC can wake from S4 Jun 25 18:43:48.913358 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jun 25 18:43:48.913468 kernel: rtc_cmos 00:05: registered as rtc0 Jun 25 18:43:48.913580 kernel: rtc_cmos 00:05: setting system clock to 2024-06-25T18:43:48 UTC (1719341028) Jun 25 18:43:48.913694 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jun 25 18:43:48.913707 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jun 25 18:43:48.913715 kernel: NET: Registered PF_INET6 protocol family Jun 25 18:43:48.913723 kernel: Segment Routing with IPv6 Jun 25 18:43:48.913732 kernel: In-situ OAM (IOAM) with IPv6 Jun 25 18:43:48.913740 kernel: NET: Registered PF_PACKET protocol family Jun 25 18:43:48.913748 kernel: Key type dns_resolver registered Jun 25 18:43:48.913755 kernel: IPI shorthand broadcast: enabled Jun 25 18:43:48.913763 kernel: sched_clock: Marking stable (722002805, 153488375)->(891960265, -16469085) Jun 25 18:43:48.913770 kernel: registered taskstats version 1 Jun 25 
18:43:48.913780 kernel: Loading compiled-in X.509 certificates Jun 25 18:43:48.913788 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.35-flatcar: 60204e9db5f484c670a1c92aec37e9a0c4d3ae90' Jun 25 18:43:48.913796 kernel: Key type .fscrypt registered Jun 25 18:43:48.913803 kernel: Key type fscrypt-provisioning registered Jun 25 18:43:48.913810 kernel: ima: No TPM chip found, activating TPM-bypass! Jun 25 18:43:48.913818 kernel: ima: Allocated hash algorithm: sha1 Jun 25 18:43:48.913825 kernel: ima: No architecture policies found Jun 25 18:43:48.913833 kernel: clk: Disabling unused clocks Jun 25 18:43:48.913840 kernel: Freeing unused kernel image (initmem) memory: 49384K Jun 25 18:43:48.913851 kernel: Write protecting the kernel read-only data: 36864k Jun 25 18:43:48.913858 kernel: Freeing unused kernel image (rodata/data gap) memory: 1940K Jun 25 18:43:48.913866 kernel: Run /init as init process Jun 25 18:43:48.913873 kernel: with arguments: Jun 25 18:43:48.913881 kernel: /init Jun 25 18:43:48.913888 kernel: with environment: Jun 25 18:43:48.913895 kernel: HOME=/ Jun 25 18:43:48.913919 kernel: TERM=linux Jun 25 18:43:48.913929 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 25 18:43:48.913941 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jun 25 18:43:48.913964 systemd[1]: Detected virtualization kvm. Jun 25 18:43:48.913972 systemd[1]: Detected architecture x86-64. Jun 25 18:43:48.913981 systemd[1]: Running in initrd. Jun 25 18:43:48.913989 systemd[1]: No hostname configured, using default hostname. Jun 25 18:43:48.913996 systemd[1]: Hostname set to . Jun 25 18:43:48.914008 systemd[1]: Initializing machine ID from VM UUID. Jun 25 18:43:48.914016 systemd[1]: Queued start job for default target initrd.target. Jun 25 18:43:48.914025 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:43:48.914033 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:43:48.914042 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 25 18:43:48.914051 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 25 18:43:48.914059 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 25 18:43:48.914068 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 25 18:43:48.914080 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 25 18:43:48.914088 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 25 18:43:48.914097 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:43:48.914105 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:43:48.914113 systemd[1]: Reached target paths.target - Path Units. Jun 25 18:43:48.914122 systemd[1]: Reached target slices.target - Slice Units. Jun 25 18:43:48.914137 systemd[1]: Reached target swap.target - Swaps. 
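The kernel command line shown twice above (and exported to /init as BOOT_IMAGE in the environment dump) is a flat sequence of flags and key=value pairs that dracut and Ignition re-read later in this same boot. A minimal sketch of splitting it, assuming plain whitespace separation; the real kernel parser also handles quoting and repeated keys, which this ignores:

def parse_cmdline(cmdline: str) -> dict:
    """Split a kernel command line into key=value entries (bare flags map to None)."""
    args = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        args[key] = value if sep else None
    return args

# Values taken from the command line logged above.
args = parse_cmdline(
    "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr mount.usrflags=ro "
    "root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected"
)
print(args["root"])     # LABEL=ROOT
print(args["console"])  # ttyS0,115200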
Jun 25 18:43:48.914149 systemd[1]: Reached target timers.target - Timer Units. Jun 25 18:43:48.914157 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 18:43:48.914165 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 18:43:48.914174 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 25 18:43:48.914182 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jun 25 18:43:48.914191 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:43:48.914199 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 18:43:48.914207 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:43:48.914216 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 18:43:48.914226 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 25 18:43:48.914235 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 18:43:48.914243 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 25 18:43:48.914251 systemd[1]: Starting systemd-fsck-usr.service... Jun 25 18:43:48.914260 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 18:43:48.914270 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 18:43:48.914279 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:43:48.914287 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 25 18:43:48.914296 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:43:48.914304 systemd[1]: Finished systemd-fsck-usr.service. Jun 25 18:43:48.914330 systemd-journald[192]: Collecting audit messages is disabled. Jun 25 18:43:48.914351 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 25 18:43:48.914360 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 18:43:48.914368 systemd-journald[192]: Journal started Jun 25 18:43:48.914389 systemd-journald[192]: Runtime Journal (/run/log/journal/adc17cb9c92a401fa1d4c313f24d0799) is 6.0M, max 48.4M, 42.3M free. Jun 25 18:43:48.907337 systemd-modules-load[193]: Inserted module 'overlay' Jun 25 18:43:48.942830 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 18:43:48.942866 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 25 18:43:48.943332 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:43:48.946342 systemd-modules-load[193]: Inserted module 'br_netfilter' Jun 25 18:43:48.947322 kernel: Bridge firewalling registered Jun 25 18:43:48.947658 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 18:43:48.963219 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:43:48.966822 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:43:48.969568 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 18:43:48.974238 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 18:43:48.980048 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jun 25 18:43:48.983741 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:43:48.986191 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:43:48.997084 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 25 18:43:49.007096 dracut-cmdline[225]: dracut-dracut-053 Jun 25 18:43:49.009475 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8 Jun 25 18:43:49.017740 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:43:49.031090 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 18:43:49.059839 systemd-resolved[253]: Positive Trust Anchors: Jun 25 18:43:49.059851 systemd-resolved[253]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 18:43:49.059881 systemd-resolved[253]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jun 25 18:43:49.062461 systemd-resolved[253]: Defaulting to hostname 'linux'. Jun 25 18:43:49.063473 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 18:43:49.069916 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:43:49.093986 kernel: SCSI subsystem initialized Jun 25 18:43:49.103983 kernel: Loading iSCSI transport class v2.0-870. Jun 25 18:43:49.116970 kernel: iscsi: registered transport (tcp) Jun 25 18:43:49.143971 kernel: iscsi: registered transport (qla4xxx) Jun 25 18:43:49.143991 kernel: QLogic iSCSI HBA Driver Jun 25 18:43:49.187963 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 25 18:43:49.195063 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 25 18:43:49.222146 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 25 18:43:49.222173 kernel: device-mapper: uevent: version 1.0.3 Jun 25 18:43:49.223205 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jun 25 18:43:49.269990 kernel: raid6: avx2x4 gen() 30212 MB/s Jun 25 18:43:49.286969 kernel: raid6: avx2x2 gen() 31231 MB/s Jun 25 18:43:49.304107 kernel: raid6: avx2x1 gen() 25643 MB/s Jun 25 18:43:49.304145 kernel: raid6: using algorithm avx2x2 gen() 31231 MB/s Jun 25 18:43:49.330970 kernel: raid6: .... 
xor() 19727 MB/s, rmw enabled Jun 25 18:43:49.331006 kernel: raid6: using avx2x2 recovery algorithm Jun 25 18:43:49.354986 kernel: xor: automatically using best checksumming function avx Jun 25 18:43:49.523983 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 25 18:43:49.536025 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 25 18:43:49.549059 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:43:49.560618 systemd-udevd[415]: Using default interface naming scheme 'v255'. Jun 25 18:43:49.564731 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:43:49.587104 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 25 18:43:49.602176 dracut-pre-trigger[427]: rd.md=0: removing MD RAID activation Jun 25 18:43:49.632152 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 18:43:49.646089 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 18:43:49.709664 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:43:49.715530 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 25 18:43:49.740027 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jun 25 18:43:49.755504 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jun 25 18:43:49.755666 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 25 18:43:49.755683 kernel: GPT:9289727 != 19775487 Jun 25 18:43:49.755698 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 25 18:43:49.755712 kernel: GPT:9289727 != 19775487 Jun 25 18:43:49.755723 kernel: GPT: Use GNU Parted to correct GPT errors. Jun 25 18:43:49.755742 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 18:43:49.740365 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 25 18:43:49.742788 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 18:43:49.746295 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:43:49.748088 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 18:43:49.762315 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 25 18:43:49.774036 kernel: libata version 3.00 loaded. Jun 25 18:43:49.778069 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 25 18:43:49.780829 kernel: ata_piix 0000:00:01.1: version 2.13 Jun 25 18:43:49.790464 kernel: cryptd: max_cpu_qlen set to 1000 Jun 25 18:43:49.790481 kernel: scsi host0: ata_piix Jun 25 18:43:49.790648 kernel: scsi host1: ata_piix Jun 25 18:43:49.790789 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Jun 25 18:43:49.790800 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Jun 25 18:43:49.786240 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 18:43:49.786358 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:43:49.790556 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:43:49.805213 kernel: AVX2 version of gcm_enc/dec engaged. Jun 25 18:43:49.792043 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:43:49.792220 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
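The GPT warnings above ("GPT:9289727 != 19775487") say that the backup-header location recorded in the primary GPT no longer matches the last sector of /dev/vda, which is what you see when an image is grown after partitioning. A minimal sketch of the same check, assuming 512-byte sectors and the standard GPT header layout at LBA 1; this is illustrative, not how the kernel or GNU Parted implements it:

import struct

SECTOR = 512

def gpt_backup_mismatch(path: str) -> bool:
    """True if the primary GPT's alternate-header LBA is not the disk's last sector."""
    with open(path, "rb") as disk:
        disk.seek(0, 2)
        last_lba = disk.tell() // SECTOR - 1
        disk.seek(1 * SECTOR)              # primary GPT header lives at LBA 1
        header = disk.read(92)
    if header[:8] != b"EFI PART":
        raise ValueError("no GPT signature at LBA 1")
    alternate_lba = struct.unpack_from("<Q", header, 32)[0]  # offset 32: backup header LBA
    return alternate_lba != last_lba

# gpt_backup_mismatch("/dev/vda") would flag the 9289727 != 19775487 case reported above.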
Jun 25 18:43:49.812312 kernel: AES CTR mode by8 optimization enabled Jun 25 18:43:49.812330 kernel: BTRFS: device fsid 329ce27e-ea89-47b5-8f8b-f762c8412eb0 devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (462) Jun 25 18:43:49.812366 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (464) Jun 25 18:43:49.796402 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:43:49.814217 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:43:49.830826 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jun 25 18:43:49.862212 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:43:49.868470 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jun 25 18:43:49.873073 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 25 18:43:49.876848 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jun 25 18:43:49.878167 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jun 25 18:43:49.898070 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 25 18:43:49.899814 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:43:49.919965 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:43:49.946000 kernel: ata2: found unknown device (class 0) Jun 25 18:43:49.947033 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jun 25 18:43:49.948963 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jun 25 18:43:49.995019 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jun 25 18:43:50.012091 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jun 25 18:43:50.012121 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 18:43:50.012132 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 18:43:50.012143 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Jun 25 18:43:50.012310 disk-uuid[546]: Primary Header is updated. Jun 25 18:43:50.012310 disk-uuid[546]: Secondary Entries is updated. Jun 25 18:43:50.012310 disk-uuid[546]: Secondary Header is updated. Jun 25 18:43:51.034526 disk-uuid[567]: The operation has completed successfully. Jun 25 18:43:51.035881 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 18:43:51.063489 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 25 18:43:51.063608 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 25 18:43:51.086164 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 25 18:43:51.089681 sh[583]: Success Jun 25 18:43:51.102979 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jun 25 18:43:51.135044 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 25 18:43:51.145581 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 25 18:43:51.149667 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jun 25 18:43:51.159517 kernel: BTRFS info (device dm-0): first mount of filesystem 329ce27e-ea89-47b5-8f8b-f762c8412eb0 Jun 25 18:43:51.159550 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:43:51.159561 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 25 18:43:51.160557 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 25 18:43:51.161308 kernel: BTRFS info (device dm-0): using free space tree Jun 25 18:43:51.165782 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 25 18:43:51.166506 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 25 18:43:51.174122 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 25 18:43:51.176396 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 25 18:43:51.185281 kernel: BTRFS info (device vda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:43:51.185322 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:43:51.185338 kernel: BTRFS info (device vda6): using free space tree Jun 25 18:43:51.188978 kernel: BTRFS info (device vda6): auto enabling async discard Jun 25 18:43:51.197839 systemd[1]: mnt-oem.mount: Deactivated successfully. Jun 25 18:43:51.199903 kernel: BTRFS info (device vda6): last unmount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:43:51.210774 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 25 18:43:51.218153 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 25 18:43:51.274100 ignition[674]: Ignition 2.19.0 Jun 25 18:43:51.274111 ignition[674]: Stage: fetch-offline Jun 25 18:43:51.274157 ignition[674]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:43:51.274167 ignition[674]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 18:43:51.274274 ignition[674]: parsed url from cmdline: "" Jun 25 18:43:51.274279 ignition[674]: no config URL provided Jun 25 18:43:51.274285 ignition[674]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 18:43:51.274296 ignition[674]: no config at "/usr/lib/ignition/user.ign" Jun 25 18:43:51.274328 ignition[674]: op(1): [started] loading QEMU firmware config module Jun 25 18:43:51.274335 ignition[674]: op(1): executing: "modprobe" "qemu_fw_cfg" Jun 25 18:43:51.285656 ignition[674]: op(1): [finished] loading QEMU firmware config module Jun 25 18:43:51.289653 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 18:43:51.298107 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 18:43:51.319608 systemd-networkd[772]: lo: Link UP Jun 25 18:43:51.319617 systemd-networkd[772]: lo: Gained carrier Jun 25 18:43:51.321232 systemd-networkd[772]: Enumeration completed Jun 25 18:43:51.321694 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:43:51.321699 systemd-networkd[772]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 18:43:51.321794 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jun 25 18:43:51.322858 systemd-networkd[772]: eth0: Link UP Jun 25 18:43:51.322862 systemd-networkd[772]: eth0: Gained carrier Jun 25 18:43:51.322871 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:43:51.323875 systemd[1]: Reached target network.target - Network. Jun 25 18:43:51.335176 ignition[674]: parsing config with SHA512: bf9e2f4304d52e163773b32c5a6ada2f5ed81c0037612a4e83ae1188a8adbc546bedb2655ce0600441dbcba92a32e26a62e51d5ef8adbde1054ce30b99e32512 Jun 25 18:43:51.339305 unknown[674]: fetched base config from "system" Jun 25 18:43:51.339617 unknown[674]: fetched user config from "qemu" Jun 25 18:43:51.340229 ignition[674]: fetch-offline: fetch-offline passed Jun 25 18:43:51.340303 ignition[674]: Ignition finished successfully Jun 25 18:43:51.342132 systemd-networkd[772]: eth0: DHCPv4 address 10.0.0.136/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jun 25 18:43:51.342534 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 18:43:51.344303 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jun 25 18:43:51.351086 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 25 18:43:51.365685 ignition[776]: Ignition 2.19.0 Jun 25 18:43:51.365703 ignition[776]: Stage: kargs Jun 25 18:43:51.365927 ignition[776]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:43:51.365954 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 18:43:51.367087 ignition[776]: kargs: kargs passed Jun 25 18:43:51.370743 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 25 18:43:51.367143 ignition[776]: Ignition finished successfully Jun 25 18:43:51.380098 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 25 18:43:51.391743 ignition[785]: Ignition 2.19.0 Jun 25 18:43:51.391753 ignition[785]: Stage: disks Jun 25 18:43:51.391904 ignition[785]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:43:51.391915 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 18:43:51.394579 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 25 18:43:51.392685 ignition[785]: disks: disks passed Jun 25 18:43:51.396510 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 25 18:43:51.392726 ignition[785]: Ignition finished successfully Jun 25 18:43:51.398403 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 18:43:51.399620 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 18:43:51.401220 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 18:43:51.403269 systemd[1]: Reached target basic.target - Basic System. Jun 25 18:43:51.414088 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 25 18:43:51.426641 systemd-fsck[796]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jun 25 18:43:51.433380 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 25 18:43:51.453013 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 25 18:43:51.546963 kernel: EXT4-fs (vda9): mounted filesystem ed685e11-963b-427a-9b96-a4691c40e909 r/w with ordered data mode. Quota mode: none. Jun 25 18:43:51.546986 systemd[1]: Mounted sysroot.mount - /sysroot. 
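Each Ignition stage in this boot (fetch-offline, kargs and disks above, mount and files below) announces itself with a "Stage:" line and ends with "Ignition finished successfully", so a per-stage summary can be pulled out of the journal text directly. A minimal sketch, assuming one journal entry per line as journalctl prints it; the regexes are based only on the message format visible in this log:

import re

STAGE_RE = re.compile(r"ignition\[\d+\]:.*Stage: ([\w-]+)")
DONE_RE  = re.compile(r"ignition\[\d+\]:.*Ignition finished successfully")

def ignition_stage_summary(journal_text: str) -> dict:
    """Map each Ignition stage name to True once its 'finished successfully' line appears."""
    stages, current = {}, None
    for line in journal_text.splitlines():
        started = STAGE_RE.search(line)
        if started:
            current = started.group(1)
            stages[current] = False
        elif current and DONE_RE.search(line):
            stages[current] = True
    return stages

# e.g. {'fetch-offline': True, 'kargs': True, 'disks': True, ...} for the boot above.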
Jun 25 18:43:51.549144 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 25 18:43:51.562006 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 18:43:51.564497 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 25 18:43:51.567233 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jun 25 18:43:51.567283 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 25 18:43:51.569160 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 18:43:51.570963 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (804) Jun 25 18:43:51.574580 kernel: BTRFS info (device vda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:43:51.574602 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:43:51.574612 kernel: BTRFS info (device vda6): using free space tree Jun 25 18:43:51.577964 kernel: BTRFS info (device vda6): auto enabling async discard Jun 25 18:43:51.579516 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 25 18:43:51.581402 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 18:43:51.584675 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 25 18:43:51.620815 initrd-setup-root[828]: cut: /sysroot/etc/passwd: No such file or directory Jun 25 18:43:51.624533 initrd-setup-root[835]: cut: /sysroot/etc/group: No such file or directory Jun 25 18:43:51.629141 initrd-setup-root[842]: cut: /sysroot/etc/shadow: No such file or directory Jun 25 18:43:51.633335 initrd-setup-root[849]: cut: /sysroot/etc/gshadow: No such file or directory Jun 25 18:43:51.706367 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 25 18:43:51.718104 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 25 18:43:51.719982 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 25 18:43:51.730991 kernel: BTRFS info (device vda6): last unmount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:43:51.750281 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 25 18:43:51.758007 ignition[918]: INFO : Ignition 2.19.0 Jun 25 18:43:51.758007 ignition[918]: INFO : Stage: mount Jun 25 18:43:51.760035 ignition[918]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:43:51.760035 ignition[918]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 18:43:51.760035 ignition[918]: INFO : mount: mount passed Jun 25 18:43:51.760035 ignition[918]: INFO : Ignition finished successfully Jun 25 18:43:51.766185 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 25 18:43:51.778055 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 25 18:43:52.159495 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 25 18:43:52.172164 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jun 25 18:43:52.193976 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (932) Jun 25 18:43:52.194013 kernel: BTRFS info (device vda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:43:52.196495 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:43:52.196522 kernel: BTRFS info (device vda6): using free space tree Jun 25 18:43:52.198971 kernel: BTRFS info (device vda6): auto enabling async discard Jun 25 18:43:52.200500 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 18:43:52.228930 ignition[949]: INFO : Ignition 2.19.0 Jun 25 18:43:52.228930 ignition[949]: INFO : Stage: files Jun 25 18:43:52.230708 ignition[949]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:43:52.230708 ignition[949]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 18:43:52.240179 ignition[949]: DEBUG : files: compiled without relabeling support, skipping Jun 25 18:43:52.241835 ignition[949]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 25 18:43:52.241835 ignition[949]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 25 18:43:52.246018 ignition[949]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 25 18:43:52.247589 ignition[949]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 25 18:43:52.249328 unknown[949]: wrote ssh authorized keys file for user: core Jun 25 18:43:52.250579 ignition[949]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 25 18:43:52.252889 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 18:43:52.254884 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jun 25 18:43:52.296974 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 25 18:43:52.375678 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 18:43:52.378324 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 25 18:43:52.378324 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jun 25 18:43:52.401131 systemd-networkd[772]: eth0: Gained IPv6LL Jun 25 18:43:52.844238 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jun 25 18:43:52.944281 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 25 18:43:52.944281 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jun 25 18:43:52.948402 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jun 25 18:43:52.948402 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 25 18:43:52.948402 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 25 
18:43:52.948402 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 18:43:52.948402 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 18:43:52.948402 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 18:43:52.948402 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 18:43:52.948402 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 18:43:52.948402 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 18:43:52.948402 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jun 25 18:43:52.948402 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jun 25 18:43:52.948402 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jun 25 18:43:52.948402 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jun 25 18:43:53.243998 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jun 25 18:43:53.665461 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jun 25 18:43:53.665461 ignition[949]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jun 25 18:43:53.673044 ignition[949]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 18:43:53.675214 ignition[949]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 18:43:53.675214 ignition[949]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jun 25 18:43:53.675214 ignition[949]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jun 25 18:43:53.675214 ignition[949]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jun 25 18:43:53.675214 ignition[949]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jun 25 18:43:53.675214 ignition[949]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jun 25 18:43:53.675214 ignition[949]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jun 25 18:43:53.702061 ignition[949]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jun 25 18:43:53.706484 ignition[949]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for 
"coreos-metadata.service" Jun 25 18:43:53.708446 ignition[949]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jun 25 18:43:53.708446 ignition[949]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jun 25 18:43:53.711761 ignition[949]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jun 25 18:43:53.713244 ignition[949]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 25 18:43:53.715053 ignition[949]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 25 18:43:53.716758 ignition[949]: INFO : files: files passed Jun 25 18:43:53.717646 ignition[949]: INFO : Ignition finished successfully Jun 25 18:43:53.720325 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 25 18:43:53.731146 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 25 18:43:53.734161 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 25 18:43:53.737407 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 25 18:43:53.737534 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 25 18:43:53.743449 initrd-setup-root-after-ignition[977]: grep: /sysroot/oem/oem-release: No such file or directory Jun 25 18:43:53.746777 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:43:53.746777 initrd-setup-root-after-ignition[979]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:43:53.750598 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:43:53.752543 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 18:43:53.755703 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 25 18:43:53.768090 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 25 18:43:53.795232 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 25 18:43:53.796309 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 25 18:43:53.798990 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 25 18:43:53.801149 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 25 18:43:53.803294 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 25 18:43:53.805481 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 25 18:43:53.822478 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 18:43:53.834081 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 25 18:43:53.844967 systemd[1]: Stopped target network.target - Network. Jun 25 18:43:53.846763 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:43:53.849125 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:43:53.851528 systemd[1]: Stopped target timers.target - Timer Units. Jun 25 18:43:53.853408 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Jun 25 18:43:53.854445 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 18:43:53.857046 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 25 18:43:53.859146 systemd[1]: Stopped target basic.target - Basic System. Jun 25 18:43:53.861001 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 25 18:43:53.863219 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 18:43:53.865550 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 25 18:43:53.867824 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 25 18:43:53.869969 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 18:43:53.872569 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 25 18:43:53.874748 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 25 18:43:53.876870 systemd[1]: Stopped target swap.target - Swaps. Jun 25 18:43:53.878528 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 25 18:43:53.879548 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 25 18:43:53.881817 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:43:53.884041 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:43:53.886509 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 25 18:43:53.887511 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:43:53.890153 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 25 18:43:53.891157 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 25 18:43:53.893398 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 25 18:43:53.894476 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 18:43:53.896884 systemd[1]: Stopped target paths.target - Path Units. Jun 25 18:43:53.898693 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 25 18:43:53.903057 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:43:53.905899 systemd[1]: Stopped target slices.target - Slice Units. Jun 25 18:43:53.907799 systemd[1]: Stopped target sockets.target - Socket Units. Jun 25 18:43:53.909735 systemd[1]: iscsid.socket: Deactivated successfully. Jun 25 18:43:53.910694 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 18:43:53.912811 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 25 18:43:53.913749 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 18:43:53.915856 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 25 18:43:53.917081 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 18:43:53.919718 systemd[1]: ignition-files.service: Deactivated successfully. Jun 25 18:43:53.920732 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 25 18:43:53.934117 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 25 18:43:53.936038 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 25 18:43:53.937127 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:43:53.940760 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Jun 25 18:43:53.943264 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 25 18:43:53.946171 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 25 18:43:53.947816 systemd-networkd[772]: eth0: DHCPv6 lease lost Jun 25 18:43:53.950155 ignition[1003]: INFO : Ignition 2.19.0 Jun 25 18:43:53.950155 ignition[1003]: INFO : Stage: umount Jun 25 18:43:53.950155 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:43:53.950155 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 18:43:53.949060 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 25 18:43:53.957503 ignition[1003]: INFO : umount: umount passed Jun 25 18:43:53.957503 ignition[1003]: INFO : Ignition finished successfully Jun 25 18:43:53.950172 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:43:53.956072 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 25 18:43:53.958394 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 18:43:53.966612 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 25 18:43:53.967819 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 25 18:43:53.972421 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 25 18:43:53.973917 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 25 18:43:53.974987 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 25 18:43:53.977898 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 25 18:43:53.978926 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 25 18:43:53.983097 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 25 18:43:53.983210 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 25 18:43:53.988348 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 25 18:43:53.988396 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:43:53.991677 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 25 18:43:53.991745 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 25 18:43:53.994968 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 25 18:43:53.995043 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 25 18:43:53.998180 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 25 18:43:53.999214 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 25 18:43:54.001524 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 25 18:43:54.001581 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 25 18:43:54.015132 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 25 18:43:54.015215 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 25 18:43:54.015282 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 18:43:54.021077 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 18:43:54.021129 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:43:54.034943 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 25 18:43:54.035025 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 25 18:43:54.036374 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Jun 25 18:43:54.036433 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:43:54.036791 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:43:54.050502 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 25 18:43:54.050642 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 25 18:43:54.063845 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 25 18:43:54.064049 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:43:54.067238 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 25 18:43:54.067285 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 25 18:43:54.068378 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 25 18:43:54.068415 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:43:54.068732 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 25 18:43:54.068778 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 25 18:43:54.069630 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 25 18:43:54.069673 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 25 18:43:54.088370 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 18:43:54.088424 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:43:54.104127 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 25 18:43:54.104207 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 25 18:43:54.104262 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:43:54.104636 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:43:54.104691 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:43:54.120781 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 25 18:43:54.120983 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 25 18:43:54.141985 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 25 18:43:54.142142 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 25 18:43:54.143520 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 25 18:43:54.146164 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 25 18:43:54.146221 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 25 18:43:54.164222 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 25 18:43:54.172182 systemd[1]: Switching root. Jun 25 18:43:54.200586 systemd-journald[192]: Journal stopped Jun 25 18:43:55.324544 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). 
Jun 25 18:43:55.324606 kernel: SELinux: policy capability network_peer_controls=1 Jun 25 18:43:55.324622 kernel: SELinux: policy capability open_perms=1 Jun 25 18:43:55.324638 kernel: SELinux: policy capability extended_socket_class=1 Jun 25 18:43:55.324653 kernel: SELinux: policy capability always_check_network=0 Jun 25 18:43:55.324664 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 25 18:43:55.324676 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 25 18:43:55.324687 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 25 18:43:55.324698 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 25 18:43:55.324715 kernel: audit: type=1403 audit(1719341034.566:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 25 18:43:55.324727 systemd[1]: Successfully loaded SELinux policy in 41.224ms. Jun 25 18:43:55.324744 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.745ms. Jun 25 18:43:55.324757 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jun 25 18:43:55.324772 systemd[1]: Detected virtualization kvm. Jun 25 18:43:55.324784 systemd[1]: Detected architecture x86-64. Jun 25 18:43:55.324796 systemd[1]: Detected first boot. Jun 25 18:43:55.324813 systemd[1]: Initializing machine ID from VM UUID. Jun 25 18:43:55.324825 zram_generator::config[1047]: No configuration found. Jun 25 18:43:55.324838 systemd[1]: Populated /etc with preset unit settings. Jun 25 18:43:55.324850 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 25 18:43:55.324862 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 25 18:43:55.324882 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 25 18:43:55.324895 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 25 18:43:55.324907 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 25 18:43:55.324919 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 25 18:43:55.324936 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 25 18:43:55.324983 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 25 18:43:55.324996 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 25 18:43:55.325008 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 25 18:43:55.325023 systemd[1]: Created slice user.slice - User and Session Slice. Jun 25 18:43:55.325036 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:43:55.325048 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:43:55.325062 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 25 18:43:55.325074 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 25 18:43:55.325086 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Jun 25 18:43:55.325098 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 25 18:43:55.325110 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jun 25 18:43:55.325122 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:43:55.325137 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 25 18:43:55.325149 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 25 18:43:55.325161 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 25 18:43:55.325173 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 25 18:43:55.325186 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:43:55.325198 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 18:43:55.325210 systemd[1]: Reached target slices.target - Slice Units. Jun 25 18:43:55.325222 systemd[1]: Reached target swap.target - Swaps. Jun 25 18:43:55.325237 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 25 18:43:55.325249 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 25 18:43:55.325261 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:43:55.325273 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 18:43:55.325285 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:43:55.325297 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 25 18:43:55.325309 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 25 18:43:55.325323 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 25 18:43:55.325335 systemd[1]: Mounting media.mount - External Media Directory... Jun 25 18:43:55.325349 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:43:55.325361 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 25 18:43:55.325373 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 25 18:43:55.325385 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 25 18:43:55.325398 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 25 18:43:55.325411 systemd[1]: Reached target machines.target - Containers. Jun 25 18:43:55.325423 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 25 18:43:55.325435 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:43:55.325450 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 18:43:55.325462 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 25 18:43:55.325474 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 18:43:55.325486 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 18:43:55.325498 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 18:43:55.325510 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Jun 25 18:43:55.325522 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 18:43:55.325534 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 25 18:43:55.325547 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 25 18:43:55.325561 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 25 18:43:55.325573 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 25 18:43:55.325587 systemd[1]: Stopped systemd-fsck-usr.service. Jun 25 18:43:55.325599 kernel: fuse: init (API version 7.39) Jun 25 18:43:55.325610 kernel: loop: module loaded Jun 25 18:43:55.325621 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 18:43:55.325634 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 18:43:55.325646 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 25 18:43:55.325658 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 25 18:43:55.325688 systemd-journald[1123]: Collecting audit messages is disabled. Jun 25 18:43:55.325710 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 18:43:55.325722 systemd-journald[1123]: Journal started Jun 25 18:43:55.325744 systemd-journald[1123]: Runtime Journal (/run/log/journal/adc17cb9c92a401fa1d4c313f24d0799) is 6.0M, max 48.4M, 42.3M free. Jun 25 18:43:55.099761 systemd[1]: Queued start job for default target multi-user.target. Jun 25 18:43:55.118606 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jun 25 18:43:55.119053 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 25 18:43:55.330272 systemd[1]: verity-setup.service: Deactivated successfully. Jun 25 18:43:55.330308 systemd[1]: Stopped verity-setup.service. Jun 25 18:43:55.330323 kernel: ACPI: bus type drm_connector registered Jun 25 18:43:55.332846 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:43:55.340116 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 18:43:55.341241 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 25 18:43:55.342919 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 25 18:43:55.344650 systemd[1]: Mounted media.mount - External Media Directory. Jun 25 18:43:55.346227 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 25 18:43:55.347966 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 25 18:43:55.349692 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 25 18:43:55.351489 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 25 18:43:55.353551 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:43:55.355731 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 25 18:43:55.356002 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 25 18:43:55.358188 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:43:55.358435 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:43:55.360468 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jun 25 18:43:55.360705 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 18:43:55.362812 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 18:43:55.363078 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 18:43:55.365231 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 25 18:43:55.365470 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 25 18:43:55.368736 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 18:43:55.368972 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:43:55.370816 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 18:43:55.372414 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 25 18:43:55.374236 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 25 18:43:55.393202 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 25 18:43:55.411080 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 25 18:43:55.414031 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 25 18:43:55.415317 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 25 18:43:55.415345 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 18:43:55.417657 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jun 25 18:43:55.420280 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 25 18:43:55.422557 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 25 18:43:55.423768 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:43:55.426477 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 25 18:43:55.428919 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 25 18:43:55.430254 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 18:43:55.431934 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jun 25 18:43:55.435044 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 18:43:55.436402 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:43:55.438413 systemd-journald[1123]: Time spent on flushing to /var/log/journal/adc17cb9c92a401fa1d4c313f24d0799 is 13.687ms for 947 entries. Jun 25 18:43:55.438413 systemd-journald[1123]: System Journal (/var/log/journal/adc17cb9c92a401fa1d4c313f24d0799) is 8.0M, max 195.6M, 187.6M free. Jun 25 18:43:55.597334 systemd-journald[1123]: Received client request to flush runtime journal. Jun 25 18:43:55.597382 kernel: loop0: detected capacity change from 0 to 80568 Jun 25 18:43:55.597405 kernel: block loop0: the capability attribute has been deprecated. 
Jun 25 18:43:55.597513 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 25 18:43:55.597533 kernel: loop1: detected capacity change from 0 to 211296 Jun 25 18:43:55.440995 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 25 18:43:55.447162 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 25 18:43:55.449829 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:43:55.451240 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 25 18:43:55.466212 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 25 18:43:55.467766 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 25 18:43:55.475102 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 25 18:43:55.490194 udevadm[1168]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jun 25 18:43:55.509229 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:43:55.555861 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 25 18:43:55.565134 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 18:43:55.587114 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 25 18:43:55.587134 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. Jun 25 18:43:55.587149 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. Jun 25 18:43:55.589304 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 25 18:43:55.599314 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jun 25 18:43:55.601860 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 25 18:43:55.604067 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:43:55.618045 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 25 18:43:55.619139 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jun 25 18:43:55.635980 kernel: loop2: detected capacity change from 0 to 139760 Jun 25 18:43:55.684980 kernel: loop3: detected capacity change from 0 to 80568 Jun 25 18:43:55.694243 kernel: loop4: detected capacity change from 0 to 211296 Jun 25 18:43:55.703013 kernel: loop5: detected capacity change from 0 to 139760 Jun 25 18:43:55.714032 (sd-merge)[1187]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jun 25 18:43:55.714609 (sd-merge)[1187]: Merged extensions into '/usr'. Jun 25 18:43:55.720191 systemd[1]: Reloading requested from client PID 1160 ('systemd-sysext') (unit systemd-sysext.service)... Jun 25 18:43:55.720208 systemd[1]: Reloading... Jun 25 18:43:55.786022 zram_generator::config[1214]: No configuration found. Jun 25 18:43:55.837256 ldconfig[1155]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 25 18:43:55.909650 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:43:55.959615 systemd[1]: Reloading finished in 238 ms. 
Jun 25 18:43:55.998185 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 25 18:43:55.999690 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 25 18:43:56.017203 systemd[1]: Starting ensure-sysext.service... Jun 25 18:43:56.019473 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 18:43:56.028495 systemd[1]: Reloading requested from client PID 1248 ('systemctl') (unit ensure-sysext.service)... Jun 25 18:43:56.028510 systemd[1]: Reloading... Jun 25 18:43:56.059687 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 25 18:43:56.060077 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 25 18:43:56.061046 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 25 18:43:56.061344 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. Jun 25 18:43:56.061416 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. Jun 25 18:43:56.065142 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot. Jun 25 18:43:56.065216 systemd-tmpfiles[1249]: Skipping /boot Jun 25 18:43:56.075799 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot. Jun 25 18:43:56.077149 systemd-tmpfiles[1249]: Skipping /boot Jun 25 18:43:56.085162 zram_generator::config[1277]: No configuration found. Jun 25 18:43:56.192710 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:43:56.242231 systemd[1]: Reloading finished in 213 ms. Jun 25 18:43:56.261771 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 25 18:43:56.274445 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:43:56.284034 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 18:43:56.286886 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 25 18:43:56.289183 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 25 18:43:56.294673 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 18:43:56.299085 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:43:56.306190 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 25 18:43:56.309822 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:43:56.310018 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:43:56.311592 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 18:43:56.317897 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 18:43:56.320897 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 18:43:56.322061 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jun 25 18:43:56.323826 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 25 18:43:56.325050 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:43:56.325938 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:43:56.326141 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:43:56.330087 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 18:43:56.330361 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:43:56.335056 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 18:43:56.335828 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 18:43:56.340099 systemd-udevd[1320]: Using default interface naming scheme 'v255'. Jun 25 18:43:56.344301 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 25 18:43:56.347230 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:43:56.347546 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:43:56.352236 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 18:43:56.358510 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 18:43:56.361392 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 18:43:56.362521 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:43:56.366037 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 25 18:43:56.367305 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:43:56.368154 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:43:56.370561 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 25 18:43:56.372564 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:43:56.372732 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:43:56.374603 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 18:43:56.374773 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 18:43:56.376900 augenrules[1345]: No rules Jun 25 18:43:56.377548 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 18:43:56.377757 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:43:56.387444 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 18:43:56.390446 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 25 18:43:56.396372 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 25 18:43:56.411054 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 25 18:43:56.418275 systemd[1]: Finished ensure-sysext.service. Jun 25 18:43:56.424970 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jun 25 18:43:56.425114 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:43:56.433145 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 18:43:56.435644 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1374) Jun 25 18:43:56.446981 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1349) Jun 25 18:43:56.440278 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 18:43:56.444693 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 18:43:56.450378 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 18:43:56.453115 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:43:56.455475 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 18:43:56.458733 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 25 18:43:56.460590 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 25 18:43:56.460616 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:43:56.461194 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:43:56.461368 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:43:56.463095 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 18:43:56.463316 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 18:43:56.468631 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 18:43:56.469104 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 18:43:56.475260 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 25 18:43:56.477227 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 18:43:56.480821 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 18:43:56.482016 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:43:56.485650 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 18:43:56.513502 systemd-resolved[1317]: Positive Trust Anchors: Jun 25 18:43:56.513917 systemd-resolved[1317]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 18:43:56.514039 systemd-resolved[1317]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jun 25 18:43:56.519742 systemd-resolved[1317]: Defaulting to hostname 'linux'. Jun 25 18:43:56.523631 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 18:43:56.525046 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:43:56.532975 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jun 25 18:43:56.540040 kernel: ACPI: button: Power Button [PWRF] Jun 25 18:43:56.537380 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 25 18:43:56.546138 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 25 18:43:56.546635 systemd-networkd[1389]: lo: Link UP Jun 25 18:43:56.546640 systemd-networkd[1389]: lo: Gained carrier Jun 25 18:43:56.548229 systemd-networkd[1389]: Enumeration completed Jun 25 18:43:56.548311 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 18:43:56.548632 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:43:56.548636 systemd-networkd[1389]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 18:43:56.549744 systemd-networkd[1389]: eth0: Link UP Jun 25 18:43:56.549748 systemd-networkd[1389]: eth0: Gained carrier Jun 25 18:43:56.549760 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:43:56.550161 systemd[1]: Reached target network.target - Network. Jun 25 18:43:56.552745 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 25 18:43:56.559917 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 25 18:43:56.561453 systemd[1]: Reached target time-set.target - System Time Set. Jun 25 18:43:56.563730 systemd-networkd[1389]: eth0: DHCPv4 address 10.0.0.136/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jun 25 18:43:56.565098 systemd-timesyncd[1390]: Network configuration changed, trying to establish connection. Jun 25 18:43:57.022900 systemd-resolved[1317]: Clock change detected. Flushing caches. Jun 25 18:43:57.023000 systemd-timesyncd[1390]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jun 25 18:43:57.023069 systemd-timesyncd[1390]: Initial clock synchronization to Tue 2024-06-25 18:43:57.022867 UTC. Jun 25 18:43:57.027106 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Jun 25 18:43:57.031053 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jun 25 18:43:57.069049 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jun 25 18:43:57.132051 kernel: mousedev: PS/2 mouse device common for all mice Jun 25 18:43:57.132355 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:43:57.149064 kernel: kvm_amd: TSC scaling supported Jun 25 18:43:57.149232 kernel: kvm_amd: Nested Virtualization enabled Jun 25 18:43:57.149264 kernel: kvm_amd: Nested Paging enabled Jun 25 18:43:57.149296 kernel: kvm_amd: LBR virtualization supported Jun 25 18:43:57.149321 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jun 25 18:43:57.149359 kernel: kvm_amd: Virtual GIF supported Jun 25 18:43:57.172041 kernel: EDAC MC: Ver: 3.0.0 Jun 25 18:43:57.214809 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 25 18:43:57.232306 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 25 18:43:57.233883 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:43:57.243158 lvm[1417]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 18:43:57.274673 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 25 18:43:57.276278 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:43:57.277425 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 18:43:57.278635 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 25 18:43:57.279912 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 25 18:43:57.281377 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 25 18:43:57.282614 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 25 18:43:57.283907 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 25 18:43:57.285161 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 25 18:43:57.285194 systemd[1]: Reached target paths.target - Path Units. Jun 25 18:43:57.286237 systemd[1]: Reached target timers.target - Timer Units. Jun 25 18:43:57.288322 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 25 18:43:57.291469 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 25 18:43:57.300882 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 25 18:43:57.303495 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 25 18:43:57.305165 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 25 18:43:57.306324 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 18:43:57.307312 systemd[1]: Reached target basic.target - Basic System. Jun 25 18:43:57.308292 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 25 18:43:57.308320 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 25 18:43:57.309460 systemd[1]: Starting containerd.service - containerd container runtime... 
Jun 25 18:43:57.311506 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 25 18:43:57.316038 lvm[1422]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 18:43:57.316517 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 25 18:43:57.318263 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 25 18:43:57.319422 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 25 18:43:57.322434 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 25 18:43:57.329064 jq[1425]: false Jun 25 18:43:57.329291 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 25 18:43:57.332266 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 25 18:43:57.339154 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 25 18:43:57.343239 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 25 18:43:57.345534 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 25 18:43:57.345935 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 25 18:43:57.349190 systemd[1]: Starting update-engine.service - Update Engine... Jun 25 18:43:57.352047 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 25 18:43:57.355106 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 25 18:43:57.355971 dbus-daemon[1424]: [system] SELinux support is enabled Jun 25 18:43:57.356543 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 25 18:43:57.359988 extend-filesystems[1426]: Found loop3 Jun 25 18:43:57.359988 extend-filesystems[1426]: Found loop4 Jun 25 18:43:57.361929 extend-filesystems[1426]: Found loop5 Jun 25 18:43:57.361929 extend-filesystems[1426]: Found sr0 Jun 25 18:43:57.361929 extend-filesystems[1426]: Found vda Jun 25 18:43:57.361929 extend-filesystems[1426]: Found vda1 Jun 25 18:43:57.361929 extend-filesystems[1426]: Found vda2 Jun 25 18:43:57.361929 extend-filesystems[1426]: Found vda3 Jun 25 18:43:57.361929 extend-filesystems[1426]: Found usr Jun 25 18:43:57.361929 extend-filesystems[1426]: Found vda4 Jun 25 18:43:57.361929 extend-filesystems[1426]: Found vda6 Jun 25 18:43:57.361929 extend-filesystems[1426]: Found vda7 Jun 25 18:43:57.361929 extend-filesystems[1426]: Found vda9 Jun 25 18:43:57.361929 extend-filesystems[1426]: Checking size of /dev/vda9 Jun 25 18:43:57.369136 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 25 18:43:57.369337 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 25 18:43:57.379440 jq[1439]: true Jun 25 18:43:57.369648 systemd[1]: motdgen.service: Deactivated successfully. Jun 25 18:43:57.369835 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 25 18:43:57.374407 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 25 18:43:57.374637 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jun 25 18:43:57.383725 extend-filesystems[1426]: Resized partition /dev/vda9 Jun 25 18:43:57.386851 extend-filesystems[1451]: resize2fs 1.47.0 (5-Feb-2023) Jun 25 18:43:57.393056 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jun 25 18:43:57.398154 jq[1449]: true Jun 25 18:43:57.407713 (ntainerd)[1450]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 25 18:43:57.408251 update_engine[1438]: I0625 18:43:57.407373 1438 main.cc:92] Flatcar Update Engine starting Jun 25 18:43:57.412612 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1352) Jun 25 18:43:57.412667 update_engine[1438]: I0625 18:43:57.409876 1438 update_check_scheduler.cc:74] Next update check in 2m25s Jun 25 18:43:57.427928 tar[1447]: linux-amd64/helm Jun 25 18:43:57.435507 systemd[1]: Started update-engine.service - Update Engine. Jun 25 18:43:57.437181 systemd-logind[1437]: Watching system buttons on /dev/input/event1 (Power Button) Jun 25 18:43:57.437397 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 25 18:43:57.437428 systemd-logind[1437]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 25 18:43:57.437434 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 25 18:43:57.440062 systemd-logind[1437]: New seat seat0. Jun 25 18:43:57.440139 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 25 18:43:57.440160 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 25 18:43:57.442415 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jun 25 18:43:57.451477 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 25 18:43:57.453238 systemd[1]: Started systemd-logind.service - User Login Management. Jun 25 18:43:57.467133 extend-filesystems[1451]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jun 25 18:43:57.467133 extend-filesystems[1451]: old_desc_blocks = 1, new_desc_blocks = 1 Jun 25 18:43:57.467133 extend-filesystems[1451]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jun 25 18:43:57.470995 extend-filesystems[1426]: Resized filesystem in /dev/vda9 Jun 25 18:43:57.469353 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 25 18:43:57.469639 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 25 18:43:57.496710 bash[1478]: Updated "/home/core/.ssh/authorized_keys" Jun 25 18:43:57.498286 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 25 18:43:57.501458 locksmithd[1468]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 25 18:43:57.501760 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jun 25 18:43:57.548142 sshd_keygen[1445]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 25 18:43:57.576231 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 25 18:43:57.584246 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 25 18:43:57.592487 systemd[1]: issuegen.service: Deactivated successfully. 
Jun 25 18:43:57.592699 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 25 18:43:57.599373 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 25 18:43:57.611804 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 25 18:43:57.619363 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 25 18:43:57.621863 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 25 18:43:57.623186 systemd[1]: Reached target getty.target - Login Prompts. Jun 25 18:43:57.648296 containerd[1450]: time="2024-06-25T18:43:57.648195241Z" level=info msg="starting containerd" revision=cd7148ac666309abf41fd4a49a8a5895b905e7f3 version=v1.7.18 Jun 25 18:43:57.676461 containerd[1450]: time="2024-06-25T18:43:57.676220127Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 25 18:43:57.676461 containerd[1450]: time="2024-06-25T18:43:57.676267356Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:43:57.678057 containerd[1450]: time="2024-06-25T18:43:57.677994454Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.35-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:43:57.678153 containerd[1450]: time="2024-06-25T18:43:57.678135779Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:43:57.678465 containerd[1450]: time="2024-06-25T18:43:57.678441893Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:43:57.678539 containerd[1450]: time="2024-06-25T18:43:57.678524147Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jun 25 18:43:57.678709 containerd[1450]: time="2024-06-25T18:43:57.678690840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 25 18:43:57.678834 containerd[1450]: time="2024-06-25T18:43:57.678816345Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:43:57.678895 containerd[1450]: time="2024-06-25T18:43:57.678881367Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jun 25 18:43:57.679084 containerd[1450]: time="2024-06-25T18:43:57.679063338Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:43:57.679434 containerd[1450]: time="2024-06-25T18:43:57.679414056Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jun 25 18:43:57.679506 containerd[1450]: time="2024-06-25T18:43:57.679488005Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jun 25 18:43:57.679576 containerd[1450]: time="2024-06-25T18:43:57.679559969Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jun 25 18:43:57.679782 containerd[1450]: time="2024-06-25T18:43:57.679762930Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:43:57.679842 containerd[1450]: time="2024-06-25T18:43:57.679828022Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jun 25 18:43:57.679974 containerd[1450]: time="2024-06-25T18:43:57.679955100Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jun 25 18:43:57.680131 containerd[1450]: time="2024-06-25T18:43:57.680042514Z" level=info msg="metadata content store policy set" policy=shared Jun 25 18:43:57.684865 containerd[1450]: time="2024-06-25T18:43:57.684844338Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 25 18:43:57.684939 containerd[1450]: time="2024-06-25T18:43:57.684925430Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jun 25 18:43:57.685008 containerd[1450]: time="2024-06-25T18:43:57.684983919Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 25 18:43:57.685230 containerd[1450]: time="2024-06-25T18:43:57.685101279Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jun 25 18:43:57.685230 containerd[1450]: time="2024-06-25T18:43:57.685128761Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 25 18:43:57.685230 containerd[1450]: time="2024-06-25T18:43:57.685143288Z" level=info msg="NRI interface is disabled by configuration." Jun 25 18:43:57.685230 containerd[1450]: time="2024-06-25T18:43:57.685157345Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jun 25 18:43:57.685458 containerd[1450]: time="2024-06-25T18:43:57.685438702Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 25 18:43:57.685526 containerd[1450]: time="2024-06-25T18:43:57.685510176Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jun 25 18:43:57.685584 containerd[1450]: time="2024-06-25T18:43:57.685571181Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 25 18:43:57.686235 containerd[1450]: time="2024-06-25T18:43:57.685630121Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jun 25 18:43:57.686235 containerd[1450]: time="2024-06-25T18:43:57.685652152Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jun 25 18:43:57.686235 containerd[1450]: time="2024-06-25T18:43:57.685674073Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 25 18:43:57.686235 containerd[1450]: time="2024-06-25T18:43:57.685692638Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Jun 25 18:43:57.686235 containerd[1450]: time="2024-06-25T18:43:57.685708568Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 25 18:43:57.686235 containerd[1450]: time="2024-06-25T18:43:57.685726291Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jun 25 18:43:57.686235 containerd[1450]: time="2024-06-25T18:43:57.685743804Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jun 25 18:43:57.686235 containerd[1450]: time="2024-06-25T18:43:57.685759033Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 25 18:43:57.686235 containerd[1450]: time="2024-06-25T18:43:57.685775043Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 25 18:43:57.686235 containerd[1450]: time="2024-06-25T18:43:57.685897713Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 25 18:43:57.688395 containerd[1450]: time="2024-06-25T18:43:57.688372353Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jun 25 18:43:57.688493 containerd[1450]: time="2024-06-25T18:43:57.688475937Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 25 18:43:57.688565 containerd[1450]: time="2024-06-25T18:43:57.688550938Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jun 25 18:43:57.688638 containerd[1450]: time="2024-06-25T18:43:57.688624425Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 25 18:43:57.688760 containerd[1450]: time="2024-06-25T18:43:57.688744060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 25 18:43:57.688820 containerd[1450]: time="2024-06-25T18:43:57.688807509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 25 18:43:57.688884 containerd[1450]: time="2024-06-25T18:43:57.688869184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 25 18:43:57.688954 containerd[1450]: time="2024-06-25T18:43:57.688938695Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 25 18:43:57.689058 containerd[1450]: time="2024-06-25T18:43:57.689041788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 25 18:43:57.689128 containerd[1450]: time="2024-06-25T18:43:57.689113262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jun 25 18:43:57.689195 containerd[1450]: time="2024-06-25T18:43:57.689180508Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 25 18:43:57.689253 containerd[1450]: time="2024-06-25T18:43:57.689240030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 25 18:43:57.689312 containerd[1450]: time="2024-06-25T18:43:57.689299451Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Jun 25 18:43:57.689570 containerd[1450]: time="2024-06-25T18:43:57.689546735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 25 18:43:57.689674 containerd[1450]: time="2024-06-25T18:43:57.689654948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jun 25 18:43:57.689744 containerd[1450]: time="2024-06-25T18:43:57.689730529Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 25 18:43:57.689807 containerd[1450]: time="2024-06-25T18:43:57.689793938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 25 18:43:57.689866 containerd[1450]: time="2024-06-25T18:43:57.689853159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 25 18:43:57.689925 containerd[1450]: time="2024-06-25T18:43:57.689912470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 25 18:43:57.689982 containerd[1450]: time="2024-06-25T18:43:57.689969157Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 25 18:43:57.690083 containerd[1450]: time="2024-06-25T18:43:57.690066279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jun 25 18:43:57.690508 containerd[1450]: time="2024-06-25T18:43:57.690429119Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false 
IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 25 18:43:57.690715 containerd[1450]: time="2024-06-25T18:43:57.690699516Z" level=info msg="Connect containerd service" Jun 25 18:43:57.690792 containerd[1450]: time="2024-06-25T18:43:57.690778334Z" level=info msg="using legacy CRI server" Jun 25 18:43:57.690846 containerd[1450]: time="2024-06-25T18:43:57.690833277Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 25 18:43:57.691041 containerd[1450]: time="2024-06-25T18:43:57.691006512Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 25 18:43:57.691774 containerd[1450]: time="2024-06-25T18:43:57.691748914Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 18:43:57.691888 containerd[1450]: time="2024-06-25T18:43:57.691869510Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 25 18:43:57.692027 containerd[1450]: time="2024-06-25T18:43:57.691941224Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jun 25 18:43:57.692027 containerd[1450]: time="2024-06-25T18:43:57.691959318Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 25 18:43:57.692102 containerd[1450]: time="2024-06-25T18:43:57.691964027Z" level=info msg="Start subscribing containerd event" Jun 25 18:43:57.692102 containerd[1450]: time="2024-06-25T18:43:57.692075646Z" level=info msg="Start recovering state" Jun 25 18:43:57.692181 containerd[1450]: time="2024-06-25T18:43:57.692157510Z" level=info msg="Start event monitor" Jun 25 18:43:57.692210 containerd[1450]: time="2024-06-25T18:43:57.692182797Z" level=info msg="Start snapshots syncer" Jun 25 18:43:57.692210 containerd[1450]: time="2024-06-25T18:43:57.692195471Z" level=info msg="Start cni network conf syncer for default" Jun 25 18:43:57.692210 containerd[1450]: time="2024-06-25T18:43:57.692203436Z" level=info msg="Start streaming server" Jun 25 18:43:57.692988 containerd[1450]: time="2024-06-25T18:43:57.691975058Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jun 25 18:43:57.692988 containerd[1450]: time="2024-06-25T18:43:57.692641367Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 25 18:43:57.692988 containerd[1450]: time="2024-06-25T18:43:57.692696210Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 25 18:43:57.692874 systemd[1]: Started containerd.service - containerd container runtime. 
Jun 25 18:43:57.693231 containerd[1450]: time="2024-06-25T18:43:57.693213610Z" level=info msg="containerd successfully booted in 0.046334s" Jun 25 18:43:57.862529 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 25 18:43:57.869328 systemd[1]: Started sshd@0-10.0.0.136:22-10.0.0.1:43688.service - OpenSSH per-connection server daemon (10.0.0.1:43688). Jun 25 18:43:57.874353 tar[1447]: linux-amd64/LICENSE Jun 25 18:43:57.874353 tar[1447]: linux-amd64/README.md Jun 25 18:43:57.887825 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 25 18:43:57.910895 sshd[1513]: Accepted publickey for core from 10.0.0.1 port 43688 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:43:57.912665 sshd[1513]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:43:57.921280 systemd-logind[1437]: New session 1 of user core. Jun 25 18:43:57.922567 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 25 18:43:57.938233 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 25 18:43:57.950571 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 25 18:43:57.954620 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 25 18:43:57.963292 (systemd)[1520]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:43:58.074105 systemd[1520]: Queued start job for default target default.target. Jun 25 18:43:58.092275 systemd[1520]: Created slice app.slice - User Application Slice. Jun 25 18:43:58.092300 systemd[1520]: Reached target paths.target - Paths. Jun 25 18:43:58.092313 systemd[1520]: Reached target timers.target - Timers. Jun 25 18:43:58.093837 systemd[1520]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 25 18:43:58.105178 systemd-networkd[1389]: eth0: Gained IPv6LL Jun 25 18:43:58.105767 systemd[1520]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 25 18:43:58.105902 systemd[1520]: Reached target sockets.target - Sockets. Jun 25 18:43:58.105922 systemd[1520]: Reached target basic.target - Basic System. Jun 25 18:43:58.105960 systemd[1520]: Reached target default.target - Main User Target. Jun 25 18:43:58.106028 systemd[1520]: Startup finished in 136ms. Jun 25 18:43:58.106224 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 25 18:43:58.108794 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 25 18:43:58.110315 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 25 18:43:58.113066 systemd[1]: Reached target network-online.target - Network is Online. Jun 25 18:43:58.125242 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jun 25 18:43:58.127668 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:43:58.129772 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 25 18:43:58.151120 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 25 18:43:58.152728 systemd[1]: coreos-metadata.service: Deactivated successfully. Jun 25 18:43:58.152926 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jun 25 18:43:58.155839 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 25 18:43:58.192122 systemd[1]: Started sshd@1-10.0.0.136:22-10.0.0.1:36934.service - OpenSSH per-connection server daemon (10.0.0.1:36934). 
Jun 25 18:43:58.226606 sshd[1548]: Accepted publickey for core from 10.0.0.1 port 36934 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:43:58.228166 sshd[1548]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:43:58.233390 systemd-logind[1437]: New session 2 of user core. Jun 25 18:43:58.239135 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 25 18:43:58.295535 sshd[1548]: pam_unix(sshd:session): session closed for user core Jun 25 18:43:58.309585 systemd[1]: sshd@1-10.0.0.136:22-10.0.0.1:36934.service: Deactivated successfully. Jun 25 18:43:58.311428 systemd[1]: session-2.scope: Deactivated successfully. Jun 25 18:43:58.313435 systemd-logind[1437]: Session 2 logged out. Waiting for processes to exit. Jun 25 18:43:58.321326 systemd[1]: Started sshd@2-10.0.0.136:22-10.0.0.1:36950.service - OpenSSH per-connection server daemon (10.0.0.1:36950). Jun 25 18:43:58.323708 systemd-logind[1437]: Removed session 2. Jun 25 18:43:58.351373 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 36950 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:43:58.353146 sshd[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:43:58.358879 systemd-logind[1437]: New session 3 of user core. Jun 25 18:43:58.377211 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 25 18:43:58.435597 sshd[1555]: pam_unix(sshd:session): session closed for user core Jun 25 18:43:58.440873 systemd[1]: sshd@2-10.0.0.136:22-10.0.0.1:36950.service: Deactivated successfully. Jun 25 18:43:58.443488 systemd[1]: session-3.scope: Deactivated successfully. Jun 25 18:43:58.444129 systemd-logind[1437]: Session 3 logged out. Waiting for processes to exit. Jun 25 18:43:58.445162 systemd-logind[1437]: Removed session 3. Jun 25 18:43:58.790490 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:43:58.792499 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 25 18:43:58.796155 systemd[1]: Startup finished in 867ms (kernel) + 5.858s (initrd) + 3.812s (userspace) = 10.538s. Jun 25 18:43:58.811549 (kubelet)[1567]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:43:59.299124 kubelet[1567]: E0625 18:43:59.299044 1567 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:43:59.304343 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:43:59.304603 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:43:59.305040 systemd[1]: kubelet.service: Consumed 1.029s CPU time. Jun 25 18:44:08.446700 systemd[1]: Started sshd@3-10.0.0.136:22-10.0.0.1:52450.service - OpenSSH per-connection server daemon (10.0.0.1:52450). Jun 25 18:44:08.477000 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 52450 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:44:08.478452 sshd[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:44:08.482404 systemd-logind[1437]: New session 4 of user core. Jun 25 18:44:08.493177 systemd[1]: Started session-4.scope - Session 4 of User core. 
Jun 25 18:44:08.547101 sshd[1582]: pam_unix(sshd:session): session closed for user core Jun 25 18:44:08.559651 systemd[1]: sshd@3-10.0.0.136:22-10.0.0.1:52450.service: Deactivated successfully. Jun 25 18:44:08.561503 systemd[1]: session-4.scope: Deactivated successfully. Jun 25 18:44:08.562972 systemd-logind[1437]: Session 4 logged out. Waiting for processes to exit. Jun 25 18:44:08.571335 systemd[1]: Started sshd@4-10.0.0.136:22-10.0.0.1:52466.service - OpenSSH per-connection server daemon (10.0.0.1:52466). Jun 25 18:44:08.572451 systemd-logind[1437]: Removed session 4. Jun 25 18:44:08.598714 sshd[1589]: Accepted publickey for core from 10.0.0.1 port 52466 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:44:08.600359 sshd[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:44:08.604354 systemd-logind[1437]: New session 5 of user core. Jun 25 18:44:08.620213 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 25 18:44:08.671202 sshd[1589]: pam_unix(sshd:session): session closed for user core Jun 25 18:44:08.690463 systemd[1]: sshd@4-10.0.0.136:22-10.0.0.1:52466.service: Deactivated successfully. Jun 25 18:44:08.692454 systemd[1]: session-5.scope: Deactivated successfully. Jun 25 18:44:08.693901 systemd-logind[1437]: Session 5 logged out. Waiting for processes to exit. Jun 25 18:44:08.695136 systemd[1]: Started sshd@5-10.0.0.136:22-10.0.0.1:52468.service - OpenSSH per-connection server daemon (10.0.0.1:52468). Jun 25 18:44:08.695934 systemd-logind[1437]: Removed session 5. Jun 25 18:44:08.725774 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 52468 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:44:08.727306 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:44:08.731586 systemd-logind[1437]: New session 6 of user core. Jun 25 18:44:08.740136 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 25 18:44:08.795971 sshd[1596]: pam_unix(sshd:session): session closed for user core Jun 25 18:44:08.811363 systemd[1]: sshd@5-10.0.0.136:22-10.0.0.1:52468.service: Deactivated successfully. Jun 25 18:44:08.813161 systemd[1]: session-6.scope: Deactivated successfully. Jun 25 18:44:08.814577 systemd-logind[1437]: Session 6 logged out. Waiting for processes to exit. Jun 25 18:44:08.815865 systemd[1]: Started sshd@6-10.0.0.136:22-10.0.0.1:52484.service - OpenSSH per-connection server daemon (10.0.0.1:52484). Jun 25 18:44:08.816832 systemd-logind[1437]: Removed session 6. Jun 25 18:44:08.846574 sshd[1603]: Accepted publickey for core from 10.0.0.1 port 52484 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:44:08.848082 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:44:08.853028 systemd-logind[1437]: New session 7 of user core. Jun 25 18:44:08.862163 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 25 18:44:08.923564 sudo[1606]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 25 18:44:08.923882 sudo[1606]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:44:08.943425 sudo[1606]: pam_unix(sudo:session): session closed for user root Jun 25 18:44:08.945655 sshd[1603]: pam_unix(sshd:session): session closed for user core Jun 25 18:44:08.956551 systemd[1]: sshd@6-10.0.0.136:22-10.0.0.1:52484.service: Deactivated successfully. Jun 25 18:44:08.958787 systemd[1]: session-7.scope: Deactivated successfully. 
Jun 25 18:44:08.960382 systemd-logind[1437]: Session 7 logged out. Waiting for processes to exit. Jun 25 18:44:08.972416 systemd[1]: Started sshd@7-10.0.0.136:22-10.0.0.1:52488.service - OpenSSH per-connection server daemon (10.0.0.1:52488). Jun 25 18:44:08.973412 systemd-logind[1437]: Removed session 7. Jun 25 18:44:09.001165 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 52488 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:44:09.002760 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:44:09.006915 systemd-logind[1437]: New session 8 of user core. Jun 25 18:44:09.013189 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 25 18:44:09.068915 sudo[1615]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 25 18:44:09.069307 sudo[1615]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:44:09.073775 sudo[1615]: pam_unix(sudo:session): session closed for user root Jun 25 18:44:09.080250 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jun 25 18:44:09.080551 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:44:09.100313 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jun 25 18:44:09.101943 auditctl[1618]: No rules Jun 25 18:44:09.103354 systemd[1]: audit-rules.service: Deactivated successfully. Jun 25 18:44:09.103635 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jun 25 18:44:09.105513 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 18:44:09.141179 augenrules[1636]: No rules Jun 25 18:44:09.143138 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 18:44:09.144507 sudo[1614]: pam_unix(sudo:session): session closed for user root Jun 25 18:44:09.146764 sshd[1611]: pam_unix(sshd:session): session closed for user core Jun 25 18:44:09.162501 systemd[1]: sshd@7-10.0.0.136:22-10.0.0.1:52488.service: Deactivated successfully. Jun 25 18:44:09.164334 systemd[1]: session-8.scope: Deactivated successfully. Jun 25 18:44:09.165112 systemd-logind[1437]: Session 8 logged out. Waiting for processes to exit. Jun 25 18:44:09.175308 systemd[1]: Started sshd@8-10.0.0.136:22-10.0.0.1:52504.service - OpenSSH per-connection server daemon (10.0.0.1:52504). Jun 25 18:44:09.175969 systemd-logind[1437]: Removed session 8. Jun 25 18:44:09.201683 sshd[1644]: Accepted publickey for core from 10.0.0.1 port 52504 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:44:09.203300 sshd[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:44:09.207409 systemd-logind[1437]: New session 9 of user core. Jun 25 18:44:09.217153 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 25 18:44:09.272398 sudo[1647]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 25 18:44:09.272758 sudo[1647]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:44:09.373309 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 25 18:44:09.382226 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jun 25 18:44:09.382428 (dockerd)[1657]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 25 18:44:09.383366 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:44:09.558068 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:44:09.563874 (kubelet)[1672]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:44:09.612567 kubelet[1672]: E0625 18:44:09.612502 1672 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:44:09.620475 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:44:09.620724 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:44:09.647646 dockerd[1657]: time="2024-06-25T18:44:09.647585053Z" level=info msg="Starting up" Jun 25 18:44:09.906006 dockerd[1657]: time="2024-06-25T18:44:09.905864187Z" level=info msg="Loading containers: start." Jun 25 18:44:10.019041 kernel: Initializing XFRM netlink socket Jun 25 18:44:10.101125 systemd-networkd[1389]: docker0: Link UP Jun 25 18:44:10.135383 dockerd[1657]: time="2024-06-25T18:44:10.135313875Z" level=info msg="Loading containers: done." Jun 25 18:44:10.184500 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4125709860-merged.mount: Deactivated successfully. Jun 25 18:44:10.187661 dockerd[1657]: time="2024-06-25T18:44:10.187608587Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 25 18:44:10.187876 dockerd[1657]: time="2024-06-25T18:44:10.187846544Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jun 25 18:44:10.187999 dockerd[1657]: time="2024-06-25T18:44:10.187977940Z" level=info msg="Daemon has completed initialization" Jun 25 18:44:10.218828 dockerd[1657]: time="2024-06-25T18:44:10.218758053Z" level=info msg="API listen on /run/docker.sock" Jun 25 18:44:10.218949 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 25 18:44:11.021496 containerd[1450]: time="2024-06-25T18:44:11.021461912Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\"" Jun 25 18:44:16.826387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1215260667.mount: Deactivated successfully. 
Jun 25 18:44:18.214776 containerd[1450]: time="2024-06-25T18:44:18.214706654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:18.215705 containerd[1450]: time="2024-06-25T18:44:18.215626108Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.6: active requests=0, bytes read=35235837" Jun 25 18:44:18.217122 containerd[1450]: time="2024-06-25T18:44:18.217078912Z" level=info msg="ImageCreate event name:\"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:18.221061 containerd[1450]: time="2024-06-25T18:44:18.220990917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:18.222379 containerd[1450]: time="2024-06-25T18:44:18.222330980Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.6\" with image id \"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70\", size \"35232637\" in 7.200829314s" Jun 25 18:44:18.222379 containerd[1450]: time="2024-06-25T18:44:18.222370955Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\" returns image reference \"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\"" Jun 25 18:44:18.245397 containerd[1450]: time="2024-06-25T18:44:18.245338529Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\"" Jun 25 18:44:19.674904 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 25 18:44:19.733344 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:44:19.924616 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:44:19.929986 (kubelet)[1885]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:44:20.439750 kubelet[1885]: E0625 18:44:20.439662 1885 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:44:20.443192 containerd[1450]: time="2024-06-25T18:44:20.443138042Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:20.444166 containerd[1450]: time="2024-06-25T18:44:20.443949013Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.6: active requests=0, bytes read=32069747" Jun 25 18:44:20.444820 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:44:20.445035 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jun 25 18:44:20.445130 containerd[1450]: time="2024-06-25T18:44:20.445088319Z" level=info msg="ImageCreate event name:\"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:20.448494 containerd[1450]: time="2024-06-25T18:44:20.448436828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:20.449846 containerd[1450]: time="2024-06-25T18:44:20.449806135Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.6\" with image id \"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e\", size \"33590639\" in 2.204410629s" Jun 25 18:44:20.449908 containerd[1450]: time="2024-06-25T18:44:20.449845349Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\" returns image reference \"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\"" Jun 25 18:44:20.475904 containerd[1450]: time="2024-06-25T18:44:20.475859465Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\"" Jun 25 18:44:22.482267 containerd[1450]: time="2024-06-25T18:44:22.482198510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:22.483206 containerd[1450]: time="2024-06-25T18:44:22.483164521Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.6: active requests=0, bytes read=17153803" Jun 25 18:44:22.484872 containerd[1450]: time="2024-06-25T18:44:22.484838099Z" level=info msg="ImageCreate event name:\"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:22.488151 containerd[1450]: time="2024-06-25T18:44:22.488100456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:22.489112 containerd[1450]: time="2024-06-25T18:44:22.489052031Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.6\" with image id \"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f\", size \"18674713\" in 2.013145317s" Jun 25 18:44:22.489112 containerd[1450]: time="2024-06-25T18:44:22.489087106Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\" returns image reference \"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\"" Jun 25 18:44:22.512759 containerd[1450]: time="2024-06-25T18:44:22.512715449Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\"" Jun 25 18:44:23.933547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount282387947.mount: Deactivated successfully. 
Jun 25 18:44:24.406291 containerd[1450]: time="2024-06-25T18:44:24.406175507Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:24.406956 containerd[1450]: time="2024-06-25T18:44:24.406922627Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.6: active requests=0, bytes read=28409334" Jun 25 18:44:24.408222 containerd[1450]: time="2024-06-25T18:44:24.408189062Z" level=info msg="ImageCreate event name:\"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:24.410378 containerd[1450]: time="2024-06-25T18:44:24.410342229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:24.410779 containerd[1450]: time="2024-06-25T18:44:24.410748040Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.6\" with image id \"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\", repo tag \"registry.k8s.io/kube-proxy:v1.29.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90\", size \"28408353\" in 1.897996583s" Jun 25 18:44:24.410779 containerd[1450]: time="2024-06-25T18:44:24.410776944Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\" returns image reference \"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\"" Jun 25 18:44:24.434992 containerd[1450]: time="2024-06-25T18:44:24.434938958Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jun 25 18:44:25.110809 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2912717972.mount: Deactivated successfully. 
Jun 25 18:44:26.230741 containerd[1450]: time="2024-06-25T18:44:26.230679541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:26.231463 containerd[1450]: time="2024-06-25T18:44:26.231397036Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jun 25 18:44:26.232673 containerd[1450]: time="2024-06-25T18:44:26.232632323Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:26.238868 containerd[1450]: time="2024-06-25T18:44:26.238817951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:26.239815 containerd[1450]: time="2024-06-25T18:44:26.239771949Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.804786093s" Jun 25 18:44:26.239815 containerd[1450]: time="2024-06-25T18:44:26.239809380Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jun 25 18:44:26.266536 containerd[1450]: time="2024-06-25T18:44:26.266475689Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jun 25 18:44:26.787083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount411141036.mount: Deactivated successfully. 
Jun 25 18:44:26.792947 containerd[1450]: time="2024-06-25T18:44:26.792888703Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:26.793537 containerd[1450]: time="2024-06-25T18:44:26.793480322Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jun 25 18:44:26.794555 containerd[1450]: time="2024-06-25T18:44:26.794522857Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:26.796719 containerd[1450]: time="2024-06-25T18:44:26.796676916Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:26.797717 containerd[1450]: time="2024-06-25T18:44:26.797672914Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 531.147591ms" Jun 25 18:44:26.797717 containerd[1450]: time="2024-06-25T18:44:26.797715503Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jun 25 18:44:26.824104 containerd[1450]: time="2024-06-25T18:44:26.824059498Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jun 25 18:44:27.936945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount341908059.mount: Deactivated successfully. Jun 25 18:44:30.349428 containerd[1450]: time="2024-06-25T18:44:30.349359263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:30.350108 containerd[1450]: time="2024-06-25T18:44:30.350027987Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Jun 25 18:44:30.351209 containerd[1450]: time="2024-06-25T18:44:30.351168388Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:30.354287 containerd[1450]: time="2024-06-25T18:44:30.354246681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:30.355503 containerd[1450]: time="2024-06-25T18:44:30.355464280Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.531356802s" Jun 25 18:44:30.355503 containerd[1450]: time="2024-06-25T18:44:30.355498957Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jun 25 18:44:30.675083 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Jun 25 18:44:30.693343 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:44:30.873660 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:44:30.878844 (kubelet)[2049]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:44:31.017361 kubelet[2049]: E0625 18:44:31.017170 2049 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:44:31.022097 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:44:31.022350 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:44:38.448311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:44:38.457411 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:44:38.483333 systemd[1]: Reloading requested from client PID 2127 ('systemctl') (unit session-9.scope)... Jun 25 18:44:38.483350 systemd[1]: Reloading... Jun 25 18:44:38.593065 zram_generator::config[2164]: No configuration found. Jun 25 18:44:38.962287 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:44:39.043651 systemd[1]: Reloading finished in 559 ms. Jun 25 18:44:39.093366 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 25 18:44:39.093472 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 25 18:44:39.093766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:44:39.095551 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:44:39.251449 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:44:39.265300 (kubelet)[2212]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 25 18:44:39.356589 kubelet[2212]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:44:39.356589 kubelet[2212]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 18:44:39.356589 kubelet[2212]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jun 25 18:44:39.356980 kubelet[2212]: I0625 18:44:39.356630 2212 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 18:44:39.825681 kubelet[2212]: I0625 18:44:39.825600 2212 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jun 25 18:44:39.825681 kubelet[2212]: I0625 18:44:39.825672 2212 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 18:44:39.826009 kubelet[2212]: I0625 18:44:39.825985 2212 server.go:919] "Client rotation is on, will bootstrap in background" Jun 25 18:44:39.977595 kubelet[2212]: E0625 18:44:39.977533 2212 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.136:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.136:6443: connect: connection refused Jun 25 18:44:39.979680 kubelet[2212]: I0625 18:44:39.979641 2212 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 18:44:39.995744 kubelet[2212]: I0625 18:44:39.995683 2212 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 25 18:44:39.997744 kubelet[2212]: I0625 18:44:39.997711 2212 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 18:44:39.997999 kubelet[2212]: I0625 18:44:39.997944 2212 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 18:44:39.997999 kubelet[2212]: I0625 18:44:39.997974 2212 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 18:44:39.997999 kubelet[2212]: I0625 18:44:39.997985 2212 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 18:44:39.998199 kubelet[2212]: I0625 18:44:39.998172 2212 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:44:39.998337 kubelet[2212]: I0625 18:44:39.998293 2212 kubelet.go:396] "Attempting to sync node with API server" Jun 25 18:44:39.998337 kubelet[2212]: 
I0625 18:44:39.998317 2212 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 18:44:39.998423 kubelet[2212]: I0625 18:44:39.998359 2212 kubelet.go:312] "Adding apiserver pod source" Jun 25 18:44:39.998423 kubelet[2212]: I0625 18:44:39.998374 2212 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 18:44:39.998892 kubelet[2212]: W0625 18:44:39.998826 2212 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.136:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Jun 25 18:44:39.998892 kubelet[2212]: E0625 18:44:39.998883 2212 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.136:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Jun 25 18:44:39.999090 kubelet[2212]: W0625 18:44:39.998939 2212 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.136:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Jun 25 18:44:39.999090 kubelet[2212]: E0625 18:44:39.998983 2212 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.136:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Jun 25 18:44:40.000135 kubelet[2212]: I0625 18:44:40.000112 2212 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Jun 25 18:44:40.003345 kubelet[2212]: I0625 18:44:40.003304 2212 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 25 18:44:40.003387 kubelet[2212]: W0625 18:44:40.003373 2212 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jun 25 18:44:40.004045 kubelet[2212]: I0625 18:44:40.003990 2212 server.go:1256] "Started kubelet" Jun 25 18:44:40.004721 kubelet[2212]: I0625 18:44:40.004164 2212 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 25 18:44:40.004721 kubelet[2212]: I0625 18:44:40.004489 2212 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 18:44:40.004721 kubelet[2212]: I0625 18:44:40.004550 2212 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 18:44:40.005405 kubelet[2212]: I0625 18:44:40.005377 2212 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 18:44:40.005637 kubelet[2212]: I0625 18:44:40.005608 2212 server.go:461] "Adding debug handlers to kubelet server" Jun 25 18:44:40.007788 kubelet[2212]: E0625 18:44:40.006980 2212 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:44:40.007788 kubelet[2212]: I0625 18:44:40.007022 2212 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 18:44:40.007788 kubelet[2212]: I0625 18:44:40.007072 2212 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 18:44:40.007788 kubelet[2212]: I0625 18:44:40.007111 2212 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 18:44:40.007788 kubelet[2212]: W0625 18:44:40.007341 2212 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.136:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Jun 25 18:44:40.007788 kubelet[2212]: E0625 18:44:40.007370 2212 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.136:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Jun 25 18:44:40.009401 kubelet[2212]: E0625 18:44:40.009041 2212 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.136:6443: connect: connection refused" interval="200ms" Jun 25 18:44:40.009401 kubelet[2212]: E0625 18:44:40.009288 2212 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 18:44:40.009673 kubelet[2212]: I0625 18:44:40.009502 2212 factory.go:221] Registration of the systemd container factory successfully Jun 25 18:44:40.009723 kubelet[2212]: I0625 18:44:40.009672 2212 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 25 18:44:40.010444 kubelet[2212]: I0625 18:44:40.010419 2212 factory.go:221] Registration of the containerd container factory successfully Jun 25 18:44:40.106465 kubelet[2212]: E0625 18:44:40.106294 2212 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.136:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.136:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17dc539a515f66a8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-06-25 18:44:40.003962536 +0000 UTC m=+0.694935513,LastTimestamp:2024-06-25 18:44:40.003962536 +0000 UTC m=+0.694935513,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jun 25 18:44:40.113743 kubelet[2212]: I0625 18:44:40.112813 2212 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 18:44:40.113743 kubelet[2212]: E0625 18:44:40.113266 2212 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.136:6443/api/v1/nodes\": dial tcp 10.0.0.136:6443: connect: connection refused" node="localhost" Jun 25 18:44:40.115605 kubelet[2212]: I0625 18:44:40.115573 2212 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 18:44:40.115605 kubelet[2212]: I0625 18:44:40.115598 2212 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 18:44:40.115687 kubelet[2212]: I0625 18:44:40.115619 2212 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:44:40.118487 kubelet[2212]: I0625 18:44:40.118460 2212 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 18:44:40.121556 kubelet[2212]: I0625 18:44:40.121516 2212 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 25 18:44:40.121600 kubelet[2212]: I0625 18:44:40.121576 2212 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 18:44:40.121633 kubelet[2212]: I0625 18:44:40.121601 2212 kubelet.go:2329] "Starting kubelet main sync loop" Jun 25 18:44:40.121793 kubelet[2212]: E0625 18:44:40.121671 2212 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 18:44:40.122664 kubelet[2212]: W0625 18:44:40.122627 2212 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.136:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Jun 25 18:44:40.122711 kubelet[2212]: E0625 18:44:40.122670 2212 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.136:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Jun 25 18:44:40.210597 kubelet[2212]: E0625 18:44:40.210542 2212 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.136:6443: connect: connection refused" interval="400ms" Jun 25 18:44:40.222690 kubelet[2212]: E0625 18:44:40.222620 2212 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 25 18:44:40.315721 kubelet[2212]: I0625 18:44:40.315633 2212 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 18:44:40.316093 kubelet[2212]: E0625 18:44:40.316063 2212 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.136:6443/api/v1/nodes\": dial tcp 10.0.0.136:6443: connect: connection refused" node="localhost" Jun 25 18:44:40.368756 kubelet[2212]: I0625 18:44:40.368596 2212 policy_none.go:49] "None policy: Start" Jun 25 18:44:40.369607 kubelet[2212]: I0625 18:44:40.369560 2212 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 25 18:44:40.369651 kubelet[2212]: I0625 18:44:40.369617 2212 state_mem.go:35] "Initializing new in-memory state store" Jun 25 18:44:40.378558 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 25 18:44:40.391000 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jun 25 18:44:40.407244 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jun 25 18:44:40.408855 kubelet[2212]: I0625 18:44:40.408831 2212 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 18:44:40.409317 kubelet[2212]: I0625 18:44:40.409207 2212 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 18:44:40.410444 kubelet[2212]: E0625 18:44:40.410429 2212 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jun 25 18:44:40.423805 kubelet[2212]: I0625 18:44:40.423738 2212 topology_manager.go:215] "Topology Admit Handler" podUID="287361fe2e7b285676c0e2255c51e91c" podNamespace="kube-system" podName="kube-apiserver-localhost" Jun 25 18:44:40.425289 kubelet[2212]: I0625 18:44:40.425244 2212 topology_manager.go:215] "Topology Admit Handler" podUID="42b008e702ec2a5b396aebedf13804b4" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jun 25 18:44:40.426911 kubelet[2212]: I0625 18:44:40.426604 2212 topology_manager.go:215] "Topology Admit Handler" podUID="593d08bacb1d5de22dcb8f5224a99e3c" podNamespace="kube-system" podName="kube-scheduler-localhost" Jun 25 18:44:40.433600 systemd[1]: Created slice kubepods-burstable-pod287361fe2e7b285676c0e2255c51e91c.slice - libcontainer container kubepods-burstable-pod287361fe2e7b285676c0e2255c51e91c.slice. Jun 25 18:44:40.459403 systemd[1]: Created slice kubepods-burstable-pod42b008e702ec2a5b396aebedf13804b4.slice - libcontainer container kubepods-burstable-pod42b008e702ec2a5b396aebedf13804b4.slice. Jun 25 18:44:40.470853 systemd[1]: Created slice kubepods-burstable-pod593d08bacb1d5de22dcb8f5224a99e3c.slice - libcontainer container kubepods-burstable-pod593d08bacb1d5de22dcb8f5224a99e3c.slice. Jun 25 18:44:40.508345 kubelet[2212]: I0625 18:44:40.508284 2212 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/287361fe2e7b285676c0e2255c51e91c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"287361fe2e7b285676c0e2255c51e91c\") " pod="kube-system/kube-apiserver-localhost" Jun 25 18:44:40.508345 kubelet[2212]: I0625 18:44:40.508338 2212 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/287361fe2e7b285676c0e2255c51e91c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"287361fe2e7b285676c0e2255c51e91c\") " pod="kube-system/kube-apiserver-localhost" Jun 25 18:44:40.508345 kubelet[2212]: I0625 18:44:40.508362 2212 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:44:40.508622 kubelet[2212]: I0625 18:44:40.508383 2212 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:44:40.508622 kubelet[2212]: I0625 18:44:40.508402 2212 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/287361fe2e7b285676c0e2255c51e91c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"287361fe2e7b285676c0e2255c51e91c\") " pod="kube-system/kube-apiserver-localhost" Jun 25 18:44:40.508622 kubelet[2212]: I0625 18:44:40.508467 2212 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:44:40.508622 kubelet[2212]: I0625 18:44:40.508547 2212 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:44:40.508622 kubelet[2212]: I0625 18:44:40.508619 2212 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:44:40.508763 kubelet[2212]: I0625 18:44:40.508646 2212 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/593d08bacb1d5de22dcb8f5224a99e3c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"593d08bacb1d5de22dcb8f5224a99e3c\") " pod="kube-system/kube-scheduler-localhost" Jun 25 18:44:40.612154 kubelet[2212]: E0625 18:44:40.612093 2212 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.136:6443: connect: connection refused" interval="800ms" Jun 25 18:44:40.717882 kubelet[2212]: I0625 18:44:40.717845 2212 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 18:44:40.718275 kubelet[2212]: E0625 18:44:40.718246 2212 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.136:6443/api/v1/nodes\": dial tcp 10.0.0.136:6443: connect: connection refused" node="localhost" Jun 25 18:44:40.755560 kubelet[2212]: E0625 18:44:40.755535 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:40.756161 containerd[1450]: time="2024-06-25T18:44:40.756102797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:287361fe2e7b285676c0e2255c51e91c,Namespace:kube-system,Attempt:0,}" Jun 25 18:44:40.762349 kubelet[2212]: E0625 18:44:40.762334 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:40.762792 containerd[1450]: time="2024-06-25T18:44:40.762687253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:42b008e702ec2a5b396aebedf13804b4,Namespace:kube-system,Attempt:0,}" Jun 25 18:44:40.774107 kubelet[2212]: E0625 18:44:40.774042 2212 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:40.774486 containerd[1450]: time="2024-06-25T18:44:40.774457145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:593d08bacb1d5de22dcb8f5224a99e3c,Namespace:kube-system,Attempt:0,}" Jun 25 18:44:41.043999 kubelet[2212]: W0625 18:44:41.043874 2212 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.136:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Jun 25 18:44:41.043999 kubelet[2212]: E0625 18:44:41.043922 2212 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.136:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Jun 25 18:44:41.274156 kubelet[2212]: W0625 18:44:41.274086 2212 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.136:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Jun 25 18:44:41.274286 kubelet[2212]: E0625 18:44:41.274173 2212 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.136:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Jun 25 18:44:41.361379 kubelet[2212]: W0625 18:44:41.361215 2212 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.136:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Jun 25 18:44:41.361379 kubelet[2212]: E0625 18:44:41.361283 2212 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.136:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Jun 25 18:44:41.412969 kubelet[2212]: E0625 18:44:41.412943 2212 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.136:6443: connect: connection refused" interval="1.6s" Jun 25 18:44:41.492620 kubelet[2212]: W0625 18:44:41.492544 2212 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.136:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Jun 25 18:44:41.492620 kubelet[2212]: E0625 18:44:41.492616 2212 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.136:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused Jun 25 18:44:41.519899 kubelet[2212]: I0625 18:44:41.519870 2212 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 18:44:41.520243 kubelet[2212]: E0625 18:44:41.520189 2212 kubelet_node_status.go:96] 
"Unable to register node with API server" err="Post \"https://10.0.0.136:6443/api/v1/nodes\": dial tcp 10.0.0.136:6443: connect: connection refused" node="localhost" Jun 25 18:44:42.055133 kubelet[2212]: E0625 18:44:42.055078 2212 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.136:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.136:6443: connect: connection refused Jun 25 18:44:42.101531 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount729298006.mount: Deactivated successfully. Jun 25 18:44:42.112114 containerd[1450]: time="2024-06-25T18:44:42.112032267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:44:42.115771 containerd[1450]: time="2024-06-25T18:44:42.115721123Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:44:42.116797 containerd[1450]: time="2024-06-25T18:44:42.116753390Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 18:44:42.117903 containerd[1450]: time="2024-06-25T18:44:42.117858455Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:44:42.121248 containerd[1450]: time="2024-06-25T18:44:42.121175415Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 18:44:42.123479 containerd[1450]: time="2024-06-25T18:44:42.123435190Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:44:42.124587 containerd[1450]: time="2024-06-25T18:44:42.124521420Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jun 25 18:44:42.126919 containerd[1450]: time="2024-06-25T18:44:42.126874372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:44:42.128801 containerd[1450]: time="2024-06-25T18:44:42.128756861Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.372542682s" Jun 25 18:44:42.129495 containerd[1450]: time="2024-06-25T18:44:42.129461728Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.36671891s" Jun 25 18:44:42.130065 containerd[1450]: 
time="2024-06-25T18:44:42.130035225Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.355502908s" Jun 25 18:44:42.261751 update_engine[1438]: I0625 18:44:42.260438 1438 update_attempter.cc:509] Updating boot flags... Jun 25 18:44:42.322666 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2261) Jun 25 18:44:42.404129 containerd[1450]: time="2024-06-25T18:44:42.398415889Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:44:42.404129 containerd[1450]: time="2024-06-25T18:44:42.398551596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:42.404129 containerd[1450]: time="2024-06-25T18:44:42.398573559Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:44:42.404129 containerd[1450]: time="2024-06-25T18:44:42.398617130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:42.404129 containerd[1450]: time="2024-06-25T18:44:42.399135013Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:44:42.404129 containerd[1450]: time="2024-06-25T18:44:42.399169829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:42.404129 containerd[1450]: time="2024-06-25T18:44:42.399186120Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:44:42.404129 containerd[1450]: time="2024-06-25T18:44:42.399198203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:42.404811 containerd[1450]: time="2024-06-25T18:44:42.404588474Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:44:42.404811 containerd[1450]: time="2024-06-25T18:44:42.404645212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:42.404811 containerd[1450]: time="2024-06-25T18:44:42.404673014Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:44:42.404811 containerd[1450]: time="2024-06-25T18:44:42.404687161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:42.408040 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2239) Jun 25 18:44:42.527346 systemd[1]: Started cri-containerd-f1d3b6c2694a337244d39bc06a884ca4807ce73e5df88b9c508c15d89ec6f20b.scope - libcontainer container f1d3b6c2694a337244d39bc06a884ca4807ce73e5df88b9c508c15d89ec6f20b. 
Jun 25 18:44:42.534889 systemd[1]: Started cri-containerd-994be0fc14cc0feb9fa2bad929c519e17b13ed0b71f6b3353cc512a140552d47.scope - libcontainer container 994be0fc14cc0feb9fa2bad929c519e17b13ed0b71f6b3353cc512a140552d47. Jun 25 18:44:42.536992 systemd[1]: Started cri-containerd-b9d4ee224f35a5ab2bf4db9d618373682824e92af8a70fec87c9537f2440c03e.scope - libcontainer container b9d4ee224f35a5ab2bf4db9d618373682824e92af8a70fec87c9537f2440c03e. Jun 25 18:44:42.577516 containerd[1450]: time="2024-06-25T18:44:42.577467804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:593d08bacb1d5de22dcb8f5224a99e3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"f1d3b6c2694a337244d39bc06a884ca4807ce73e5df88b9c508c15d89ec6f20b\"" Jun 25 18:44:42.580050 kubelet[2212]: E0625 18:44:42.579991 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:42.590052 containerd[1450]: time="2024-06-25T18:44:42.589961866Z" level=info msg="CreateContainer within sandbox \"f1d3b6c2694a337244d39bc06a884ca4807ce73e5df88b9c508c15d89ec6f20b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 25 18:44:42.612075 containerd[1450]: time="2024-06-25T18:44:42.612037477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:42b008e702ec2a5b396aebedf13804b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"994be0fc14cc0feb9fa2bad929c519e17b13ed0b71f6b3353cc512a140552d47\"" Jun 25 18:44:42.613174 kubelet[2212]: E0625 18:44:42.613157 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:42.615939 containerd[1450]: time="2024-06-25T18:44:42.615908357Z" level=info msg="CreateContainer within sandbox \"994be0fc14cc0feb9fa2bad929c519e17b13ed0b71f6b3353cc512a140552d47\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 25 18:44:42.616189 containerd[1450]: time="2024-06-25T18:44:42.616158881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:287361fe2e7b285676c0e2255c51e91c,Namespace:kube-system,Attempt:0,} returns sandbox id \"b9d4ee224f35a5ab2bf4db9d618373682824e92af8a70fec87c9537f2440c03e\"" Jun 25 18:44:42.617235 kubelet[2212]: E0625 18:44:42.617109 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:42.618701 containerd[1450]: time="2024-06-25T18:44:42.618644485Z" level=info msg="CreateContainer within sandbox \"b9d4ee224f35a5ab2bf4db9d618373682824e92af8a70fec87c9537f2440c03e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 25 18:44:42.629662 containerd[1450]: time="2024-06-25T18:44:42.629617413Z" level=info msg="CreateContainer within sandbox \"f1d3b6c2694a337244d39bc06a884ca4807ce73e5df88b9c508c15d89ec6f20b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fd361238da7ba3b27129b4d09e9e81838b446dc572ee5f83afddb499d4115537\"" Jun 25 18:44:42.630231 containerd[1450]: time="2024-06-25T18:44:42.630199446Z" level=info msg="StartContainer for \"fd361238da7ba3b27129b4d09e9e81838b446dc572ee5f83afddb499d4115537\"" Jun 25 18:44:42.646363 containerd[1450]: time="2024-06-25T18:44:42.646197673Z" level=info 
msg="CreateContainer within sandbox \"994be0fc14cc0feb9fa2bad929c519e17b13ed0b71f6b3353cc512a140552d47\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6cc09354505e350c8825f48689e12c686d440696308c5097a20b2b620a5b720c\"" Jun 25 18:44:42.646848 containerd[1450]: time="2024-06-25T18:44:42.646812869Z" level=info msg="StartContainer for \"6cc09354505e350c8825f48689e12c686d440696308c5097a20b2b620a5b720c\"" Jun 25 18:44:42.652739 containerd[1450]: time="2024-06-25T18:44:42.652687569Z" level=info msg="CreateContainer within sandbox \"b9d4ee224f35a5ab2bf4db9d618373682824e92af8a70fec87c9537f2440c03e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6abb32e920282d5d337f38434a91f308845e1eea49abc3805874733205e2574b\"" Jun 25 18:44:42.654034 containerd[1450]: time="2024-06-25T18:44:42.653623093Z" level=info msg="StartContainer for \"6abb32e920282d5d337f38434a91f308845e1eea49abc3805874733205e2574b\"" Jun 25 18:44:42.663377 systemd[1]: Started cri-containerd-fd361238da7ba3b27129b4d09e9e81838b446dc572ee5f83afddb499d4115537.scope - libcontainer container fd361238da7ba3b27129b4d09e9e81838b446dc572ee5f83afddb499d4115537. Jun 25 18:44:42.702291 systemd[1]: Started cri-containerd-6cc09354505e350c8825f48689e12c686d440696308c5097a20b2b620a5b720c.scope - libcontainer container 6cc09354505e350c8825f48689e12c686d440696308c5097a20b2b620a5b720c. Jun 25 18:44:42.708334 systemd[1]: Started cri-containerd-6abb32e920282d5d337f38434a91f308845e1eea49abc3805874733205e2574b.scope - libcontainer container 6abb32e920282d5d337f38434a91f308845e1eea49abc3805874733205e2574b. Jun 25 18:44:42.734355 containerd[1450]: time="2024-06-25T18:44:42.734203723Z" level=info msg="StartContainer for \"fd361238da7ba3b27129b4d09e9e81838b446dc572ee5f83afddb499d4115537\" returns successfully" Jun 25 18:44:42.759619 containerd[1450]: time="2024-06-25T18:44:42.759568020Z" level=info msg="StartContainer for \"6abb32e920282d5d337f38434a91f308845e1eea49abc3805874733205e2574b\" returns successfully" Jun 25 18:44:42.777004 containerd[1450]: time="2024-06-25T18:44:42.776958136Z" level=info msg="StartContainer for \"6cc09354505e350c8825f48689e12c686d440696308c5097a20b2b620a5b720c\" returns successfully" Jun 25 18:44:43.122213 kubelet[2212]: I0625 18:44:43.121859 2212 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 18:44:43.130929 kubelet[2212]: E0625 18:44:43.130911 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:43.135555 kubelet[2212]: E0625 18:44:43.135468 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:43.136797 kubelet[2212]: E0625 18:44:43.136781 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:44.138486 kubelet[2212]: E0625 18:44:44.138451 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:44.179467 kubelet[2212]: E0625 18:44:44.179417 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Jun 25 18:44:44.700987 kubelet[2212]: E0625 18:44:44.700131 2212 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jun 25 18:44:44.851467 kubelet[2212]: I0625 18:44:44.851400 2212 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jun 25 18:44:45.002083 kubelet[2212]: I0625 18:44:45.001873 2212 apiserver.go:52] "Watching apiserver" Jun 25 18:44:45.007405 kubelet[2212]: I0625 18:44:45.007344 2212 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 18:44:47.864784 systemd[1]: Reloading requested from client PID 2504 ('systemctl') (unit session-9.scope)... Jun 25 18:44:47.864828 systemd[1]: Reloading... Jun 25 18:44:48.021098 zram_generator::config[2541]: No configuration found. Jun 25 18:44:48.146342 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:44:48.241551 systemd[1]: Reloading finished in 376 ms. Jun 25 18:44:48.288800 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:44:48.290356 kubelet[2212]: I0625 18:44:48.289027 2212 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 18:44:48.312031 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 18:44:48.312329 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:44:48.312402 systemd[1]: kubelet.service: Consumed 1.722s CPU time, 117.4M memory peak, 0B memory swap peak. Jun 25 18:44:48.329726 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:44:48.494239 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:44:48.500313 (kubelet)[2586]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 25 18:44:48.554916 kubelet[2586]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:44:48.554916 kubelet[2586]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 18:44:48.554916 kubelet[2586]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
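[Editor's note] The kubelet restarted below (PID 2586, after the systemd reload) warns that several flags are deprecated in favour of the config file, and among its first messages it reports that client certificate rotation is on and that it loads its pair from /var/lib/kubelet/pki/kubelet-client-current.pem. A small standard-library sketch for checking when that rotated client certificate expires is given below; the PEM path is the one reported in the log, and the program only assumes read access to that file on the node.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path reported by the restarted kubelet's certificate_store.go message.
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The file holds the client certificate and key back to back; walk the
	// PEM blocks and report the expiry of each CERTIFICATE block found.
	for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
		if block.Type != "CERTIFICATE" {
			continue
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		fmt.Printf("subject=%s notAfter=%s\n", cert.Subject, cert.NotAfter)
	}
}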
Jun 25 18:44:48.555438 kubelet[2586]: I0625 18:44:48.555032 2586 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 18:44:48.561555 kubelet[2586]: I0625 18:44:48.561531 2586 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jun 25 18:44:48.561555 kubelet[2586]: I0625 18:44:48.561553 2586 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 18:44:48.561751 kubelet[2586]: I0625 18:44:48.561738 2586 server.go:919] "Client rotation is on, will bootstrap in background" Jun 25 18:44:48.563388 kubelet[2586]: I0625 18:44:48.563364 2586 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 25 18:44:48.565757 kubelet[2586]: I0625 18:44:48.565615 2586 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 18:44:48.574562 kubelet[2586]: I0625 18:44:48.574519 2586 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 25 18:44:48.574857 kubelet[2586]: I0625 18:44:48.574837 2586 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 18:44:48.575090 kubelet[2586]: I0625 18:44:48.575062 2586 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 18:44:48.575218 kubelet[2586]: I0625 18:44:48.575107 2586 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 18:44:48.575218 kubelet[2586]: I0625 18:44:48.575120 2586 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 18:44:48.575218 kubelet[2586]: I0625 18:44:48.575166 2586 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:44:48.575324 kubelet[2586]: I0625 18:44:48.575271 2586 kubelet.go:396] "Attempting to sync node with API server" Jun 25 18:44:48.575324 kubelet[2586]: I0625 18:44:48.575298 2586 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 18:44:48.575386 kubelet[2586]: I0625 18:44:48.575327 2586 kubelet.go:312] "Adding apiserver pod source" Jun 25 
18:44:48.575386 kubelet[2586]: I0625 18:44:48.575347 2586 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 18:44:48.580542 sudo[2600]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jun 25 18:44:48.580922 sudo[2600]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jun 25 18:44:48.582315 kubelet[2586]: I0625 18:44:48.581183 2586 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Jun 25 18:44:48.582315 kubelet[2586]: I0625 18:44:48.581442 2586 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 25 18:44:48.584726 kubelet[2586]: I0625 18:44:48.584697 2586 server.go:1256] "Started kubelet" Jun 25 18:44:48.586067 kubelet[2586]: I0625 18:44:48.586041 2586 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 25 18:44:48.586566 kubelet[2586]: I0625 18:44:48.586542 2586 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 18:44:48.587224 kubelet[2586]: I0625 18:44:48.587202 2586 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 18:44:48.588578 kubelet[2586]: I0625 18:44:48.587988 2586 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 18:44:48.589125 kubelet[2586]: I0625 18:44:48.589082 2586 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 18:44:48.589384 kubelet[2586]: I0625 18:44:48.589355 2586 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 18:44:48.589590 kubelet[2586]: I0625 18:44:48.589562 2586 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 18:44:48.591085 kubelet[2586]: I0625 18:44:48.591058 2586 server.go:461] "Adding debug handlers to kubelet server" Jun 25 18:44:48.592199 kubelet[2586]: I0625 18:44:48.591257 2586 factory.go:221] Registration of the systemd container factory successfully Jun 25 18:44:48.592199 kubelet[2586]: I0625 18:44:48.591740 2586 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 25 18:44:48.596042 kubelet[2586]: I0625 18:44:48.595671 2586 factory.go:221] Registration of the containerd container factory successfully Jun 25 18:44:48.597979 kubelet[2586]: E0625 18:44:48.597941 2586 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 18:44:48.604645 kubelet[2586]: I0625 18:44:48.604613 2586 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 18:44:48.606644 kubelet[2586]: I0625 18:44:48.606615 2586 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 25 18:44:48.606703 kubelet[2586]: I0625 18:44:48.606652 2586 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 18:44:48.606703 kubelet[2586]: I0625 18:44:48.606674 2586 kubelet.go:2329] "Starting kubelet main sync loop" Jun 25 18:44:48.606765 kubelet[2586]: E0625 18:44:48.606732 2586 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 18:44:48.637108 kubelet[2586]: I0625 18:44:48.637061 2586 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 18:44:48.637108 kubelet[2586]: I0625 18:44:48.637094 2586 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 18:44:48.637108 kubelet[2586]: I0625 18:44:48.637113 2586 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:44:48.637336 kubelet[2586]: I0625 18:44:48.637275 2586 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 25 18:44:48.637336 kubelet[2586]: I0625 18:44:48.637306 2586 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 25 18:44:48.637336 kubelet[2586]: I0625 18:44:48.637313 2586 policy_none.go:49] "None policy: Start" Jun 25 18:44:48.638130 kubelet[2586]: I0625 18:44:48.637776 2586 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 25 18:44:48.638130 kubelet[2586]: I0625 18:44:48.637799 2586 state_mem.go:35] "Initializing new in-memory state store" Jun 25 18:44:48.638130 kubelet[2586]: I0625 18:44:48.637936 2586 state_mem.go:75] "Updated machine memory state" Jun 25 18:44:48.642395 kubelet[2586]: I0625 18:44:48.642371 2586 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 18:44:48.643156 kubelet[2586]: I0625 18:44:48.642666 2586 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 18:44:48.694110 kubelet[2586]: I0625 18:44:48.694073 2586 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 18:44:48.703804 kubelet[2586]: I0625 18:44:48.703759 2586 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jun 25 18:44:48.703925 kubelet[2586]: I0625 18:44:48.703845 2586 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jun 25 18:44:48.707093 kubelet[2586]: I0625 18:44:48.707067 2586 topology_manager.go:215] "Topology Admit Handler" podUID="287361fe2e7b285676c0e2255c51e91c" podNamespace="kube-system" podName="kube-apiserver-localhost" Jun 25 18:44:48.707153 kubelet[2586]: I0625 18:44:48.707125 2586 topology_manager.go:215] "Topology Admit Handler" podUID="42b008e702ec2a5b396aebedf13804b4" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jun 25 18:44:48.707153 kubelet[2586]: I0625 18:44:48.707151 2586 topology_manager.go:215] "Topology Admit Handler" podUID="593d08bacb1d5de22dcb8f5224a99e3c" podNamespace="kube-system" podName="kube-scheduler-localhost" Jun 25 18:44:48.790624 kubelet[2586]: I0625 18:44:48.790494 2586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/287361fe2e7b285676c0e2255c51e91c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"287361fe2e7b285676c0e2255c51e91c\") " pod="kube-system/kube-apiserver-localhost" Jun 25 18:44:48.790624 kubelet[2586]: I0625 18:44:48.790551 2586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/287361fe2e7b285676c0e2255c51e91c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"287361fe2e7b285676c0e2255c51e91c\") " pod="kube-system/kube-apiserver-localhost" Jun 25 18:44:48.906663 kubelet[2586]: I0625 18:44:48.906610 2586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/287361fe2e7b285676c0e2255c51e91c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"287361fe2e7b285676c0e2255c51e91c\") " pod="kube-system/kube-apiserver-localhost" Jun 25 18:44:48.906663 kubelet[2586]: I0625 18:44:48.906675 2586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:44:48.906864 kubelet[2586]: I0625 18:44:48.906708 2586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:44:48.906864 kubelet[2586]: I0625 18:44:48.906832 2586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:44:48.906946 kubelet[2586]: I0625 18:44:48.906915 2586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:44:48.907835 kubelet[2586]: I0625 18:44:48.906937 2586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:44:48.907835 kubelet[2586]: I0625 18:44:48.907087 2586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/593d08bacb1d5de22dcb8f5224a99e3c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"593d08bacb1d5de22dcb8f5224a99e3c\") " pod="kube-system/kube-scheduler-localhost" Jun 25 18:44:49.015912 kubelet[2586]: E0625 18:44:49.015825 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:49.016049 kubelet[2586]: E0625 18:44:49.016038 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:49.016493 kubelet[2586]: E0625 
18:44:49.016451 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:49.242162 sudo[2600]: pam_unix(sudo:session): session closed for user root Jun 25 18:44:49.577085 kubelet[2586]: I0625 18:44:49.576922 2586 apiserver.go:52] "Watching apiserver" Jun 25 18:44:49.589942 kubelet[2586]: I0625 18:44:49.589890 2586 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 18:44:49.619384 kubelet[2586]: E0625 18:44:49.619006 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:49.619384 kubelet[2586]: E0625 18:44:49.619066 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:49.712735 kubelet[2586]: E0625 18:44:49.712042 2586 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jun 25 18:44:49.712735 kubelet[2586]: E0625 18:44:49.712508 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:49.712735 kubelet[2586]: I0625 18:44:49.712712 2586 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.712679648 podStartE2EDuration="1.712679648s" podCreationTimestamp="2024-06-25 18:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:44:49.712481754 +0000 UTC m=+1.206210226" watchObservedRunningTime="2024-06-25 18:44:49.712679648 +0000 UTC m=+1.206408110" Jun 25 18:44:49.731293 kubelet[2586]: I0625 18:44:49.731194 2586 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.7311340400000002 podStartE2EDuration="1.73113404s" podCreationTimestamp="2024-06-25 18:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:44:49.722196872 +0000 UTC m=+1.215925334" watchObservedRunningTime="2024-06-25 18:44:49.73113404 +0000 UTC m=+1.224862502" Jun 25 18:44:49.731637 kubelet[2586]: I0625 18:44:49.731464 2586 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.731440939 podStartE2EDuration="1.731440939s" podCreationTimestamp="2024-06-25 18:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:44:49.730947287 +0000 UTC m=+1.224675769" watchObservedRunningTime="2024-06-25 18:44:49.731440939 +0000 UTC m=+1.225169401" Jun 25 18:44:50.622001 kubelet[2586]: E0625 18:44:50.621964 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:50.740931 sudo[1647]: pam_unix(sudo:session): session closed for user root Jun 25 18:44:50.744852 sshd[1644]: pam_unix(sshd:session): session 
closed for user core Jun 25 18:44:50.749928 systemd[1]: sshd@8-10.0.0.136:22-10.0.0.1:52504.service: Deactivated successfully. Jun 25 18:44:50.751764 systemd[1]: session-9.scope: Deactivated successfully. Jun 25 18:44:50.751957 systemd[1]: session-9.scope: Consumed 5.384s CPU time, 138.8M memory peak, 0B memory swap peak. Jun 25 18:44:50.752569 systemd-logind[1437]: Session 9 logged out. Waiting for processes to exit. Jun 25 18:44:50.753556 systemd-logind[1437]: Removed session 9. Jun 25 18:44:51.200620 kubelet[2586]: E0625 18:44:51.200586 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:51.655921 kubelet[2586]: E0625 18:44:51.655769 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:54.535746 kubelet[2586]: E0625 18:44:54.535658 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:54.628771 kubelet[2586]: E0625 18:44:54.628733 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:45:01.205195 kubelet[2586]: E0625 18:45:01.205071 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:45:01.666255 kubelet[2586]: E0625 18:45:01.666107 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:45:01.846922 kubelet[2586]: I0625 18:45:01.846873 2586 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 25 18:45:01.847314 containerd[1450]: time="2024-06-25T18:45:01.847262967Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 25 18:45:01.847688 kubelet[2586]: I0625 18:45:01.847477 2586 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 25 18:45:02.088321 kubelet[2586]: I0625 18:45:02.088139 2586 topology_manager.go:215] "Topology Admit Handler" podUID="03bde489-bb41-404f-93d5-ed3073fbbd0e" podNamespace="kube-system" podName="kube-proxy-pbpbx" Jun 25 18:45:02.088321 kubelet[2586]: I0625 18:45:02.088307 2586 topology_manager.go:215] "Topology Admit Handler" podUID="863cd744-5efa-4cd0-b61f-1b931f4a7b18" podNamespace="kube-system" podName="cilium-9tsck" Jun 25 18:45:02.097322 systemd[1]: Created slice kubepods-burstable-pod863cd744_5efa_4cd0_b61f_1b931f4a7b18.slice - libcontainer container kubepods-burstable-pod863cd744_5efa_4cd0_b61f_1b931f4a7b18.slice. Jun 25 18:45:02.102583 systemd[1]: Created slice kubepods-besteffort-pod03bde489_bb41_404f_93d5_ed3073fbbd0e.slice - libcontainer container kubepods-besteffort-pod03bde489_bb41_404f_93d5_ed3073fbbd0e.slice. 
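[Editor's note] The recurring "Nameserver limits exceeded" warnings here and earlier come from the kubelet's DNS handling: the node's resolv.conf lists more nameservers than the classic resolver limit of three, so the kubelet drops the extras and reports the line it actually applies (1.1.1.1 1.0.0.1 8.8.8.8). Below is a small sketch that performs the same check; the limit of three matches the resolver limit the kubelet enforces, and the /etc/resolv.conf path is the conventional one, assumed here since the log does not print it.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	const maxNameservers = 3 // classic resolver limit the kubelet warns about

	f, err := os.Open("/etc/resolv.conf") // conventional path; adjust if the node uses another
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var nameservers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			nameservers = append(nameservers, fields[1])
		}
	}
	if len(nameservers) > maxNameservers {
		fmt.Printf("nameserver limit exceeded: %d listed, only the first %d are applied: %v\n",
			len(nameservers), maxNameservers, nameservers[:maxNameservers])
	} else {
		fmt.Printf("nameservers within limit: %v\n", nameservers)
	}
}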
Jun 25 18:45:02.174038 kubelet[2586]: I0625 18:45:02.173962 2586 topology_manager.go:215] "Topology Admit Handler" podUID="626e6a75-7799-48fc-9926-fcfa1f22c4de" podNamespace="kube-system" podName="cilium-operator-5cc964979-9kcrs" Jun 25 18:45:02.183319 kubelet[2586]: I0625 18:45:02.183274 2586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/03bde489-bb41-404f-93d5-ed3073fbbd0e-kube-proxy\") pod \"kube-proxy-pbpbx\" (UID: \"03bde489-bb41-404f-93d5-ed3073fbbd0e\") " pod="kube-system/kube-proxy-pbpbx" Jun 25 18:45:02.183319 kubelet[2586]: I0625 18:45:02.183322 2586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rph2\" (UniqueName: \"kubernetes.io/projected/03bde489-bb41-404f-93d5-ed3073fbbd0e-kube-api-access-7rph2\") pod \"kube-proxy-pbpbx\" (UID: \"03bde489-bb41-404f-93d5-ed3073fbbd0e\") " pod="kube-system/kube-proxy-pbpbx" Jun 25 18:45:02.183465 kubelet[2586]: I0625 18:45:02.183343 2586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/863cd744-5efa-4cd0-b61f-1b931f4a7b18-bpf-maps\") pod \"cilium-9tsck\" (UID: \"863cd744-5efa-4cd0-b61f-1b931f4a7b18\") " pod="kube-system/cilium-9tsck" Jun 25 18:45:02.183465 kubelet[2586]: I0625 18:45:02.183361 2586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/863cd744-5efa-4cd0-b61f-1b931f4a7b18-cilium-cgroup\") pod \"cilium-9tsck\" (UID: \"863cd744-5efa-4cd0-b61f-1b931f4a7b18\") " pod="kube-system/cilium-9tsck" Jun 25 18:45:02.183465 kubelet[2586]: I0625 18:45:02.183379 2586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/03bde489-bb41-404f-93d5-ed3073fbbd0e-xtables-lock\") pod \"kube-proxy-pbpbx\" (UID: \"03bde489-bb41-404f-93d5-ed3073fbbd0e\") " pod="kube-system/kube-proxy-pbpbx" Jun 25 18:45:02.183465 kubelet[2586]: I0625 18:45:02.183395 2586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/863cd744-5efa-4cd0-b61f-1b931f4a7b18-cni-path\") pod \"cilium-9tsck\" (UID: \"863cd744-5efa-4cd0-b61f-1b931f4a7b18\") " pod="kube-system/cilium-9tsck" Jun 25 18:45:02.183465 kubelet[2586]: I0625 18:45:02.183412 2586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/863cd744-5efa-4cd0-b61f-1b931f4a7b18-lib-modules\") pod \"cilium-9tsck\" (UID: \"863cd744-5efa-4cd0-b61f-1b931f4a7b18\") " pod="kube-system/cilium-9tsck" Jun 25 18:45:02.183465 kubelet[2586]: I0625 18:45:02.183429 2586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/863cd744-5efa-4cd0-b61f-1b931f4a7b18-xtables-lock\") pod \"cilium-9tsck\" (UID: \"863cd744-5efa-4cd0-b61f-1b931f4a7b18\") " pod="kube-system/cilium-9tsck" Jun 25 18:45:02.183620 kubelet[2586]: I0625 18:45:02.183455 2586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/863cd744-5efa-4cd0-b61f-1b931f4a7b18-cilium-config-path\") pod \"cilium-9tsck\" (UID: 
\"863cd744-5efa-4cd0-b61f-1b931f4a7b18\") " pod="kube-system/cilium-9tsck" Jun 25 18:45:02.183620 kubelet[2586]: I0625 18:45:02.183473 2586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/863cd744-5efa-4cd0-b61f-1b931f4a7b18-cilium-run\") pod \"cilium-9tsck\" (UID: \"863cd744-5efa-4cd0-b61f-1b931f4a7b18\") " pod="kube-system/cilium-9tsck" Jun 25 18:45:02.183620 kubelet[2586]: I0625 18:45:02.183491 2586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/863cd744-5efa-4cd0-b61f-1b931f4a7b18-etc-cni-netd\") pod \"cilium-9tsck\" (UID: \"863cd744-5efa-4cd0-b61f-1b931f4a7b18\") " pod="kube-system/cilium-9tsck" Jun 25 18:45:02.183620 kubelet[2586]: I0625 18:45:02.183512 2586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/863cd744-5efa-4cd0-b61f-1b931f4a7b18-hubble-tls\") pod \"cilium-9tsck\" (UID: \"863cd744-5efa-4cd0-b61f-1b931f4a7b18\") " pod="kube-system/cilium-9tsck" Jun 25 18:45:02.183620 kubelet[2586]: I0625 18:45:02.183533 2586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/863cd744-5efa-4cd0-b61f-1b931f4a7b18-host-proc-sys-net\") pod \"cilium-9tsck\" (UID: \"863cd744-5efa-4cd0-b61f-1b931f4a7b18\") " pod="kube-system/cilium-9tsck" Jun 25 18:45:02.183620 kubelet[2586]: I0625 18:45:02.183551 2586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/03bde489-bb41-404f-93d5-ed3073fbbd0e-lib-modules\") pod \"kube-proxy-pbpbx\" (UID: \"03bde489-bb41-404f-93d5-ed3073fbbd0e\") " pod="kube-system/kube-proxy-pbpbx" Jun 25 18:45:02.183533 systemd[1]: Created slice kubepods-besteffort-pod626e6a75_7799_48fc_9926_fcfa1f22c4de.slice - libcontainer container kubepods-besteffort-pod626e6a75_7799_48fc_9926_fcfa1f22c4de.slice. 
Jun 25 18:45:02.183835 kubelet[2586]: I0625 18:45:02.183568 2586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/863cd744-5efa-4cd0-b61f-1b931f4a7b18-hostproc\") pod \"cilium-9tsck\" (UID: \"863cd744-5efa-4cd0-b61f-1b931f4a7b18\") " pod="kube-system/cilium-9tsck" Jun 25 18:45:02.183835 kubelet[2586]: I0625 18:45:02.183585 2586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/863cd744-5efa-4cd0-b61f-1b931f4a7b18-clustermesh-secrets\") pod \"cilium-9tsck\" (UID: \"863cd744-5efa-4cd0-b61f-1b931f4a7b18\") " pod="kube-system/cilium-9tsck" Jun 25 18:45:02.183835 kubelet[2586]: I0625 18:45:02.183603 2586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/863cd744-5efa-4cd0-b61f-1b931f4a7b18-host-proc-sys-kernel\") pod \"cilium-9tsck\" (UID: \"863cd744-5efa-4cd0-b61f-1b931f4a7b18\") " pod="kube-system/cilium-9tsck" Jun 25 18:45:02.183835 kubelet[2586]: I0625 18:45:02.183624 2586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl958\" (UniqueName: \"kubernetes.io/projected/863cd744-5efa-4cd0-b61f-1b931f4a7b18-kube-api-access-sl958\") pod \"cilium-9tsck\" (UID: \"863cd744-5efa-4cd0-b61f-1b931f4a7b18\") " pod="kube-system/cilium-9tsck" Jun 25 18:45:02.284927 kubelet[2586]: I0625 18:45:02.284295 2586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5r5b\" (UniqueName: \"kubernetes.io/projected/626e6a75-7799-48fc-9926-fcfa1f22c4de-kube-api-access-z5r5b\") pod \"cilium-operator-5cc964979-9kcrs\" (UID: \"626e6a75-7799-48fc-9926-fcfa1f22c4de\") " pod="kube-system/cilium-operator-5cc964979-9kcrs" Jun 25 18:45:02.284927 kubelet[2586]: I0625 18:45:02.284459 2586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/626e6a75-7799-48fc-9926-fcfa1f22c4de-cilium-config-path\") pod \"cilium-operator-5cc964979-9kcrs\" (UID: \"626e6a75-7799-48fc-9926-fcfa1f22c4de\") " pod="kube-system/cilium-operator-5cc964979-9kcrs" Jun 25 18:45:02.400860 kubelet[2586]: E0625 18:45:02.400726 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:45:02.401484 containerd[1450]: time="2024-06-25T18:45:02.401278733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9tsck,Uid:863cd744-5efa-4cd0-b61f-1b931f4a7b18,Namespace:kube-system,Attempt:0,}" Jun 25 18:45:02.415247 kubelet[2586]: E0625 18:45:02.415164 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:45:02.415567 containerd[1450]: time="2024-06-25T18:45:02.415539004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pbpbx,Uid:03bde489-bb41-404f-93d5-ed3073fbbd0e,Namespace:kube-system,Attempt:0,}" Jun 25 18:45:02.674969 containerd[1450]: time="2024-06-25T18:45:02.674467468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:45:02.674969 containerd[1450]: time="2024-06-25T18:45:02.674549373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:02.674969 containerd[1450]: time="2024-06-25T18:45:02.674571123Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:45:02.674969 containerd[1450]: time="2024-06-25T18:45:02.674584078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:02.677126 containerd[1450]: time="2024-06-25T18:45:02.676100250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:45:02.677126 containerd[1450]: time="2024-06-25T18:45:02.676850320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:02.677126 containerd[1450]: time="2024-06-25T18:45:02.676868704Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:45:02.677126 containerd[1450]: time="2024-06-25T18:45:02.676881228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:02.703426 systemd[1]: Started cri-containerd-a02efb8b07b35ef05792202a958102d4e030dff0234366fc91ef5cb7d842f419.scope - libcontainer container a02efb8b07b35ef05792202a958102d4e030dff0234366fc91ef5cb7d842f419. Jun 25 18:45:02.705745 systemd[1]: Started cri-containerd-b072a4a3d734e637683c083c26c7da25eda1a2913f7b7bd50faae50508375b56.scope - libcontainer container b072a4a3d734e637683c083c26c7da25eda1a2913f7b7bd50faae50508375b56. 
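Editor's note: the kubelet reconciler entries above enumerate the configmap, projected, and host-path volumes (kube-proxy, bpf-maps, cilium-cgroup, cni-path, hubble-tls, clustermesh-secrets, and so on) that must be verified as attached before kube-proxy-pbpbx and cilium-9tsck can start, and the containerd entries that follow show their sandboxes being created. A minimal client-go sketch that lists the same volume names from the API server; this is illustrative only, and the kubeconfig path is a common default rather than anything taken from this host.

// volumes.go - print the volumes and phase of kube-system/cilium-9tsck.
// Illustrative sketch only; assumes a reachable kubeconfig at ~/.kube/config.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "cilium-9tsck", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("phase:", pod.Status.Phase)
	for _, v := range pod.Spec.Volumes {
		// These names correspond to the UniqueName entries logged by
		// reconciler_common.go above (bpf-maps, cilium-cgroup, cni-path, ...).
		fmt.Println("volume:", v.Name)
	}
}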
Jun 25 18:45:02.735418 containerd[1450]: time="2024-06-25T18:45:02.735363044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pbpbx,Uid:03bde489-bb41-404f-93d5-ed3073fbbd0e,Namespace:kube-system,Attempt:0,} returns sandbox id \"a02efb8b07b35ef05792202a958102d4e030dff0234366fc91ef5cb7d842f419\"" Jun 25 18:45:02.737399 kubelet[2586]: E0625 18:45:02.736654 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:45:02.739471 containerd[1450]: time="2024-06-25T18:45:02.739404975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9tsck,Uid:863cd744-5efa-4cd0-b61f-1b931f4a7b18,Namespace:kube-system,Attempt:0,} returns sandbox id \"b072a4a3d734e637683c083c26c7da25eda1a2913f7b7bd50faae50508375b56\"" Jun 25 18:45:02.740233 kubelet[2586]: E0625 18:45:02.740212 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:45:02.741854 containerd[1450]: time="2024-06-25T18:45:02.741816591Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jun 25 18:45:02.743084 containerd[1450]: time="2024-06-25T18:45:02.743048558Z" level=info msg="CreateContainer within sandbox \"a02efb8b07b35ef05792202a958102d4e030dff0234366fc91ef5cb7d842f419\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 25 18:45:02.775084 containerd[1450]: time="2024-06-25T18:45:02.774975287Z" level=info msg="CreateContainer within sandbox \"a02efb8b07b35ef05792202a958102d4e030dff0234366fc91ef5cb7d842f419\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"62e3c0cff18fafd51d73dca57b144406e47e444328046fc1927b45d9d09fbb4f\"" Jun 25 18:45:02.775756 containerd[1450]: time="2024-06-25T18:45:02.775684051Z" level=info msg="StartContainer for \"62e3c0cff18fafd51d73dca57b144406e47e444328046fc1927b45d9d09fbb4f\"" Jun 25 18:45:02.786857 kubelet[2586]: E0625 18:45:02.786807 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:45:02.787501 containerd[1450]: time="2024-06-25T18:45:02.787462423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-9kcrs,Uid:626e6a75-7799-48fc-9926-fcfa1f22c4de,Namespace:kube-system,Attempt:0,}" Jun 25 18:45:02.806489 systemd[1]: Started cri-containerd-62e3c0cff18fafd51d73dca57b144406e47e444328046fc1927b45d9d09fbb4f.scope - libcontainer container 62e3c0cff18fafd51d73dca57b144406e47e444328046fc1927b45d9d09fbb4f. Jun 25 18:45:02.828909 containerd[1450]: time="2024-06-25T18:45:02.828730613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:45:02.828909 containerd[1450]: time="2024-06-25T18:45:02.828782330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:02.828909 containerd[1450]: time="2024-06-25T18:45:02.828796637Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:45:02.828909 containerd[1450]: time="2024-06-25T18:45:02.828806084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:02.852199 systemd[1]: Started cri-containerd-397a28be68c1dd8085e46c2fd247fba7a1fe3a0241be75a5744a35a48e38d94a.scope - libcontainer container 397a28be68c1dd8085e46c2fd247fba7a1fe3a0241be75a5744a35a48e38d94a. Jun 25 18:45:02.877498 containerd[1450]: time="2024-06-25T18:45:02.876084648Z" level=info msg="StartContainer for \"62e3c0cff18fafd51d73dca57b144406e47e444328046fc1927b45d9d09fbb4f\" returns successfully" Jun 25 18:45:02.898672 containerd[1450]: time="2024-06-25T18:45:02.898509215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-9kcrs,Uid:626e6a75-7799-48fc-9926-fcfa1f22c4de,Namespace:kube-system,Attempt:0,} returns sandbox id \"397a28be68c1dd8085e46c2fd247fba7a1fe3a0241be75a5744a35a48e38d94a\"" Jun 25 18:45:02.899465 kubelet[2586]: E0625 18:45:02.899427 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:45:03.644330 kubelet[2586]: E0625 18:45:03.644297 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:45:08.645547 kubelet[2586]: I0625 18:45:08.645429 2586 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-pbpbx" podStartSLOduration=7.645388345 podStartE2EDuration="7.645388345s" podCreationTimestamp="2024-06-25 18:45:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:45:03.663794413 +0000 UTC m=+15.157522875" watchObservedRunningTime="2024-06-25 18:45:08.645388345 +0000 UTC m=+20.139116807" Jun 25 18:45:11.221613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2214691600.mount: Deactivated successfully. 
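Editor's note: the recurring kubelet dns.go:153 warning ("Nameserver limits were exceeded, some nameservers have been omitted") means the node's resolv.conf lists more nameservers than the kubelet will pass through, and the applied line is trimmed to 1.1.1.1 1.0.0.1 8.8.8.8. The sketch below flags the same condition; it is illustrative only, and the cap of three servers is the conventional glibc/kubelet limit, assumed here rather than read from this host's configuration.

// checkresolv.go - count nameserver entries in resolv.conf and warn when more
// than three are present, mirroring the "Nameserver limits exceeded" message.
// Illustrative sketch; the limit of 3 is an assumption based on the usual
// glibc/kubelet behaviour.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	const maxNameservers = 3 // assumed cap

	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		panic(err)
	}

	if len(servers) > maxNameservers {
		fmt.Printf("warning: %d nameservers listed, only the first %d will be applied: %v\n",
			len(servers), maxNameservers, servers[:maxNameservers])
	} else {
		fmt.Println("nameservers:", servers)
	}
}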
Jun 25 18:45:13.989447 containerd[1450]: time="2024-06-25T18:45:13.989366179Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:13.990917 containerd[1450]: time="2024-06-25T18:45:13.990614212Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735363" Jun 25 18:45:13.992713 containerd[1450]: time="2024-06-25T18:45:13.992661246Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:13.994420 containerd[1450]: time="2024-06-25T18:45:13.994349447Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.251847276s" Jun 25 18:45:13.994420 containerd[1450]: time="2024-06-25T18:45:13.994397116Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jun 25 18:45:13.995198 containerd[1450]: time="2024-06-25T18:45:13.995066353Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jun 25 18:45:13.996322 containerd[1450]: time="2024-06-25T18:45:13.996255375Z" level=info msg="CreateContainer within sandbox \"b072a4a3d734e637683c083c26c7da25eda1a2913f7b7bd50faae50508375b56\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 25 18:45:14.014234 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount870260907.mount: Deactivated successfully. Jun 25 18:45:14.017313 containerd[1450]: time="2024-06-25T18:45:14.017240751Z" level=info msg="CreateContainer within sandbox \"b072a4a3d734e637683c083c26c7da25eda1a2913f7b7bd50faae50508375b56\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f87821239e24df2fca636bb193340db999e9522705de648d56ab044d541ed586\"" Jun 25 18:45:14.018029 containerd[1450]: time="2024-06-25T18:45:14.017979217Z" level=info msg="StartContainer for \"f87821239e24df2fca636bb193340db999e9522705de648d56ab044d541ed586\"" Jun 25 18:45:14.053238 systemd[1]: Started cri-containerd-f87821239e24df2fca636bb193340db999e9522705de648d56ab044d541ed586.scope - libcontainer container f87821239e24df2fca636bb193340db999e9522705de648d56ab044d541ed586. Jun 25 18:45:14.095456 systemd[1]: cri-containerd-f87821239e24df2fca636bb193340db999e9522705de648d56ab044d541ed586.scope: Deactivated successfully. 
Jun 25 18:45:14.137279 containerd[1450]: time="2024-06-25T18:45:14.137203514Z" level=info msg="StartContainer for \"f87821239e24df2fca636bb193340db999e9522705de648d56ab044d541ed586\" returns successfully" Jun 25 18:45:14.709338 kubelet[2586]: E0625 18:45:14.709303 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:45:14.721641 containerd[1450]: time="2024-06-25T18:45:14.721336495Z" level=info msg="shim disconnected" id=f87821239e24df2fca636bb193340db999e9522705de648d56ab044d541ed586 namespace=k8s.io Jun 25 18:45:14.721641 containerd[1450]: time="2024-06-25T18:45:14.721400305Z" level=warning msg="cleaning up after shim disconnected" id=f87821239e24df2fca636bb193340db999e9522705de648d56ab044d541ed586 namespace=k8s.io Jun 25 18:45:14.721641 containerd[1450]: time="2024-06-25T18:45:14.721411747Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:45:15.011175 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f87821239e24df2fca636bb193340db999e9522705de648d56ab044d541ed586-rootfs.mount: Deactivated successfully. Jun 25 18:45:15.713304 systemd[1]: Started sshd@9-10.0.0.136:22-10.0.0.1:37290.service - OpenSSH per-connection server daemon (10.0.0.1:37290). Jun 25 18:45:15.714109 kubelet[2586]: E0625 18:45:15.713338 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:45:15.717282 containerd[1450]: time="2024-06-25T18:45:15.717225568Z" level=info msg="CreateContainer within sandbox \"b072a4a3d734e637683c083c26c7da25eda1a2913f7b7bd50faae50508375b56\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 25 18:45:15.853758 sshd[3040]: Accepted publickey for core from 10.0.0.1 port 37290 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:45:15.856035 sshd[3040]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:45:15.865085 systemd-logind[1437]: New session 10 of user core. Jun 25 18:45:15.871350 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 25 18:45:15.874477 containerd[1450]: time="2024-06-25T18:45:15.874415376Z" level=info msg="CreateContainer within sandbox \"b072a4a3d734e637683c083c26c7da25eda1a2913f7b7bd50faae50508375b56\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dd8d4f06c77c4121f752901572f2a372a1f70b1a92fccda2c7215c8310cdf04f\"" Jun 25 18:45:15.876279 containerd[1450]: time="2024-06-25T18:45:15.875108739Z" level=info msg="StartContainer for \"dd8d4f06c77c4121f752901572f2a372a1f70b1a92fccda2c7215c8310cdf04f\"" Jun 25 18:45:15.910363 systemd[1]: Started cri-containerd-dd8d4f06c77c4121f752901572f2a372a1f70b1a92fccda2c7215c8310cdf04f.scope - libcontainer container dd8d4f06c77c4121f752901572f2a372a1f70b1a92fccda2c7215c8310cdf04f. Jun 25 18:45:15.959790 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 18:45:15.960235 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:45:15.960302 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:45:15.967305 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:45:15.967536 systemd[1]: cri-containerd-dd8d4f06c77c4121f752901572f2a372a1f70b1a92fccda2c7215c8310cdf04f.scope: Deactivated successfully. 
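Editor's note: the pattern above (cri-containerd-<id>.scope deactivating, then "shim disconnected", then the rootfs mount being cleaned up) is what a short-lived Cilium init container such as mount-cgroup or apply-sysctl-overwrites looks like from the host: it runs once, exits, and its shim and mounts are torn down. A minimal sketch with the containerd Go client to check whether such a task is still present in the k8s.io namespace; illustrative only, with the container ID copied from the log and the socket path being the stock default.

// taskstatus.go - look up a CRI-managed container in containerd's k8s.io
// namespace and print its task status. The ID below is the mount-cgroup
// container from this log and will not exist on another host.
package main

import (
	"context"
	"fmt"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/errdefs"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	id := "f87821239e24df2fca636bb193340db999e9522705de648d56ab044d541ed586"

	c, err := client.LoadContainer(ctx, id)
	if err != nil {
		panic(err)
	}
	task, err := c.Task(ctx, nil)
	if errdefs.IsNotFound(err) {
		// The shim has already disconnected: the init container exited and
		// was cleaned up, as in the log entries above.
		fmt.Println("no running task for", id)
		return
	}
	if err != nil {
		panic(err)
	}
	st, err := task.Status(ctx)
	if err != nil {
		panic(err)
	}
	fmt.Println("status:", st.Status)
}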
Jun 25 18:45:15.994607 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:45:16.046244 containerd[1450]: time="2024-06-25T18:45:16.046192855Z" level=info msg="StartContainer for \"dd8d4f06c77c4121f752901572f2a372a1f70b1a92fccda2c7215c8310cdf04f\" returns successfully" Jun 25 18:45:16.081901 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd8d4f06c77c4121f752901572f2a372a1f70b1a92fccda2c7215c8310cdf04f-rootfs.mount: Deactivated successfully. Jun 25 18:45:16.117407 containerd[1450]: time="2024-06-25T18:45:16.117324736Z" level=info msg="shim disconnected" id=dd8d4f06c77c4121f752901572f2a372a1f70b1a92fccda2c7215c8310cdf04f namespace=k8s.io Jun 25 18:45:16.117407 containerd[1450]: time="2024-06-25T18:45:16.117397102Z" level=warning msg="cleaning up after shim disconnected" id=dd8d4f06c77c4121f752901572f2a372a1f70b1a92fccda2c7215c8310cdf04f namespace=k8s.io Jun 25 18:45:16.117407 containerd[1450]: time="2024-06-25T18:45:16.117409375Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:45:16.119249 sshd[3040]: pam_unix(sshd:session): session closed for user core Jun 25 18:45:16.123386 systemd[1]: sshd@9-10.0.0.136:22-10.0.0.1:37290.service: Deactivated successfully. Jun 25 18:45:16.123856 systemd-logind[1437]: Session 10 logged out. Waiting for processes to exit. Jun 25 18:45:16.126720 systemd[1]: session-10.scope: Deactivated successfully. Jun 25 18:45:16.129100 systemd-logind[1437]: Removed session 10. Jun 25 18:45:16.211196 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount656627931.mount: Deactivated successfully. Jun 25 18:45:16.716256 kubelet[2586]: E0625 18:45:16.716214 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:45:16.717928 containerd[1450]: time="2024-06-25T18:45:16.717891039Z" level=info msg="CreateContainer within sandbox \"b072a4a3d734e637683c083c26c7da25eda1a2913f7b7bd50faae50508375b56\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 25 18:45:17.034483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount321027991.mount: Deactivated successfully. 
Jun 25 18:45:17.044226 containerd[1450]: time="2024-06-25T18:45:17.044155120Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:17.047365 containerd[1450]: time="2024-06-25T18:45:17.047293692Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907213" Jun 25 18:45:17.047886 containerd[1450]: time="2024-06-25T18:45:17.047830369Z" level=info msg="CreateContainer within sandbox \"b072a4a3d734e637683c083c26c7da25eda1a2913f7b7bd50faae50508375b56\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a9de5d29218988bb881bf5d9082d82015a45dd2924464439f8b75461139c8694\"" Jun 25 18:45:17.048523 containerd[1450]: time="2024-06-25T18:45:17.048487052Z" level=info msg="StartContainer for \"a9de5d29218988bb881bf5d9082d82015a45dd2924464439f8b75461139c8694\"" Jun 25 18:45:17.048954 containerd[1450]: time="2024-06-25T18:45:17.048925335Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:17.051258 containerd[1450]: time="2024-06-25T18:45:17.050672144Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.055564875s" Jun 25 18:45:17.051258 containerd[1450]: time="2024-06-25T18:45:17.050714614Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jun 25 18:45:17.057144 containerd[1450]: time="2024-06-25T18:45:17.057082117Z" level=info msg="CreateContainer within sandbox \"397a28be68c1dd8085e46c2fd247fba7a1fe3a0241be75a5744a35a48e38d94a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jun 25 18:45:17.092245 systemd[1]: Started cri-containerd-a9de5d29218988bb881bf5d9082d82015a45dd2924464439f8b75461139c8694.scope - libcontainer container a9de5d29218988bb881bf5d9082d82015a45dd2924464439f8b75461139c8694. Jun 25 18:45:17.193108 systemd[1]: cri-containerd-a9de5d29218988bb881bf5d9082d82015a45dd2924464439f8b75461139c8694.scope: Deactivated successfully. 
Jun 25 18:45:17.233746 containerd[1450]: time="2024-06-25T18:45:17.233684758Z" level=info msg="StartContainer for \"a9de5d29218988bb881bf5d9082d82015a45dd2924464439f8b75461139c8694\" returns successfully" Jun 25 18:45:17.244835 containerd[1450]: time="2024-06-25T18:45:17.244775468Z" level=info msg="CreateContainer within sandbox \"397a28be68c1dd8085e46c2fd247fba7a1fe3a0241be75a5744a35a48e38d94a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6984cc80091dc2befd183b4043f8db6447ae25c043ea00ed4863d1d44da4ae8b\"" Jun 25 18:45:17.245532 containerd[1450]: time="2024-06-25T18:45:17.245504177Z" level=info msg="StartContainer for \"6984cc80091dc2befd183b4043f8db6447ae25c043ea00ed4863d1d44da4ae8b\"" Jun 25 18:45:17.256112 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a9de5d29218988bb881bf5d9082d82015a45dd2924464439f8b75461139c8694-rootfs.mount: Deactivated successfully. Jun 25 18:45:17.261928 containerd[1450]: time="2024-06-25T18:45:17.261836325Z" level=info msg="shim disconnected" id=a9de5d29218988bb881bf5d9082d82015a45dd2924464439f8b75461139c8694 namespace=k8s.io Jun 25 18:45:17.261928 containerd[1450]: time="2024-06-25T18:45:17.261906357Z" level=warning msg="cleaning up after shim disconnected" id=a9de5d29218988bb881bf5d9082d82015a45dd2924464439f8b75461139c8694 namespace=k8s.io Jun 25 18:45:17.261928 containerd[1450]: time="2024-06-25T18:45:17.261914682Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:45:17.274200 systemd[1]: Started cri-containerd-6984cc80091dc2befd183b4043f8db6447ae25c043ea00ed4863d1d44da4ae8b.scope - libcontainer container 6984cc80091dc2befd183b4043f8db6447ae25c043ea00ed4863d1d44da4ae8b. Jun 25 18:45:17.306141 containerd[1450]: time="2024-06-25T18:45:17.305947081Z" level=info msg="StartContainer for \"6984cc80091dc2befd183b4043f8db6447ae25c043ea00ed4863d1d44da4ae8b\" returns successfully" Jun 25 18:45:17.721894 kubelet[2586]: E0625 18:45:17.721860 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:45:17.727321 kubelet[2586]: E0625 18:45:17.727296 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:45:17.729300 containerd[1450]: time="2024-06-25T18:45:17.729264530Z" level=info msg="CreateContainer within sandbox \"b072a4a3d734e637683c083c26c7da25eda1a2913f7b7bd50faae50508375b56\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 25 18:45:17.753658 containerd[1450]: time="2024-06-25T18:45:17.753602488Z" level=info msg="CreateContainer within sandbox \"b072a4a3d734e637683c083c26c7da25eda1a2913f7b7bd50faae50508375b56\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a65f0fb350cee854194f34234d48d95d3aafc504a6f0a256e447388228d443f5\"" Jun 25 18:45:17.754206 containerd[1450]: time="2024-06-25T18:45:17.754185944Z" level=info msg="StartContainer for \"a65f0fb350cee854194f34234d48d95d3aafc504a6f0a256e447388228d443f5\"" Jun 25 18:45:17.790270 systemd[1]: Started cri-containerd-a65f0fb350cee854194f34234d48d95d3aafc504a6f0a256e447388228d443f5.scope - libcontainer container a65f0fb350cee854194f34234d48d95d3aafc504a6f0a256e447388228d443f5. 
Jun 25 18:45:17.845887 containerd[1450]: time="2024-06-25T18:45:17.845172485Z" level=info msg="StartContainer for \"a65f0fb350cee854194f34234d48d95d3aafc504a6f0a256e447388228d443f5\" returns successfully" Jun 25 18:45:17.846199 systemd[1]: cri-containerd-a65f0fb350cee854194f34234d48d95d3aafc504a6f0a256e447388228d443f5.scope: Deactivated successfully. Jun 25 18:45:17.855037 kubelet[2586]: I0625 18:45:17.854982 2586 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-9kcrs" podStartSLOduration=1.7058408859999998 podStartE2EDuration="15.854941033s" podCreationTimestamp="2024-06-25 18:45:02 +0000 UTC" firstStartedPulling="2024-06-25 18:45:02.901874835 +0000 UTC m=+14.395603297" lastFinishedPulling="2024-06-25 18:45:17.050974982 +0000 UTC m=+28.544703444" observedRunningTime="2024-06-25 18:45:17.818729886 +0000 UTC m=+29.312458348" watchObservedRunningTime="2024-06-25 18:45:17.854941033 +0000 UTC m=+29.348669495" Jun 25 18:45:17.918069 containerd[1450]: time="2024-06-25T18:45:17.917947630Z" level=info msg="shim disconnected" id=a65f0fb350cee854194f34234d48d95d3aafc504a6f0a256e447388228d443f5 namespace=k8s.io Jun 25 18:45:17.918069 containerd[1450]: time="2024-06-25T18:45:17.918039983Z" level=warning msg="cleaning up after shim disconnected" id=a65f0fb350cee854194f34234d48d95d3aafc504a6f0a256e447388228d443f5 namespace=k8s.io Jun 25 18:45:17.918069 containerd[1450]: time="2024-06-25T18:45:17.918051675Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:45:18.743057 kubelet[2586]: E0625 18:45:18.735757 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:45:18.743057 kubelet[2586]: E0625 18:45:18.735909 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:45:18.743648 containerd[1450]: time="2024-06-25T18:45:18.739390074Z" level=info msg="CreateContainer within sandbox \"b072a4a3d734e637683c083c26c7da25eda1a2913f7b7bd50faae50508375b56\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 25 18:45:18.762919 containerd[1450]: time="2024-06-25T18:45:18.762862303Z" level=info msg="CreateContainer within sandbox \"b072a4a3d734e637683c083c26c7da25eda1a2913f7b7bd50faae50508375b56\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"497ea189c5797a7bce0603d876e94c9264a88e7c94461864790401e0c8975988\"" Jun 25 18:45:18.763415 containerd[1450]: time="2024-06-25T18:45:18.763388531Z" level=info msg="StartContainer for \"497ea189c5797a7bce0603d876e94c9264a88e7c94461864790401e0c8975988\"" Jun 25 18:45:18.796195 systemd[1]: Started cri-containerd-497ea189c5797a7bce0603d876e94c9264a88e7c94461864790401e0c8975988.scope - libcontainer container 497ea189c5797a7bce0603d876e94c9264a88e7c94461864790401e0c8975988. 
Jun 25 18:45:18.827372 containerd[1450]: time="2024-06-25T18:45:18.827321389Z" level=info msg="StartContainer for \"497ea189c5797a7bce0603d876e94c9264a88e7c94461864790401e0c8975988\" returns successfully" Jun 25 18:45:18.926713 kubelet[2586]: I0625 18:45:18.926676 2586 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jun 25 18:45:18.946178 kubelet[2586]: I0625 18:45:18.946133 2586 topology_manager.go:215] "Topology Admit Handler" podUID="3f3c0290-0cc8-470d-a2dd-7bcb07aed29a" podNamespace="kube-system" podName="coredns-76f75df574-ljbgt" Jun 25 18:45:18.949964 kubelet[2586]: I0625 18:45:18.949931 2586 topology_manager.go:215] "Topology Admit Handler" podUID="2a4e9fbb-8b05-4edd-b251-686ac2c44d6c" podNamespace="kube-system" podName="coredns-76f75df574-pb2sp" Jun 25 18:45:18.959341 systemd[1]: Created slice kubepods-burstable-pod3f3c0290_0cc8_470d_a2dd_7bcb07aed29a.slice - libcontainer container kubepods-burstable-pod3f3c0290_0cc8_470d_a2dd_7bcb07aed29a.slice. Jun 25 18:45:18.967355 systemd[1]: Created slice kubepods-burstable-pod2a4e9fbb_8b05_4edd_b251_686ac2c44d6c.slice - libcontainer container kubepods-burstable-pod2a4e9fbb_8b05_4edd_b251_686ac2c44d6c.slice. Jun 25 18:45:18.994642 kubelet[2586]: I0625 18:45:18.994487 2586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a4e9fbb-8b05-4edd-b251-686ac2c44d6c-config-volume\") pod \"coredns-76f75df574-pb2sp\" (UID: \"2a4e9fbb-8b05-4edd-b251-686ac2c44d6c\") " pod="kube-system/coredns-76f75df574-pb2sp" Jun 25 18:45:18.994642 kubelet[2586]: I0625 18:45:18.994542 2586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7rcj\" (UniqueName: \"kubernetes.io/projected/3f3c0290-0cc8-470d-a2dd-7bcb07aed29a-kube-api-access-z7rcj\") pod \"coredns-76f75df574-ljbgt\" (UID: \"3f3c0290-0cc8-470d-a2dd-7bcb07aed29a\") " pod="kube-system/coredns-76f75df574-ljbgt" Jun 25 18:45:18.994642 kubelet[2586]: I0625 18:45:18.994562 2586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f3c0290-0cc8-470d-a2dd-7bcb07aed29a-config-volume\") pod \"coredns-76f75df574-ljbgt\" (UID: \"3f3c0290-0cc8-470d-a2dd-7bcb07aed29a\") " pod="kube-system/coredns-76f75df574-ljbgt" Jun 25 18:45:18.994642 kubelet[2586]: I0625 18:45:18.994580 2586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nns24\" (UniqueName: \"kubernetes.io/projected/2a4e9fbb-8b05-4edd-b251-686ac2c44d6c-kube-api-access-nns24\") pod \"coredns-76f75df574-pb2sp\" (UID: \"2a4e9fbb-8b05-4edd-b251-686ac2c44d6c\") " pod="kube-system/coredns-76f75df574-pb2sp" Jun 25 18:45:19.265661 kubelet[2586]: E0625 18:45:19.265540 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:45:19.267871 containerd[1450]: time="2024-06-25T18:45:19.266220964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ljbgt,Uid:3f3c0290-0cc8-470d-a2dd-7bcb07aed29a,Namespace:kube-system,Attempt:0,}" Jun 25 18:45:19.271588 kubelet[2586]: E0625 18:45:19.271537 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 
18:45:19.272082 containerd[1450]: time="2024-06-25T18:45:19.271982598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-pb2sp,Uid:2a4e9fbb-8b05-4edd-b251-686ac2c44d6c,Namespace:kube-system,Attempt:0,}" Jun 25 18:45:19.741166 kubelet[2586]: E0625 18:45:19.741121 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:45:19.756058 kubelet[2586]: I0625 18:45:19.755440 2586 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-9tsck" podStartSLOduration=7.501993871 podStartE2EDuration="18.75539531s" podCreationTimestamp="2024-06-25 18:45:01 +0000 UTC" firstStartedPulling="2024-06-25 18:45:02.741347478 +0000 UTC m=+14.235075940" lastFinishedPulling="2024-06-25 18:45:13.994748917 +0000 UTC m=+25.488477379" observedRunningTime="2024-06-25 18:45:19.755042487 +0000 UTC m=+31.248770949" watchObservedRunningTime="2024-06-25 18:45:19.75539531 +0000 UTC m=+31.249123772" Jun 25 18:45:20.742495 kubelet[2586]: E0625 18:45:20.742434 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:45:21.133120 systemd[1]: Started sshd@10-10.0.0.136:22-10.0.0.1:43960.service - OpenSSH per-connection server daemon (10.0.0.1:43960). Jun 25 18:45:21.175223 sshd[3427]: Accepted publickey for core from 10.0.0.1 port 43960 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:45:21.177057 sshd[3427]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:45:21.184747 systemd-logind[1437]: New session 11 of user core. Jun 25 18:45:21.194319 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 25 18:45:21.253625 systemd-networkd[1389]: cilium_host: Link UP Jun 25 18:45:21.255539 systemd-networkd[1389]: cilium_net: Link UP Jun 25 18:45:21.256373 systemd-networkd[1389]: cilium_net: Gained carrier Jun 25 18:45:21.257292 systemd-networkd[1389]: cilium_host: Gained carrier Jun 25 18:45:21.402255 systemd-networkd[1389]: cilium_vxlan: Link UP Jun 25 18:45:21.402266 systemd-networkd[1389]: cilium_vxlan: Gained carrier Jun 25 18:45:21.497224 systemd-networkd[1389]: cilium_net: Gained IPv6LL Jun 25 18:45:21.648039 sshd[3427]: pam_unix(sshd:session): session closed for user core Jun 25 18:45:21.654085 systemd[1]: sshd@10-10.0.0.136:22-10.0.0.1:43960.service: Deactivated successfully. Jun 25 18:45:21.657126 systemd[1]: session-11.scope: Deactivated successfully. Jun 25 18:45:21.658225 systemd-logind[1437]: Session 11 logged out. Waiting for processes to exit. Jun 25 18:45:21.659292 systemd-logind[1437]: Removed session 11. 
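Editor's note: the pod_startup_latency_tracker entry above for cilium-9tsck reports podStartE2EDuration=18.755s but podStartSLOduration=7.502s; the difference is exactly the image-pull window (firstStartedPulling to lastFinishedPulling), so the SLO figure evidently excludes pull time. The sketch below reproduces the logged numbers from the timestamps quoted in that entry; the arithmetic is illustrative, and the pull-time exclusion is inferred from these values rather than from kubelet source.

// startup_latency.go - reproduce the cilium-9tsck figures from the
// pod_startup_latency_tracker entry above.
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2024-06-25 18:45:01 +0000 UTC")             // podCreationTimestamp
	firstPull := mustParse("2024-06-25 18:45:02.741347478 +0000 UTC") // firstStartedPulling
	lastPull := mustParse("2024-06-25 18:45:13.994748917 +0000 UTC")  // lastFinishedPulling
	running := mustParse("2024-06-25 18:45:19.75539531 +0000 UTC")    // watchObservedRunningTime

	e2e := running.Sub(created)     // 18.75539531s  = podStartE2EDuration
	pull := lastPull.Sub(firstPull) // ~11.253s spent pulling the cilium image
	slo := e2e - pull               // 7.501993871s = podStartSLOduration
	fmt.Println("e2e:", e2e, "pull:", pull, "slo:", slo)
}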
Jun 25 18:45:21.693088 kernel: NET: Registered PF_ALG protocol family Jun 25 18:45:21.744623 kubelet[2586]: E0625 18:45:21.744587 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:45:22.201272 systemd-networkd[1389]: cilium_host: Gained IPv6LL Jun 25 18:45:22.521200 systemd-networkd[1389]: cilium_vxlan: Gained IPv6LL Jun 25 18:45:22.534569 systemd-networkd[1389]: lxc_health: Link UP Jun 25 18:45:22.543705 systemd-networkd[1389]: lxc_health: Gained carrier Jun 25 18:45:22.819682 systemd-networkd[1389]: lxc69f557526f8f: Link UP Jun 25 18:45:22.826842 systemd-networkd[1389]: lxc41508a23b29e: Link UP Jun 25 18:45:22.836235 kernel: eth0: renamed from tmp93576 Jun 25 18:45:22.843053 kernel: eth0: renamed from tmpd136e Jun 25 18:45:22.847752 systemd-networkd[1389]: lxc69f557526f8f: Gained carrier Jun 25 18:45:22.849199 systemd-networkd[1389]: lxc41508a23b29e: Gained carrier Jun 25 18:45:23.865227 systemd-networkd[1389]: lxc_health: Gained IPv6LL Jun 25 18:45:24.188110 systemd-networkd[1389]: lxc41508a23b29e: Gained IPv6LL Jun 25 18:45:24.188410 systemd-networkd[1389]: lxc69f557526f8f: Gained IPv6LL Jun 25 18:45:24.403356 kubelet[2586]: E0625 18:45:24.403321 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:45:26.655769 containerd[1450]: time="2024-06-25T18:45:26.655563126Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:45:26.655769 containerd[1450]: time="2024-06-25T18:45:26.655633748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:26.655769 containerd[1450]: time="2024-06-25T18:45:26.655653495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:45:26.655769 containerd[1450]: time="2024-06-25T18:45:26.655667010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:26.656391 containerd[1450]: time="2024-06-25T18:45:26.655942919Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:45:26.656391 containerd[1450]: time="2024-06-25T18:45:26.656038478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:26.656391 containerd[1450]: time="2024-06-25T18:45:26.656070869Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:45:26.656391 containerd[1450]: time="2024-06-25T18:45:26.656086919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:26.674373 systemd[1]: Started sshd@11-10.0.0.136:22-10.0.0.1:42142.service - OpenSSH per-connection server daemon (10.0.0.1:42142). Jun 25 18:45:26.683099 systemd[1]: Started cri-containerd-93576fb921f69cfd1cf29907be28289d5cae8f02557f2cc2a907544d13f973f0.scope - libcontainer container 93576fb921f69cfd1cf29907be28289d5cae8f02557f2cc2a907544d13f973f0. 
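Editor's note: the systemd-networkd entries above show Cilium's datapath interfaces coming up in order: cilium_host and cilium_net, the cilium_vxlan overlay device, then the per-pod lxc_health, lxc69f557526f8f and lxc41508a23b29e veths, with the kernel renaming their peer eth0 ends. A minimal sketch that lists those interfaces and their oper state from the host; illustrative only, and it relies on the third-party github.com/vishvananda/netlink package, which is an assumption, not something referenced by this log.

// ciliumlinks.go - list cilium_* and lxc* network interfaces and their state,
// matching the links systemd-networkd reports as gaining carrier above.
package main

import (
	"fmt"
	"strings"

	"github.com/vishvananda/netlink"
)

func main() {
	links, err := netlink.LinkList()
	if err != nil {
		panic(err)
	}
	for _, l := range links {
		attrs := l.Attrs()
		if strings.HasPrefix(attrs.Name, "cilium_") || strings.HasPrefix(attrs.Name, "lxc") {
			fmt.Printf("%-20s type=%-8s state=%s\n", attrs.Name, l.Type(), attrs.OperState)
		}
	}
}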
Jun 25 18:45:26.685264 systemd[1]: Started cri-containerd-d136ed11f872f2e4b203a3e3e12f159b89b8279212f2a94d28e173f65a114223.scope - libcontainer container d136ed11f872f2e4b203a3e3e12f159b89b8279212f2a94d28e173f65a114223. Jun 25 18:45:26.700823 systemd-resolved[1317]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 18:45:26.702769 systemd-resolved[1317]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 18:45:26.718962 sshd[3876]: Accepted publickey for core from 10.0.0.1 port 42142 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:45:26.719885 sshd[3876]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:45:26.726259 systemd-logind[1437]: New session 12 of user core. Jun 25 18:45:26.734276 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 25 18:45:26.735821 containerd[1450]: time="2024-06-25T18:45:26.735646472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-pb2sp,Uid:2a4e9fbb-8b05-4edd-b251-686ac2c44d6c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d136ed11f872f2e4b203a3e3e12f159b89b8279212f2a94d28e173f65a114223\"" Jun 25 18:45:26.736237 containerd[1450]: time="2024-06-25T18:45:26.736198817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ljbgt,Uid:3f3c0290-0cc8-470d-a2dd-7bcb07aed29a,Namespace:kube-system,Attempt:0,} returns sandbox id \"93576fb921f69cfd1cf29907be28289d5cae8f02557f2cc2a907544d13f973f0\"" Jun 25 18:45:26.736880 kubelet[2586]: E0625 18:45:26.736859 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:45:26.737439 kubelet[2586]: E0625 18:45:26.737259 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:45:26.740170 containerd[1450]: time="2024-06-25T18:45:26.739908057Z" level=info msg="CreateContainer within sandbox \"d136ed11f872f2e4b203a3e3e12f159b89b8279212f2a94d28e173f65a114223\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 18:45:26.740709 containerd[1450]: time="2024-06-25T18:45:26.740671670Z" level=info msg="CreateContainer within sandbox \"93576fb921f69cfd1cf29907be28289d5cae8f02557f2cc2a907544d13f973f0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 18:45:26.823406 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3910686409.mount: Deactivated successfully. 
Jun 25 18:45:26.830188 containerd[1450]: time="2024-06-25T18:45:26.830122882Z" level=info msg="CreateContainer within sandbox \"d136ed11f872f2e4b203a3e3e12f159b89b8279212f2a94d28e173f65a114223\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"66bae0226bf38f4e0f2a8705a91e44075cf83e313cd208610006a3921ed369f5\"" Jun 25 18:45:26.831163 containerd[1450]: time="2024-06-25T18:45:26.831006620Z" level=info msg="StartContainer for \"66bae0226bf38f4e0f2a8705a91e44075cf83e313cd208610006a3921ed369f5\"" Jun 25 18:45:26.843793 containerd[1450]: time="2024-06-25T18:45:26.843635465Z" level=info msg="CreateContainer within sandbox \"93576fb921f69cfd1cf29907be28289d5cae8f02557f2cc2a907544d13f973f0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"93fa6cb1731cab5eefd3a8d7dfe8bf8d11cbc04c2f3c620148972dbd6a9bd55e\"" Jun 25 18:45:26.845881 containerd[1450]: time="2024-06-25T18:45:26.844645359Z" level=info msg="StartContainer for \"93fa6cb1731cab5eefd3a8d7dfe8bf8d11cbc04c2f3c620148972dbd6a9bd55e\"" Jun 25 18:45:26.883269 systemd[1]: Started cri-containerd-93fa6cb1731cab5eefd3a8d7dfe8bf8d11cbc04c2f3c620148972dbd6a9bd55e.scope - libcontainer container 93fa6cb1731cab5eefd3a8d7dfe8bf8d11cbc04c2f3c620148972dbd6a9bd55e. Jun 25 18:45:26.887600 systemd[1]: Started cri-containerd-66bae0226bf38f4e0f2a8705a91e44075cf83e313cd208610006a3921ed369f5.scope - libcontainer container 66bae0226bf38f4e0f2a8705a91e44075cf83e313cd208610006a3921ed369f5. Jun 25 18:45:26.889790 sshd[3876]: pam_unix(sshd:session): session closed for user core Jun 25 18:45:26.895922 systemd[1]: sshd@11-10.0.0.136:22-10.0.0.1:42142.service: Deactivated successfully. Jun 25 18:45:26.900117 systemd[1]: session-12.scope: Deactivated successfully. Jun 25 18:45:26.901221 systemd-logind[1437]: Session 12 logged out. Waiting for processes to exit. Jun 25 18:45:26.902352 systemd-logind[1437]: Removed session 12. 
Jun 25 18:45:26.995159 kubelet[2586]: I0625 18:45:26.995095 2586 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 18:45:26.995975 kubelet[2586]: E0625 18:45:26.995950 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:45:27.057145 containerd[1450]: time="2024-06-25T18:45:27.057064515Z" level=info msg="StartContainer for \"93fa6cb1731cab5eefd3a8d7dfe8bf8d11cbc04c2f3c620148972dbd6a9bd55e\" returns successfully" Jun 25 18:45:27.057299 containerd[1450]: time="2024-06-25T18:45:27.057190892Z" level=info msg="StartContainer for \"66bae0226bf38f4e0f2a8705a91e44075cf83e313cd208610006a3921ed369f5\" returns successfully" Jun 25 18:45:27.762702 kubelet[2586]: E0625 18:45:27.762665 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:45:27.765910 kubelet[2586]: E0625 18:45:27.765885 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:45:27.766209 kubelet[2586]: E0625 18:45:27.766129 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:45:27.797443 kubelet[2586]: I0625 18:45:27.797299 2586 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-pb2sp" podStartSLOduration=25.797257888 podStartE2EDuration="25.797257888s" podCreationTimestamp="2024-06-25 18:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:45:27.796337811 +0000 UTC m=+39.290066273" watchObservedRunningTime="2024-06-25 18:45:27.797257888 +0000 UTC m=+39.290986350" Jun 25 18:45:28.111977 kubelet[2586]: I0625 18:45:28.111828 2586 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-ljbgt" podStartSLOduration=26.111768393 podStartE2EDuration="26.111768393s" podCreationTimestamp="2024-06-25 18:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:45:27.913266018 +0000 UTC m=+39.406994481" watchObservedRunningTime="2024-06-25 18:45:28.111768393 +0000 UTC m=+39.605496855" Jun 25 18:45:28.767712 kubelet[2586]: E0625 18:45:28.767664 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:45:28.768139 kubelet[2586]: E0625 18:45:28.767734 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:45:29.768680 kubelet[2586]: E0625 18:45:29.768609 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:45:29.768680 kubelet[2586]: E0625 18:45:29.768664 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Jun 25 18:45:31.903066 systemd[1]: Started sshd@12-10.0.0.136:22-10.0.0.1:42158.service - OpenSSH per-connection server daemon (10.0.0.1:42158). Jun 25 18:45:31.960177 sshd[4013]: Accepted publickey for core from 10.0.0.1 port 42158 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:45:31.962699 sshd[4013]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:45:31.969830 systemd-logind[1437]: New session 13 of user core. Jun 25 18:45:31.977385 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 25 18:45:32.256539 sshd[4013]: pam_unix(sshd:session): session closed for user core Jun 25 18:45:32.261706 systemd[1]: sshd@12-10.0.0.136:22-10.0.0.1:42158.service: Deactivated successfully. Jun 25 18:45:32.264699 systemd[1]: session-13.scope: Deactivated successfully. Jun 25 18:45:32.265975 systemd-logind[1437]: Session 13 logged out. Waiting for processes to exit. Jun 25 18:45:32.266984 systemd-logind[1437]: Removed session 13. Jun 25 18:45:37.271259 systemd[1]: Started sshd@13-10.0.0.136:22-10.0.0.1:45298.service - OpenSSH per-connection server daemon (10.0.0.1:45298). Jun 25 18:45:37.305200 sshd[4049]: Accepted publickey for core from 10.0.0.1 port 45298 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:45:37.306848 sshd[4049]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:45:37.310888 systemd-logind[1437]: New session 14 of user core. Jun 25 18:45:37.317143 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 25 18:45:37.458312 sshd[4049]: pam_unix(sshd:session): session closed for user core Jun 25 18:45:37.469923 systemd[1]: sshd@13-10.0.0.136:22-10.0.0.1:45298.service: Deactivated successfully. Jun 25 18:45:37.472884 systemd[1]: session-14.scope: Deactivated successfully. Jun 25 18:45:37.474764 systemd-logind[1437]: Session 14 logged out. Waiting for processes to exit. Jun 25 18:45:37.486526 systemd[1]: Started sshd@14-10.0.0.136:22-10.0.0.1:45300.service - OpenSSH per-connection server daemon (10.0.0.1:45300). Jun 25 18:45:37.487831 systemd-logind[1437]: Removed session 14. Jun 25 18:45:37.516533 sshd[4064]: Accepted publickey for core from 10.0.0.1 port 45300 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:45:37.518680 sshd[4064]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:45:37.523997 systemd-logind[1437]: New session 15 of user core. Jun 25 18:45:37.533316 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 25 18:45:37.719372 sshd[4064]: pam_unix(sshd:session): session closed for user core Jun 25 18:45:37.733286 systemd[1]: sshd@14-10.0.0.136:22-10.0.0.1:45300.service: Deactivated successfully. Jun 25 18:45:37.736593 systemd[1]: session-15.scope: Deactivated successfully. Jun 25 18:45:37.739190 systemd-logind[1437]: Session 15 logged out. Waiting for processes to exit. Jun 25 18:45:37.746491 systemd[1]: Started sshd@15-10.0.0.136:22-10.0.0.1:45306.service - OpenSSH per-connection server daemon (10.0.0.1:45306). Jun 25 18:45:37.747574 systemd-logind[1437]: Removed session 15. Jun 25 18:45:37.781171 sshd[4076]: Accepted publickey for core from 10.0.0.1 port 45306 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:45:37.783235 sshd[4076]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:45:37.788968 systemd-logind[1437]: New session 16 of user core. 
Jun 25 18:45:37.802282 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 25 18:45:37.945955 sshd[4076]: pam_unix(sshd:session): session closed for user core Jun 25 18:45:37.950577 systemd[1]: sshd@15-10.0.0.136:22-10.0.0.1:45306.service: Deactivated successfully. Jun 25 18:45:37.952923 systemd[1]: session-16.scope: Deactivated successfully. Jun 25 18:45:37.953749 systemd-logind[1437]: Session 16 logged out. Waiting for processes to exit. Jun 25 18:45:37.954995 systemd-logind[1437]: Removed session 16. Jun 25 18:45:42.961750 systemd[1]: Started sshd@16-10.0.0.136:22-10.0.0.1:45320.service - OpenSSH per-connection server daemon (10.0.0.1:45320). Jun 25 18:45:42.994837 sshd[4093]: Accepted publickey for core from 10.0.0.1 port 45320 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:45:42.996423 sshd[4093]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:45:43.000566 systemd-logind[1437]: New session 17 of user core. Jun 25 18:45:43.008136 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 25 18:45:43.128105 sshd[4093]: pam_unix(sshd:session): session closed for user core Jun 25 18:45:43.131729 systemd[1]: sshd@16-10.0.0.136:22-10.0.0.1:45320.service: Deactivated successfully. Jun 25 18:45:43.133743 systemd[1]: session-17.scope: Deactivated successfully. Jun 25 18:45:43.134381 systemd-logind[1437]: Session 17 logged out. Waiting for processes to exit. Jun 25 18:45:43.135239 systemd-logind[1437]: Removed session 17. Jun 25 18:45:48.141199 systemd[1]: Started sshd@17-10.0.0.136:22-10.0.0.1:42202.service - OpenSSH per-connection server daemon (10.0.0.1:42202). Jun 25 18:45:48.177074 sshd[4107]: Accepted publickey for core from 10.0.0.1 port 42202 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:45:48.178755 sshd[4107]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:45:48.182903 systemd-logind[1437]: New session 18 of user core. Jun 25 18:45:48.190199 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 25 18:45:48.302277 sshd[4107]: pam_unix(sshd:session): session closed for user core Jun 25 18:45:48.305997 systemd[1]: sshd@17-10.0.0.136:22-10.0.0.1:42202.service: Deactivated successfully. Jun 25 18:45:48.307982 systemd[1]: session-18.scope: Deactivated successfully. Jun 25 18:45:48.308629 systemd-logind[1437]: Session 18 logged out. Waiting for processes to exit. Jun 25 18:45:48.309660 systemd-logind[1437]: Removed session 18. Jun 25 18:45:53.319843 systemd[1]: Started sshd@18-10.0.0.136:22-10.0.0.1:42204.service - OpenSSH per-connection server daemon (10.0.0.1:42204). Jun 25 18:45:53.351740 sshd[4123]: Accepted publickey for core from 10.0.0.1 port 42204 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:45:53.353678 sshd[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:45:53.358556 systemd-logind[1437]: New session 19 of user core. Jun 25 18:45:53.368241 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 25 18:45:53.486445 sshd[4123]: pam_unix(sshd:session): session closed for user core Jun 25 18:45:53.498087 systemd[1]: sshd@18-10.0.0.136:22-10.0.0.1:42204.service: Deactivated successfully. Jun 25 18:45:53.499930 systemd[1]: session-19.scope: Deactivated successfully. Jun 25 18:45:53.501744 systemd-logind[1437]: Session 19 logged out. Waiting for processes to exit. 
Jun 25 18:45:53.503240 systemd[1]: Started sshd@19-10.0.0.136:22-10.0.0.1:42212.service - OpenSSH per-connection server daemon (10.0.0.1:42212). Jun 25 18:45:53.504025 systemd-logind[1437]: Removed session 19. Jun 25 18:45:53.554982 sshd[4138]: Accepted publickey for core from 10.0.0.1 port 42212 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:45:53.556633 sshd[4138]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:45:53.560695 systemd-logind[1437]: New session 20 of user core. Jun 25 18:45:53.567148 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 25 18:45:53.853265 sshd[4138]: pam_unix(sshd:session): session closed for user core Jun 25 18:45:53.867351 systemd[1]: sshd@19-10.0.0.136:22-10.0.0.1:42212.service: Deactivated successfully. Jun 25 18:45:53.869447 systemd[1]: session-20.scope: Deactivated successfully. Jun 25 18:45:53.871193 systemd-logind[1437]: Session 20 logged out. Waiting for processes to exit. Jun 25 18:45:53.872530 systemd[1]: Started sshd@20-10.0.0.136:22-10.0.0.1:42214.service - OpenSSH per-connection server daemon (10.0.0.1:42214). Jun 25 18:45:53.873408 systemd-logind[1437]: Removed session 20. Jun 25 18:45:53.909339 sshd[4150]: Accepted publickey for core from 10.0.0.1 port 42214 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:45:53.911079 sshd[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:45:53.916142 systemd-logind[1437]: New session 21 of user core. Jun 25 18:45:53.923202 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 25 18:45:56.365342 sshd[4150]: pam_unix(sshd:session): session closed for user core Jun 25 18:45:56.379239 systemd[1]: sshd@20-10.0.0.136:22-10.0.0.1:42214.service: Deactivated successfully. Jun 25 18:45:56.381273 systemd[1]: session-21.scope: Deactivated successfully. Jun 25 18:45:56.383151 systemd-logind[1437]: Session 21 logged out. Waiting for processes to exit. Jun 25 18:45:56.384615 systemd[1]: Started sshd@21-10.0.0.136:22-10.0.0.1:38382.service - OpenSSH per-connection server daemon (10.0.0.1:38382). Jun 25 18:45:56.385478 systemd-logind[1437]: Removed session 21. Jun 25 18:45:56.416449 sshd[4172]: Accepted publickey for core from 10.0.0.1 port 38382 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:45:56.418235 sshd[4172]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:45:56.422666 systemd-logind[1437]: New session 22 of user core. Jun 25 18:45:56.429159 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 25 18:45:56.711994 sshd[4172]: pam_unix(sshd:session): session closed for user core Jun 25 18:45:56.722445 systemd[1]: sshd@21-10.0.0.136:22-10.0.0.1:38382.service: Deactivated successfully. Jun 25 18:45:56.724767 systemd[1]: session-22.scope: Deactivated successfully. Jun 25 18:45:56.726631 systemd-logind[1437]: Session 22 logged out. Waiting for processes to exit. Jun 25 18:45:56.738452 systemd[1]: Started sshd@22-10.0.0.136:22-10.0.0.1:38388.service - OpenSSH per-connection server daemon (10.0.0.1:38388). Jun 25 18:45:56.739494 systemd-logind[1437]: Removed session 22. Jun 25 18:45:56.765723 sshd[4185]: Accepted publickey for core from 10.0.0.1 port 38388 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:45:56.767472 sshd[4185]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:45:56.771958 systemd-logind[1437]: New session 23 of user core. 
Jun 25 18:45:56.783180 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 25 18:45:56.917306 sshd[4185]: pam_unix(sshd:session): session closed for user core Jun 25 18:45:56.922142 systemd[1]: sshd@22-10.0.0.136:22-10.0.0.1:38388.service: Deactivated successfully. Jun 25 18:45:56.925131 systemd[1]: session-23.scope: Deactivated successfully. Jun 25 18:45:56.926163 systemd-logind[1437]: Session 23 logged out. Waiting for processes to exit. Jun 25 18:45:56.927276 systemd-logind[1437]: Removed session 23. Jun 25 18:46:01.936000 systemd[1]: Started sshd@23-10.0.0.136:22-10.0.0.1:38390.service - OpenSSH per-connection server daemon (10.0.0.1:38390). Jun 25 18:46:01.968161 sshd[4199]: Accepted publickey for core from 10.0.0.1 port 38390 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:46:01.970106 sshd[4199]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:46:01.975353 systemd-logind[1437]: New session 24 of user core. Jun 25 18:46:01.984210 systemd[1]: Started session-24.scope - Session 24 of User core. Jun 25 18:46:02.108879 sshd[4199]: pam_unix(sshd:session): session closed for user core Jun 25 18:46:02.114251 systemd[1]: sshd@23-10.0.0.136:22-10.0.0.1:38390.service: Deactivated successfully. Jun 25 18:46:02.116740 systemd[1]: session-24.scope: Deactivated successfully. Jun 25 18:46:02.117582 systemd-logind[1437]: Session 24 logged out. Waiting for processes to exit. Jun 25 18:46:02.118542 systemd-logind[1437]: Removed session 24. Jun 25 18:46:02.607876 kubelet[2586]: E0625 18:46:02.607812 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:46:07.121240 systemd[1]: Started sshd@24-10.0.0.136:22-10.0.0.1:49268.service - OpenSSH per-connection server daemon (10.0.0.1:49268). Jun 25 18:46:07.152352 sshd[4218]: Accepted publickey for core from 10.0.0.1 port 49268 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:46:07.153933 sshd[4218]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:46:07.157992 systemd-logind[1437]: New session 25 of user core. Jun 25 18:46:07.170159 systemd[1]: Started session-25.scope - Session 25 of User core. Jun 25 18:46:07.285985 sshd[4218]: pam_unix(sshd:session): session closed for user core Jun 25 18:46:07.290334 systemd[1]: sshd@24-10.0.0.136:22-10.0.0.1:49268.service: Deactivated successfully. Jun 25 18:46:07.292508 systemd[1]: session-25.scope: Deactivated successfully. Jun 25 18:46:07.293200 systemd-logind[1437]: Session 25 logged out. Waiting for processes to exit. Jun 25 18:46:07.294185 systemd-logind[1437]: Removed session 25. Jun 25 18:46:12.299129 systemd[1]: Started sshd@25-10.0.0.136:22-10.0.0.1:49300.service - OpenSSH per-connection server daemon (10.0.0.1:49300). Jun 25 18:46:12.332920 sshd[4232]: Accepted publickey for core from 10.0.0.1 port 49300 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:46:12.334834 sshd[4232]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:46:12.339458 systemd-logind[1437]: New session 26 of user core. Jun 25 18:46:12.349175 systemd[1]: Started session-26.scope - Session 26 of User core. Jun 25 18:46:12.472148 sshd[4232]: pam_unix(sshd:session): session closed for user core Jun 25 18:46:12.476763 systemd[1]: sshd@25-10.0.0.136:22-10.0.0.1:49300.service: Deactivated successfully. 
Jun 25 18:46:12.478947 systemd[1]: session-26.scope: Deactivated successfully. Jun 25 18:46:12.479859 systemd-logind[1437]: Session 26 logged out. Waiting for processes to exit. Jun 25 18:46:12.481141 systemd-logind[1437]: Removed session 26. Jun 25 18:46:13.608459 kubelet[2586]: E0625 18:46:13.608388 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:46:17.487210 systemd[1]: Started sshd@26-10.0.0.136:22-10.0.0.1:47474.service - OpenSSH per-connection server daemon (10.0.0.1:47474). Jun 25 18:46:17.518673 sshd[4247]: Accepted publickey for core from 10.0.0.1 port 47474 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:46:17.520605 sshd[4247]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:46:17.524831 systemd-logind[1437]: New session 27 of user core. Jun 25 18:46:17.542283 systemd[1]: Started session-27.scope - Session 27 of User core. Jun 25 18:46:17.607804 kubelet[2586]: E0625 18:46:17.607717 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:46:17.667239 sshd[4247]: pam_unix(sshd:session): session closed for user core Jun 25 18:46:17.677440 systemd[1]: sshd@26-10.0.0.136:22-10.0.0.1:47474.service: Deactivated successfully. Jun 25 18:46:17.679429 systemd[1]: session-27.scope: Deactivated successfully. Jun 25 18:46:17.681149 systemd-logind[1437]: Session 27 logged out. Waiting for processes to exit. Jun 25 18:46:17.687354 systemd[1]: Started sshd@27-10.0.0.136:22-10.0.0.1:47476.service - OpenSSH per-connection server daemon (10.0.0.1:47476). Jun 25 18:46:17.688473 systemd-logind[1437]: Removed session 27. Jun 25 18:46:17.720176 sshd[4261]: Accepted publickey for core from 10.0.0.1 port 47476 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:46:17.721917 sshd[4261]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:46:17.726149 systemd-logind[1437]: New session 28 of user core. Jun 25 18:46:17.733213 systemd[1]: Started session-28.scope - Session 28 of User core. Jun 25 18:46:19.225649 containerd[1450]: time="2024-06-25T18:46:19.225593136Z" level=info msg="StopContainer for \"6984cc80091dc2befd183b4043f8db6447ae25c043ea00ed4863d1d44da4ae8b\" with timeout 30 (s)" Jun 25 18:46:19.237379 containerd[1450]: time="2024-06-25T18:46:19.236865503Z" level=info msg="Stop container \"6984cc80091dc2befd183b4043f8db6447ae25c043ea00ed4863d1d44da4ae8b\" with signal terminated" Jun 25 18:46:19.252654 systemd[1]: cri-containerd-6984cc80091dc2befd183b4043f8db6447ae25c043ea00ed4863d1d44da4ae8b.scope: Deactivated successfully. 
Jun 25 18:46:19.265967 containerd[1450]: time="2024-06-25T18:46:19.265919328Z" level=info msg="StopContainer for \"497ea189c5797a7bce0603d876e94c9264a88e7c94461864790401e0c8975988\" with timeout 2 (s)" Jun 25 18:46:19.267100 containerd[1450]: time="2024-06-25T18:46:19.267059596Z" level=info msg="Stop container \"497ea189c5797a7bce0603d876e94c9264a88e7c94461864790401e0c8975988\" with signal terminated" Jun 25 18:46:19.268073 containerd[1450]: time="2024-06-25T18:46:19.268028469Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 18:46:19.276138 systemd-networkd[1389]: lxc_health: Link DOWN Jun 25 18:46:19.276150 systemd-networkd[1389]: lxc_health: Lost carrier Jun 25 18:46:19.279466 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6984cc80091dc2befd183b4043f8db6447ae25c043ea00ed4863d1d44da4ae8b-rootfs.mount: Deactivated successfully. Jun 25 18:46:19.293727 containerd[1450]: time="2024-06-25T18:46:19.293654165Z" level=info msg="shim disconnected" id=6984cc80091dc2befd183b4043f8db6447ae25c043ea00ed4863d1d44da4ae8b namespace=k8s.io Jun 25 18:46:19.293727 containerd[1450]: time="2024-06-25T18:46:19.293720621Z" level=warning msg="cleaning up after shim disconnected" id=6984cc80091dc2befd183b4043f8db6447ae25c043ea00ed4863d1d44da4ae8b namespace=k8s.io Jun 25 18:46:19.293727 containerd[1450]: time="2024-06-25T18:46:19.293732353Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:46:19.300231 systemd[1]: cri-containerd-497ea189c5797a7bce0603d876e94c9264a88e7c94461864790401e0c8975988.scope: Deactivated successfully. Jun 25 18:46:19.300607 systemd[1]: cri-containerd-497ea189c5797a7bce0603d876e94c9264a88e7c94461864790401e0c8975988.scope: Consumed 7.808s CPU time. Jun 25 18:46:19.324202 containerd[1450]: time="2024-06-25T18:46:19.324011897Z" level=info msg="StopContainer for \"6984cc80091dc2befd183b4043f8db6447ae25c043ea00ed4863d1d44da4ae8b\" returns successfully" Jun 25 18:46:19.326452 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-497ea189c5797a7bce0603d876e94c9264a88e7c94461864790401e0c8975988-rootfs.mount: Deactivated successfully. Jun 25 18:46:19.329382 containerd[1450]: time="2024-06-25T18:46:19.329336826Z" level=info msg="StopPodSandbox for \"397a28be68c1dd8085e46c2fd247fba7a1fe3a0241be75a5744a35a48e38d94a\"" Jun 25 18:46:19.331721 containerd[1450]: time="2024-06-25T18:46:19.329401289Z" level=info msg="Container to stop \"6984cc80091dc2befd183b4043f8db6447ae25c043ea00ed4863d1d44da4ae8b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:46:19.333534 containerd[1450]: time="2024-06-25T18:46:19.332874784Z" level=info msg="shim disconnected" id=497ea189c5797a7bce0603d876e94c9264a88e7c94461864790401e0c8975988 namespace=k8s.io Jun 25 18:46:19.333534 containerd[1450]: time="2024-06-25T18:46:19.333454561Z" level=warning msg="cleaning up after shim disconnected" id=497ea189c5797a7bce0603d876e94c9264a88e7c94461864790401e0c8975988 namespace=k8s.io Jun 25 18:46:19.333534 containerd[1450]: time="2024-06-25T18:46:19.333464590Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:46:19.333955 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-397a28be68c1dd8085e46c2fd247fba7a1fe3a0241be75a5744a35a48e38d94a-shm.mount: Deactivated successfully. 
Jun 25 18:46:19.341410 systemd[1]: cri-containerd-397a28be68c1dd8085e46c2fd247fba7a1fe3a0241be75a5744a35a48e38d94a.scope: Deactivated successfully. Jun 25 18:46:19.357166 containerd[1450]: time="2024-06-25T18:46:19.357115327Z" level=info msg="StopContainer for \"497ea189c5797a7bce0603d876e94c9264a88e7c94461864790401e0c8975988\" returns successfully" Jun 25 18:46:19.357618 containerd[1450]: time="2024-06-25T18:46:19.357584886Z" level=info msg="StopPodSandbox for \"b072a4a3d734e637683c083c26c7da25eda1a2913f7b7bd50faae50508375b56\"" Jun 25 18:46:19.357760 containerd[1450]: time="2024-06-25T18:46:19.357619241Z" level=info msg="Container to stop \"dd8d4f06c77c4121f752901572f2a372a1f70b1a92fccda2c7215c8310cdf04f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:46:19.357760 containerd[1450]: time="2024-06-25T18:46:19.357656662Z" level=info msg="Container to stop \"a65f0fb350cee854194f34234d48d95d3aafc504a6f0a256e447388228d443f5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:46:19.357760 containerd[1450]: time="2024-06-25T18:46:19.357671901Z" level=info msg="Container to stop \"497ea189c5797a7bce0603d876e94c9264a88e7c94461864790401e0c8975988\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:46:19.357760 containerd[1450]: time="2024-06-25T18:46:19.357709242Z" level=info msg="Container to stop \"f87821239e24df2fca636bb193340db999e9522705de648d56ab044d541ed586\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:46:19.357760 containerd[1450]: time="2024-06-25T18:46:19.357719781Z" level=info msg="Container to stop \"a9de5d29218988bb881bf5d9082d82015a45dd2924464439f8b75461139c8694\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:46:19.360650 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b072a4a3d734e637683c083c26c7da25eda1a2913f7b7bd50faae50508375b56-shm.mount: Deactivated successfully. Jun 25 18:46:19.365735 systemd[1]: cri-containerd-b072a4a3d734e637683c083c26c7da25eda1a2913f7b7bd50faae50508375b56.scope: Deactivated successfully. 
Jun 25 18:46:19.390916 containerd[1450]: time="2024-06-25T18:46:19.390706581Z" level=info msg="shim disconnected" id=397a28be68c1dd8085e46c2fd247fba7a1fe3a0241be75a5744a35a48e38d94a namespace=k8s.io Jun 25 18:46:19.390916 containerd[1450]: time="2024-06-25T18:46:19.390762747Z" level=warning msg="cleaning up after shim disconnected" id=397a28be68c1dd8085e46c2fd247fba7a1fe3a0241be75a5744a35a48e38d94a namespace=k8s.io Jun 25 18:46:19.390916 containerd[1450]: time="2024-06-25T18:46:19.390771674Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:46:19.399767 containerd[1450]: time="2024-06-25T18:46:19.399625683Z" level=info msg="shim disconnected" id=b072a4a3d734e637683c083c26c7da25eda1a2913f7b7bd50faae50508375b56 namespace=k8s.io Jun 25 18:46:19.399767 containerd[1450]: time="2024-06-25T18:46:19.399698751Z" level=warning msg="cleaning up after shim disconnected" id=b072a4a3d734e637683c083c26c7da25eda1a2913f7b7bd50faae50508375b56 namespace=k8s.io Jun 25 18:46:19.399767 containerd[1450]: time="2024-06-25T18:46:19.399707186Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:46:19.408888 containerd[1450]: time="2024-06-25T18:46:19.408814906Z" level=info msg="TearDown network for sandbox \"397a28be68c1dd8085e46c2fd247fba7a1fe3a0241be75a5744a35a48e38d94a\" successfully" Jun 25 18:46:19.408888 containerd[1450]: time="2024-06-25T18:46:19.408863769Z" level=info msg="StopPodSandbox for \"397a28be68c1dd8085e46c2fd247fba7a1fe3a0241be75a5744a35a48e38d94a\" returns successfully" Jun 25 18:46:19.429366 containerd[1450]: time="2024-06-25T18:46:19.429301855Z" level=info msg="TearDown network for sandbox \"b072a4a3d734e637683c083c26c7da25eda1a2913f7b7bd50faae50508375b56\" successfully" Jun 25 18:46:19.429366 containerd[1450]: time="2024-06-25T18:46:19.429355457Z" level=info msg="StopPodSandbox for \"b072a4a3d734e637683c083c26c7da25eda1a2913f7b7bd50faae50508375b56\" returns successfully" Jun 25 18:46:19.452201 kubelet[2586]: I0625 18:46:19.452128 2586 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/863cd744-5efa-4cd0-b61f-1b931f4a7b18-lib-modules\") pod \"863cd744-5efa-4cd0-b61f-1b931f4a7b18\" (UID: \"863cd744-5efa-4cd0-b61f-1b931f4a7b18\") " Jun 25 18:46:19.452201 kubelet[2586]: I0625 18:46:19.452201 2586 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/863cd744-5efa-4cd0-b61f-1b931f4a7b18-xtables-lock\") pod \"863cd744-5efa-4cd0-b61f-1b931f4a7b18\" (UID: \"863cd744-5efa-4cd0-b61f-1b931f4a7b18\") " Jun 25 18:46:19.452771 kubelet[2586]: I0625 18:46:19.452226 2586 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/863cd744-5efa-4cd0-b61f-1b931f4a7b18-cilium-run\") pod \"863cd744-5efa-4cd0-b61f-1b931f4a7b18\" (UID: \"863cd744-5efa-4cd0-b61f-1b931f4a7b18\") " Jun 25 18:46:19.452771 kubelet[2586]: I0625 18:46:19.452255 2586 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/863cd744-5efa-4cd0-b61f-1b931f4a7b18-host-proc-sys-net\") pod \"863cd744-5efa-4cd0-b61f-1b931f4a7b18\" (UID: \"863cd744-5efa-4cd0-b61f-1b931f4a7b18\") " Jun 25 18:46:19.452771 kubelet[2586]: I0625 18:46:19.452280 2586 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/863cd744-5efa-4cd0-b61f-1b931f4a7b18-hostproc\") pod \"863cd744-5efa-4cd0-b61f-1b931f4a7b18\" (UID: \"863cd744-5efa-4cd0-b61f-1b931f4a7b18\") " Jun 25 18:46:19.452771 kubelet[2586]: I0625 18:46:19.452282 2586 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/863cd744-5efa-4cd0-b61f-1b931f4a7b18-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "863cd744-5efa-4cd0-b61f-1b931f4a7b18" (UID: "863cd744-5efa-4cd0-b61f-1b931f4a7b18"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:46:19.452771 kubelet[2586]: I0625 18:46:19.452318 2586 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/626e6a75-7799-48fc-9926-fcfa1f22c4de-cilium-config-path\") pod \"626e6a75-7799-48fc-9926-fcfa1f22c4de\" (UID: \"626e6a75-7799-48fc-9926-fcfa1f22c4de\") " Jun 25 18:46:19.452771 kubelet[2586]: I0625 18:46:19.452343 2586 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/863cd744-5efa-4cd0-b61f-1b931f4a7b18-host-proc-sys-kernel\") pod \"863cd744-5efa-4cd0-b61f-1b931f4a7b18\" (UID: \"863cd744-5efa-4cd0-b61f-1b931f4a7b18\") " Jun 25 18:46:19.452948 kubelet[2586]: I0625 18:46:19.452349 2586 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/863cd744-5efa-4cd0-b61f-1b931f4a7b18-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "863cd744-5efa-4cd0-b61f-1b931f4a7b18" (UID: "863cd744-5efa-4cd0-b61f-1b931f4a7b18"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:46:19.452948 kubelet[2586]: I0625 18:46:19.452369 2586 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/863cd744-5efa-4cd0-b61f-1b931f4a7b18-bpf-maps\") pod \"863cd744-5efa-4cd0-b61f-1b931f4a7b18\" (UID: \"863cd744-5efa-4cd0-b61f-1b931f4a7b18\") " Jun 25 18:46:19.452948 kubelet[2586]: I0625 18:46:19.452384 2586 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/863cd744-5efa-4cd0-b61f-1b931f4a7b18-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "863cd744-5efa-4cd0-b61f-1b931f4a7b18" (UID: "863cd744-5efa-4cd0-b61f-1b931f4a7b18"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:46:19.452948 kubelet[2586]: I0625 18:46:19.452393 2586 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/863cd744-5efa-4cd0-b61f-1b931f4a7b18-etc-cni-netd\") pod \"863cd744-5efa-4cd0-b61f-1b931f4a7b18\" (UID: \"863cd744-5efa-4cd0-b61f-1b931f4a7b18\") " Jun 25 18:46:19.452948 kubelet[2586]: I0625 18:46:19.452411 2586 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/863cd744-5efa-4cd0-b61f-1b931f4a7b18-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "863cd744-5efa-4cd0-b61f-1b931f4a7b18" (UID: "863cd744-5efa-4cd0-b61f-1b931f4a7b18"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:46:19.453137 kubelet[2586]: I0625 18:46:19.452423 2586 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/863cd744-5efa-4cd0-b61f-1b931f4a7b18-clustermesh-secrets\") pod \"863cd744-5efa-4cd0-b61f-1b931f4a7b18\" (UID: \"863cd744-5efa-4cd0-b61f-1b931f4a7b18\") " Jun 25 18:46:19.453137 kubelet[2586]: I0625 18:46:19.452432 2586 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/863cd744-5efa-4cd0-b61f-1b931f4a7b18-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "863cd744-5efa-4cd0-b61f-1b931f4a7b18" (UID: "863cd744-5efa-4cd0-b61f-1b931f4a7b18"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:46:19.453137 kubelet[2586]: I0625 18:46:19.452451 2586 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sl958\" (UniqueName: \"kubernetes.io/projected/863cd744-5efa-4cd0-b61f-1b931f4a7b18-kube-api-access-sl958\") pod \"863cd744-5efa-4cd0-b61f-1b931f4a7b18\" (UID: \"863cd744-5efa-4cd0-b61f-1b931f4a7b18\") " Jun 25 18:46:19.453137 kubelet[2586]: I0625 18:46:19.452456 2586 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/863cd744-5efa-4cd0-b61f-1b931f4a7b18-hostproc" (OuterVolumeSpecName: "hostproc") pod "863cd744-5efa-4cd0-b61f-1b931f4a7b18" (UID: "863cd744-5efa-4cd0-b61f-1b931f4a7b18"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:46:19.453137 kubelet[2586]: I0625 18:46:19.452476 2586 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/863cd744-5efa-4cd0-b61f-1b931f4a7b18-cilium-cgroup\") pod \"863cd744-5efa-4cd0-b61f-1b931f4a7b18\" (UID: \"863cd744-5efa-4cd0-b61f-1b931f4a7b18\") " Jun 25 18:46:19.453289 kubelet[2586]: I0625 18:46:19.452500 2586 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/863cd744-5efa-4cd0-b61f-1b931f4a7b18-cni-path\") pod \"863cd744-5efa-4cd0-b61f-1b931f4a7b18\" (UID: \"863cd744-5efa-4cd0-b61f-1b931f4a7b18\") " Jun 25 18:46:19.453289 kubelet[2586]: I0625 18:46:19.452526 2586 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/863cd744-5efa-4cd0-b61f-1b931f4a7b18-cilium-config-path\") pod \"863cd744-5efa-4cd0-b61f-1b931f4a7b18\" (UID: \"863cd744-5efa-4cd0-b61f-1b931f4a7b18\") " Jun 25 18:46:19.453289 kubelet[2586]: I0625 18:46:19.452552 2586 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5r5b\" (UniqueName: \"kubernetes.io/projected/626e6a75-7799-48fc-9926-fcfa1f22c4de-kube-api-access-z5r5b\") pod \"626e6a75-7799-48fc-9926-fcfa1f22c4de\" (UID: \"626e6a75-7799-48fc-9926-fcfa1f22c4de\") " Jun 25 18:46:19.453289 kubelet[2586]: I0625 18:46:19.452576 2586 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/863cd744-5efa-4cd0-b61f-1b931f4a7b18-hubble-tls\") pod \"863cd744-5efa-4cd0-b61f-1b931f4a7b18\" (UID: \"863cd744-5efa-4cd0-b61f-1b931f4a7b18\") " Jun 25 18:46:19.453289 kubelet[2586]: I0625 18:46:19.452612 2586 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/863cd744-5efa-4cd0-b61f-1b931f4a7b18-lib-modules\") on node \"localhost\" DevicePath \"\"" Jun 25 18:46:19.453289 kubelet[2586]: I0625 18:46:19.452628 2586 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/863cd744-5efa-4cd0-b61f-1b931f4a7b18-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jun 25 18:46:19.453289 kubelet[2586]: I0625 18:46:19.452641 2586 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/863cd744-5efa-4cd0-b61f-1b931f4a7b18-cilium-run\") on node \"localhost\" DevicePath \"\"" Jun 25 18:46:19.453511 kubelet[2586]: I0625 18:46:19.452655 2586 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/863cd744-5efa-4cd0-b61f-1b931f4a7b18-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jun 25 18:46:19.453511 kubelet[2586]: I0625 18:46:19.452668 2586 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/863cd744-5efa-4cd0-b61f-1b931f4a7b18-hostproc\") on node \"localhost\" DevicePath \"\"" Jun 25 18:46:19.453511 kubelet[2586]: I0625 18:46:19.452683 2586 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/863cd744-5efa-4cd0-b61f-1b931f4a7b18-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jun 25 18:46:19.455949 kubelet[2586]: I0625 18:46:19.455908 2586 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/863cd744-5efa-4cd0-b61f-1b931f4a7b18-cni-path" (OuterVolumeSpecName: "cni-path") pod "863cd744-5efa-4cd0-b61f-1b931f4a7b18" (UID: "863cd744-5efa-4cd0-b61f-1b931f4a7b18"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:46:19.455949 kubelet[2586]: I0625 18:46:19.455948 2586 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/863cd744-5efa-4cd0-b61f-1b931f4a7b18-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "863cd744-5efa-4cd0-b61f-1b931f4a7b18" (UID: "863cd744-5efa-4cd0-b61f-1b931f4a7b18"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:46:19.456982 kubelet[2586]: I0625 18:46:19.456964 2586 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/863cd744-5efa-4cd0-b61f-1b931f4a7b18-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "863cd744-5efa-4cd0-b61f-1b931f4a7b18" (UID: "863cd744-5efa-4cd0-b61f-1b931f4a7b18"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:46:19.457327 kubelet[2586]: I0625 18:46:19.457065 2586 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/863cd744-5efa-4cd0-b61f-1b931f4a7b18-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "863cd744-5efa-4cd0-b61f-1b931f4a7b18" (UID: "863cd744-5efa-4cd0-b61f-1b931f4a7b18"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:46:19.457384 kubelet[2586]: I0625 18:46:19.457337 2586 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/863cd744-5efa-4cd0-b61f-1b931f4a7b18-kube-api-access-sl958" (OuterVolumeSpecName: "kube-api-access-sl958") pod "863cd744-5efa-4cd0-b61f-1b931f4a7b18" (UID: "863cd744-5efa-4cd0-b61f-1b931f4a7b18"). 
InnerVolumeSpecName "kube-api-access-sl958". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 25 18:46:19.457505 kubelet[2586]: I0625 18:46:19.457478 2586 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/863cd744-5efa-4cd0-b61f-1b931f4a7b18-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "863cd744-5efa-4cd0-b61f-1b931f4a7b18" (UID: "863cd744-5efa-4cd0-b61f-1b931f4a7b18"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 25 18:46:19.459681 kubelet[2586]: I0625 18:46:19.459650 2586 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/626e6a75-7799-48fc-9926-fcfa1f22c4de-kube-api-access-z5r5b" (OuterVolumeSpecName: "kube-api-access-z5r5b") pod "626e6a75-7799-48fc-9926-fcfa1f22c4de" (UID: "626e6a75-7799-48fc-9926-fcfa1f22c4de"). InnerVolumeSpecName "kube-api-access-z5r5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 25 18:46:19.459952 kubelet[2586]: I0625 18:46:19.459931 2586 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/863cd744-5efa-4cd0-b61f-1b931f4a7b18-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "863cd744-5efa-4cd0-b61f-1b931f4a7b18" (UID: "863cd744-5efa-4cd0-b61f-1b931f4a7b18"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 25 18:46:19.460123 kubelet[2586]: I0625 18:46:19.460098 2586 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/626e6a75-7799-48fc-9926-fcfa1f22c4de-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "626e6a75-7799-48fc-9926-fcfa1f22c4de" (UID: "626e6a75-7799-48fc-9926-fcfa1f22c4de"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 25 18:46:19.460698 kubelet[2586]: I0625 18:46:19.460667 2586 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/863cd744-5efa-4cd0-b61f-1b931f4a7b18-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "863cd744-5efa-4cd0-b61f-1b931f4a7b18" (UID: "863cd744-5efa-4cd0-b61f-1b931f4a7b18"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 25 18:46:19.553434 kubelet[2586]: I0625 18:46:19.553287 2586 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/863cd744-5efa-4cd0-b61f-1b931f4a7b18-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jun 25 18:46:19.553434 kubelet[2586]: I0625 18:46:19.553331 2586 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/863cd744-5efa-4cd0-b61f-1b931f4a7b18-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jun 25 18:46:19.553434 kubelet[2586]: I0625 18:46:19.553348 2586 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/863cd744-5efa-4cd0-b61f-1b931f4a7b18-cni-path\") on node \"localhost\" DevicePath \"\"" Jun 25 18:46:19.553434 kubelet[2586]: I0625 18:46:19.553364 2586 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/863cd744-5efa-4cd0-b61f-1b931f4a7b18-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jun 25 18:46:19.553434 kubelet[2586]: I0625 18:46:19.553374 2586 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/863cd744-5efa-4cd0-b61f-1b931f4a7b18-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jun 25 18:46:19.553434 kubelet[2586]: I0625 18:46:19.553387 2586 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-sl958\" (UniqueName: \"kubernetes.io/projected/863cd744-5efa-4cd0-b61f-1b931f4a7b18-kube-api-access-sl958\") on node \"localhost\" DevicePath \"\"" Jun 25 18:46:19.553434 kubelet[2586]: I0625 18:46:19.553399 2586 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/863cd744-5efa-4cd0-b61f-1b931f4a7b18-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jun 25 18:46:19.553434 kubelet[2586]: I0625 18:46:19.553413 2586 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-z5r5b\" (UniqueName: \"kubernetes.io/projected/626e6a75-7799-48fc-9926-fcfa1f22c4de-kube-api-access-z5r5b\") on node \"localhost\" DevicePath \"\"" Jun 25 18:46:19.553795 kubelet[2586]: I0625 18:46:19.553426 2586 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/863cd744-5efa-4cd0-b61f-1b931f4a7b18-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jun 25 18:46:19.553795 kubelet[2586]: I0625 18:46:19.553442 2586 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/626e6a75-7799-48fc-9926-fcfa1f22c4de-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jun 25 18:46:19.876628 kubelet[2586]: I0625 18:46:19.876506 2586 scope.go:117] "RemoveContainer" containerID="6984cc80091dc2befd183b4043f8db6447ae25c043ea00ed4863d1d44da4ae8b" Jun 25 18:46:19.880992 containerd[1450]: time="2024-06-25T18:46:19.880448936Z" level=info msg="RemoveContainer for \"6984cc80091dc2befd183b4043f8db6447ae25c043ea00ed4863d1d44da4ae8b\"" Jun 25 18:46:19.884870 systemd[1]: Removed slice kubepods-besteffort-pod626e6a75_7799_48fc_9926_fcfa1f22c4de.slice - libcontainer container kubepods-besteffort-pod626e6a75_7799_48fc_9926_fcfa1f22c4de.slice. 
Jun 25 18:46:19.888959 containerd[1450]: time="2024-06-25T18:46:19.888898470Z" level=info msg="RemoveContainer for \"6984cc80091dc2befd183b4043f8db6447ae25c043ea00ed4863d1d44da4ae8b\" returns successfully" Jun 25 18:46:19.889675 kubelet[2586]: I0625 18:46:19.889640 2586 scope.go:117] "RemoveContainer" containerID="6984cc80091dc2befd183b4043f8db6447ae25c043ea00ed4863d1d44da4ae8b" Jun 25 18:46:19.889962 containerd[1450]: time="2024-06-25T18:46:19.889905456Z" level=error msg="ContainerStatus for \"6984cc80091dc2befd183b4043f8db6447ae25c043ea00ed4863d1d44da4ae8b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6984cc80091dc2befd183b4043f8db6447ae25c043ea00ed4863d1d44da4ae8b\": not found" Jun 25 18:46:19.891547 systemd[1]: Removed slice kubepods-burstable-pod863cd744_5efa_4cd0_b61f_1b931f4a7b18.slice - libcontainer container kubepods-burstable-pod863cd744_5efa_4cd0_b61f_1b931f4a7b18.slice. Jun 25 18:46:19.891945 systemd[1]: kubepods-burstable-pod863cd744_5efa_4cd0_b61f_1b931f4a7b18.slice: Consumed 7.920s CPU time. Jun 25 18:46:19.897311 kubelet[2586]: E0625 18:46:19.897277 2586 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6984cc80091dc2befd183b4043f8db6447ae25c043ea00ed4863d1d44da4ae8b\": not found" containerID="6984cc80091dc2befd183b4043f8db6447ae25c043ea00ed4863d1d44da4ae8b" Jun 25 18:46:19.897406 kubelet[2586]: I0625 18:46:19.897371 2586 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6984cc80091dc2befd183b4043f8db6447ae25c043ea00ed4863d1d44da4ae8b"} err="failed to get container status \"6984cc80091dc2befd183b4043f8db6447ae25c043ea00ed4863d1d44da4ae8b\": rpc error: code = NotFound desc = an error occurred when try to find container \"6984cc80091dc2befd183b4043f8db6447ae25c043ea00ed4863d1d44da4ae8b\": not found" Jun 25 18:46:19.897406 kubelet[2586]: I0625 18:46:19.897386 2586 scope.go:117] "RemoveContainer" containerID="497ea189c5797a7bce0603d876e94c9264a88e7c94461864790401e0c8975988" Jun 25 18:46:19.898690 containerd[1450]: time="2024-06-25T18:46:19.898654326Z" level=info msg="RemoveContainer for \"497ea189c5797a7bce0603d876e94c9264a88e7c94461864790401e0c8975988\"" Jun 25 18:46:19.906834 containerd[1450]: time="2024-06-25T18:46:19.906749499Z" level=info msg="RemoveContainer for \"497ea189c5797a7bce0603d876e94c9264a88e7c94461864790401e0c8975988\" returns successfully" Jun 25 18:46:19.907154 kubelet[2586]: I0625 18:46:19.907048 2586 scope.go:117] "RemoveContainer" containerID="a65f0fb350cee854194f34234d48d95d3aafc504a6f0a256e447388228d443f5" Jun 25 18:46:19.908493 containerd[1450]: time="2024-06-25T18:46:19.908439708Z" level=info msg="RemoveContainer for \"a65f0fb350cee854194f34234d48d95d3aafc504a6f0a256e447388228d443f5\"" Jun 25 18:46:19.913029 containerd[1450]: time="2024-06-25T18:46:19.912957139Z" level=info msg="RemoveContainer for \"a65f0fb350cee854194f34234d48d95d3aafc504a6f0a256e447388228d443f5\" returns successfully" Jun 25 18:46:19.913293 kubelet[2586]: I0625 18:46:19.913198 2586 scope.go:117] "RemoveContainer" containerID="a9de5d29218988bb881bf5d9082d82015a45dd2924464439f8b75461139c8694" Jun 25 18:46:19.914706 containerd[1450]: time="2024-06-25T18:46:19.914654051Z" level=info msg="RemoveContainer for \"a9de5d29218988bb881bf5d9082d82015a45dd2924464439f8b75461139c8694\"" Jun 25 18:46:19.918557 containerd[1450]: time="2024-06-25T18:46:19.918531801Z" level=info 
msg="RemoveContainer for \"a9de5d29218988bb881bf5d9082d82015a45dd2924464439f8b75461139c8694\" returns successfully" Jun 25 18:46:19.918746 kubelet[2586]: I0625 18:46:19.918713 2586 scope.go:117] "RemoveContainer" containerID="dd8d4f06c77c4121f752901572f2a372a1f70b1a92fccda2c7215c8310cdf04f" Jun 25 18:46:19.919626 containerd[1450]: time="2024-06-25T18:46:19.919602498Z" level=info msg="RemoveContainer for \"dd8d4f06c77c4121f752901572f2a372a1f70b1a92fccda2c7215c8310cdf04f\"" Jun 25 18:46:19.922811 containerd[1450]: time="2024-06-25T18:46:19.922768170Z" level=info msg="RemoveContainer for \"dd8d4f06c77c4121f752901572f2a372a1f70b1a92fccda2c7215c8310cdf04f\" returns successfully" Jun 25 18:46:19.922970 kubelet[2586]: I0625 18:46:19.922942 2586 scope.go:117] "RemoveContainer" containerID="f87821239e24df2fca636bb193340db999e9522705de648d56ab044d541ed586" Jun 25 18:46:19.923980 containerd[1450]: time="2024-06-25T18:46:19.923767602Z" level=info msg="RemoveContainer for \"f87821239e24df2fca636bb193340db999e9522705de648d56ab044d541ed586\"" Jun 25 18:46:19.933852 containerd[1450]: time="2024-06-25T18:46:19.933794211Z" level=info msg="RemoveContainer for \"f87821239e24df2fca636bb193340db999e9522705de648d56ab044d541ed586\" returns successfully" Jun 25 18:46:19.934117 kubelet[2586]: I0625 18:46:19.934084 2586 scope.go:117] "RemoveContainer" containerID="497ea189c5797a7bce0603d876e94c9264a88e7c94461864790401e0c8975988" Jun 25 18:46:19.934326 containerd[1450]: time="2024-06-25T18:46:19.934290260Z" level=error msg="ContainerStatus for \"497ea189c5797a7bce0603d876e94c9264a88e7c94461864790401e0c8975988\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"497ea189c5797a7bce0603d876e94c9264a88e7c94461864790401e0c8975988\": not found" Jun 25 18:46:19.934545 kubelet[2586]: E0625 18:46:19.934495 2586 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"497ea189c5797a7bce0603d876e94c9264a88e7c94461864790401e0c8975988\": not found" containerID="497ea189c5797a7bce0603d876e94c9264a88e7c94461864790401e0c8975988" Jun 25 18:46:19.934613 kubelet[2586]: I0625 18:46:19.934557 2586 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"497ea189c5797a7bce0603d876e94c9264a88e7c94461864790401e0c8975988"} err="failed to get container status \"497ea189c5797a7bce0603d876e94c9264a88e7c94461864790401e0c8975988\": rpc error: code = NotFound desc = an error occurred when try to find container \"497ea189c5797a7bce0603d876e94c9264a88e7c94461864790401e0c8975988\": not found" Jun 25 18:46:19.934613 kubelet[2586]: I0625 18:46:19.934580 2586 scope.go:117] "RemoveContainer" containerID="a65f0fb350cee854194f34234d48d95d3aafc504a6f0a256e447388228d443f5" Jun 25 18:46:19.934793 containerd[1450]: time="2024-06-25T18:46:19.934753035Z" level=error msg="ContainerStatus for \"a65f0fb350cee854194f34234d48d95d3aafc504a6f0a256e447388228d443f5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a65f0fb350cee854194f34234d48d95d3aafc504a6f0a256e447388228d443f5\": not found" Jun 25 18:46:19.934913 kubelet[2586]: E0625 18:46:19.934888 2586 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a65f0fb350cee854194f34234d48d95d3aafc504a6f0a256e447388228d443f5\": not found" 
containerID="a65f0fb350cee854194f34234d48d95d3aafc504a6f0a256e447388228d443f5" Jun 25 18:46:19.934946 kubelet[2586]: I0625 18:46:19.934924 2586 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a65f0fb350cee854194f34234d48d95d3aafc504a6f0a256e447388228d443f5"} err="failed to get container status \"a65f0fb350cee854194f34234d48d95d3aafc504a6f0a256e447388228d443f5\": rpc error: code = NotFound desc = an error occurred when try to find container \"a65f0fb350cee854194f34234d48d95d3aafc504a6f0a256e447388228d443f5\": not found" Jun 25 18:46:19.934946 kubelet[2586]: I0625 18:46:19.934934 2586 scope.go:117] "RemoveContainer" containerID="a9de5d29218988bb881bf5d9082d82015a45dd2924464439f8b75461139c8694" Jun 25 18:46:19.935193 containerd[1450]: time="2024-06-25T18:46:19.935142142Z" level=error msg="ContainerStatus for \"a9de5d29218988bb881bf5d9082d82015a45dd2924464439f8b75461139c8694\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a9de5d29218988bb881bf5d9082d82015a45dd2924464439f8b75461139c8694\": not found" Jun 25 18:46:19.935295 kubelet[2586]: E0625 18:46:19.935274 2586 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a9de5d29218988bb881bf5d9082d82015a45dd2924464439f8b75461139c8694\": not found" containerID="a9de5d29218988bb881bf5d9082d82015a45dd2924464439f8b75461139c8694" Jun 25 18:46:19.935323 kubelet[2586]: I0625 18:46:19.935308 2586 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a9de5d29218988bb881bf5d9082d82015a45dd2924464439f8b75461139c8694"} err="failed to get container status \"a9de5d29218988bb881bf5d9082d82015a45dd2924464439f8b75461139c8694\": rpc error: code = NotFound desc = an error occurred when try to find container \"a9de5d29218988bb881bf5d9082d82015a45dd2924464439f8b75461139c8694\": not found" Jun 25 18:46:19.935350 kubelet[2586]: I0625 18:46:19.935323 2586 scope.go:117] "RemoveContainer" containerID="dd8d4f06c77c4121f752901572f2a372a1f70b1a92fccda2c7215c8310cdf04f" Jun 25 18:46:19.935548 containerd[1450]: time="2024-06-25T18:46:19.935484249Z" level=error msg="ContainerStatus for \"dd8d4f06c77c4121f752901572f2a372a1f70b1a92fccda2c7215c8310cdf04f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dd8d4f06c77c4121f752901572f2a372a1f70b1a92fccda2c7215c8310cdf04f\": not found" Jun 25 18:46:19.935622 kubelet[2586]: E0625 18:46:19.935602 2586 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dd8d4f06c77c4121f752901572f2a372a1f70b1a92fccda2c7215c8310cdf04f\": not found" containerID="dd8d4f06c77c4121f752901572f2a372a1f70b1a92fccda2c7215c8310cdf04f" Jun 25 18:46:19.935652 kubelet[2586]: I0625 18:46:19.935626 2586 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dd8d4f06c77c4121f752901572f2a372a1f70b1a92fccda2c7215c8310cdf04f"} err="failed to get container status \"dd8d4f06c77c4121f752901572f2a372a1f70b1a92fccda2c7215c8310cdf04f\": rpc error: code = NotFound desc = an error occurred when try to find container \"dd8d4f06c77c4121f752901572f2a372a1f70b1a92fccda2c7215c8310cdf04f\": not found" Jun 25 18:46:19.935652 kubelet[2586]: I0625 18:46:19.935635 2586 scope.go:117] "RemoveContainer" 
containerID="f87821239e24df2fca636bb193340db999e9522705de648d56ab044d541ed586" Jun 25 18:46:19.935807 containerd[1450]: time="2024-06-25T18:46:19.935771894Z" level=error msg="ContainerStatus for \"f87821239e24df2fca636bb193340db999e9522705de648d56ab044d541ed586\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f87821239e24df2fca636bb193340db999e9522705de648d56ab044d541ed586\": not found" Jun 25 18:46:19.935951 kubelet[2586]: E0625 18:46:19.935924 2586 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f87821239e24df2fca636bb193340db999e9522705de648d56ab044d541ed586\": not found" containerID="f87821239e24df2fca636bb193340db999e9522705de648d56ab044d541ed586" Jun 25 18:46:19.935951 kubelet[2586]: I0625 18:46:19.935951 2586 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f87821239e24df2fca636bb193340db999e9522705de648d56ab044d541ed586"} err="failed to get container status \"f87821239e24df2fca636bb193340db999e9522705de648d56ab044d541ed586\": rpc error: code = NotFound desc = an error occurred when try to find container \"f87821239e24df2fca636bb193340db999e9522705de648d56ab044d541ed586\": not found" Jun 25 18:46:20.238528 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-397a28be68c1dd8085e46c2fd247fba7a1fe3a0241be75a5744a35a48e38d94a-rootfs.mount: Deactivated successfully. Jun 25 18:46:20.238658 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b072a4a3d734e637683c083c26c7da25eda1a2913f7b7bd50faae50508375b56-rootfs.mount: Deactivated successfully. Jun 25 18:46:20.238753 systemd[1]: var-lib-kubelet-pods-626e6a75\x2d7799\x2d48fc\x2d9926\x2dfcfa1f22c4de-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz5r5b.mount: Deactivated successfully. Jun 25 18:46:20.238880 systemd[1]: var-lib-kubelet-pods-863cd744\x2d5efa\x2d4cd0\x2db61f\x2d1b931f4a7b18-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsl958.mount: Deactivated successfully. Jun 25 18:46:20.238986 systemd[1]: var-lib-kubelet-pods-863cd744\x2d5efa\x2d4cd0\x2db61f\x2d1b931f4a7b18-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jun 25 18:46:20.239103 systemd[1]: var-lib-kubelet-pods-863cd744\x2d5efa\x2d4cd0\x2db61f\x2d1b931f4a7b18-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jun 25 18:46:20.610866 kubelet[2586]: I0625 18:46:20.610741 2586 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="626e6a75-7799-48fc-9926-fcfa1f22c4de" path="/var/lib/kubelet/pods/626e6a75-7799-48fc-9926-fcfa1f22c4de/volumes" Jun 25 18:46:20.611378 kubelet[2586]: I0625 18:46:20.611359 2586 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="863cd744-5efa-4cd0-b61f-1b931f4a7b18" path="/var/lib/kubelet/pods/863cd744-5efa-4cd0-b61f-1b931f4a7b18/volumes" Jun 25 18:46:21.192227 sshd[4261]: pam_unix(sshd:session): session closed for user core Jun 25 18:46:21.204872 systemd[1]: sshd@27-10.0.0.136:22-10.0.0.1:47476.service: Deactivated successfully. Jun 25 18:46:21.207658 systemd[1]: session-28.scope: Deactivated successfully. Jun 25 18:46:21.209627 systemd-logind[1437]: Session 28 logged out. Waiting for processes to exit. Jun 25 18:46:21.215455 systemd[1]: Started sshd@28-10.0.0.136:22-10.0.0.1:47584.service - OpenSSH per-connection server daemon (10.0.0.1:47584). 
Jun 25 18:46:21.216471 systemd-logind[1437]: Removed session 28. Jun 25 18:46:21.242224 sshd[4425]: Accepted publickey for core from 10.0.0.1 port 47584 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:46:21.243839 sshd[4425]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:46:21.248382 systemd-logind[1437]: New session 29 of user core. Jun 25 18:46:21.258159 systemd[1]: Started session-29.scope - Session 29 of User core. Jun 25 18:46:21.764223 sshd[4425]: pam_unix(sshd:session): session closed for user core Jun 25 18:46:21.778541 systemd[1]: sshd@28-10.0.0.136:22-10.0.0.1:47584.service: Deactivated successfully. Jun 25 18:46:21.783867 systemd[1]: session-29.scope: Deactivated successfully. Jun 25 18:46:21.786464 kubelet[2586]: I0625 18:46:21.786422 2586 topology_manager.go:215] "Topology Admit Handler" podUID="e10b7b2d-8b2e-4f86-87e6-9cf1fcb194b9" podNamespace="kube-system" podName="cilium-92p8t" Jun 25 18:46:21.787080 systemd-logind[1437]: Session 29 logged out. Waiting for processes to exit. Jun 25 18:46:21.788455 kubelet[2586]: E0625 18:46:21.788403 2586 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="863cd744-5efa-4cd0-b61f-1b931f4a7b18" containerName="mount-bpf-fs" Jun 25 18:46:21.788529 kubelet[2586]: E0625 18:46:21.788483 2586 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="863cd744-5efa-4cd0-b61f-1b931f4a7b18" containerName="clean-cilium-state" Jun 25 18:46:21.788529 kubelet[2586]: E0625 18:46:21.788497 2586 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="863cd744-5efa-4cd0-b61f-1b931f4a7b18" containerName="mount-cgroup" Jun 25 18:46:21.788529 kubelet[2586]: E0625 18:46:21.788505 2586 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="863cd744-5efa-4cd0-b61f-1b931f4a7b18" containerName="apply-sysctl-overwrites" Jun 25 18:46:21.788529 kubelet[2586]: E0625 18:46:21.788513 2586 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="626e6a75-7799-48fc-9926-fcfa1f22c4de" containerName="cilium-operator" Jun 25 18:46:21.788529 kubelet[2586]: E0625 18:46:21.788520 2586 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="863cd744-5efa-4cd0-b61f-1b931f4a7b18" containerName="cilium-agent" Jun 25 18:46:21.788707 kubelet[2586]: I0625 18:46:21.788584 2586 memory_manager.go:354] "RemoveStaleState removing state" podUID="863cd744-5efa-4cd0-b61f-1b931f4a7b18" containerName="cilium-agent" Jun 25 18:46:21.788707 kubelet[2586]: I0625 18:46:21.788595 2586 memory_manager.go:354] "RemoveStaleState removing state" podUID="626e6a75-7799-48fc-9926-fcfa1f22c4de" containerName="cilium-operator" Jun 25 18:46:21.795390 kubelet[2586]: W0625 18:46:21.795343 2586 reflector.go:539] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jun 25 18:46:21.795390 kubelet[2586]: E0625 18:46:21.795395 2586 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jun 25 18:46:21.795607 kubelet[2586]: W0625 18:46:21.795433 2586 reflector.go:539] 
object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jun 25 18:46:21.795607 kubelet[2586]: E0625 18:46:21.795442 2586 reflector.go:147] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jun 25 18:46:21.803543 systemd[1]: Started sshd@29-10.0.0.136:22-10.0.0.1:47588.service - OpenSSH per-connection server daemon (10.0.0.1:47588). Jun 25 18:46:21.808300 systemd-logind[1437]: Removed session 29. Jun 25 18:46:21.814136 systemd[1]: Created slice kubepods-burstable-pode10b7b2d_8b2e_4f86_87e6_9cf1fcb194b9.slice - libcontainer container kubepods-burstable-pode10b7b2d_8b2e_4f86_87e6_9cf1fcb194b9.slice. Jun 25 18:46:21.854609 sshd[4440]: Accepted publickey for core from 10.0.0.1 port 47588 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:46:21.857397 sshd[4440]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:46:21.862538 systemd-logind[1437]: New session 30 of user core. Jun 25 18:46:21.869210 systemd[1]: Started session-30.scope - Session 30 of User core. Jun 25 18:46:21.926431 sshd[4440]: pam_unix(sshd:session): session closed for user core Jun 25 18:46:21.945458 systemd[1]: sshd@29-10.0.0.136:22-10.0.0.1:47588.service: Deactivated successfully. Jun 25 18:46:21.947603 systemd[1]: session-30.scope: Deactivated successfully. Jun 25 18:46:21.949361 systemd-logind[1437]: Session 30 logged out. Waiting for processes to exit. Jun 25 18:46:21.954595 systemd[1]: Started sshd@30-10.0.0.136:22-10.0.0.1:47598.service - OpenSSH per-connection server daemon (10.0.0.1:47598). Jun 25 18:46:21.955940 systemd-logind[1437]: Removed session 30. 
Jun 25 18:46:21.965273 kubelet[2586]: I0625 18:46:21.965228 2586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e10b7b2d-8b2e-4f86-87e6-9cf1fcb194b9-etc-cni-netd\") pod \"cilium-92p8t\" (UID: \"e10b7b2d-8b2e-4f86-87e6-9cf1fcb194b9\") " pod="kube-system/cilium-92p8t" Jun 25 18:46:21.965419 kubelet[2586]: I0625 18:46:21.965288 2586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e10b7b2d-8b2e-4f86-87e6-9cf1fcb194b9-xtables-lock\") pod \"cilium-92p8t\" (UID: \"e10b7b2d-8b2e-4f86-87e6-9cf1fcb194b9\") " pod="kube-system/cilium-92p8t" Jun 25 18:46:21.965419 kubelet[2586]: I0625 18:46:21.965317 2586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e10b7b2d-8b2e-4f86-87e6-9cf1fcb194b9-hubble-tls\") pod \"cilium-92p8t\" (UID: \"e10b7b2d-8b2e-4f86-87e6-9cf1fcb194b9\") " pod="kube-system/cilium-92p8t" Jun 25 18:46:21.965419 kubelet[2586]: I0625 18:46:21.965340 2586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e10b7b2d-8b2e-4f86-87e6-9cf1fcb194b9-cilium-cgroup\") pod \"cilium-92p8t\" (UID: \"e10b7b2d-8b2e-4f86-87e6-9cf1fcb194b9\") " pod="kube-system/cilium-92p8t" Jun 25 18:46:21.965419 kubelet[2586]: I0625 18:46:21.965362 2586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e10b7b2d-8b2e-4f86-87e6-9cf1fcb194b9-cilium-config-path\") pod \"cilium-92p8t\" (UID: \"e10b7b2d-8b2e-4f86-87e6-9cf1fcb194b9\") " pod="kube-system/cilium-92p8t" Jun 25 18:46:21.965419 kubelet[2586]: I0625 18:46:21.965395 2586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e10b7b2d-8b2e-4f86-87e6-9cf1fcb194b9-cilium-run\") pod \"cilium-92p8t\" (UID: \"e10b7b2d-8b2e-4f86-87e6-9cf1fcb194b9\") " pod="kube-system/cilium-92p8t" Jun 25 18:46:21.965419 kubelet[2586]: I0625 18:46:21.965420 2586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e10b7b2d-8b2e-4f86-87e6-9cf1fcb194b9-clustermesh-secrets\") pod \"cilium-92p8t\" (UID: \"e10b7b2d-8b2e-4f86-87e6-9cf1fcb194b9\") " pod="kube-system/cilium-92p8t" Jun 25 18:46:21.967750 kubelet[2586]: I0625 18:46:21.965447 2586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e10b7b2d-8b2e-4f86-87e6-9cf1fcb194b9-host-proc-sys-net\") pod \"cilium-92p8t\" (UID: \"e10b7b2d-8b2e-4f86-87e6-9cf1fcb194b9\") " pod="kube-system/cilium-92p8t" Jun 25 18:46:21.967750 kubelet[2586]: I0625 18:46:21.965466 2586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e10b7b2d-8b2e-4f86-87e6-9cf1fcb194b9-hostproc\") pod \"cilium-92p8t\" (UID: \"e10b7b2d-8b2e-4f86-87e6-9cf1fcb194b9\") " pod="kube-system/cilium-92p8t" Jun 25 18:46:21.967750 kubelet[2586]: I0625 18:46:21.965488 2586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/e10b7b2d-8b2e-4f86-87e6-9cf1fcb194b9-cni-path\") pod \"cilium-92p8t\" (UID: \"e10b7b2d-8b2e-4f86-87e6-9cf1fcb194b9\") " pod="kube-system/cilium-92p8t" Jun 25 18:46:21.967750 kubelet[2586]: I0625 18:46:21.965563 2586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5tlc\" (UniqueName: \"kubernetes.io/projected/e10b7b2d-8b2e-4f86-87e6-9cf1fcb194b9-kube-api-access-n5tlc\") pod \"cilium-92p8t\" (UID: \"e10b7b2d-8b2e-4f86-87e6-9cf1fcb194b9\") " pod="kube-system/cilium-92p8t" Jun 25 18:46:21.967750 kubelet[2586]: I0625 18:46:21.965593 2586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e10b7b2d-8b2e-4f86-87e6-9cf1fcb194b9-lib-modules\") pod \"cilium-92p8t\" (UID: \"e10b7b2d-8b2e-4f86-87e6-9cf1fcb194b9\") " pod="kube-system/cilium-92p8t" Jun 25 18:46:21.967750 kubelet[2586]: I0625 18:46:21.965615 2586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e10b7b2d-8b2e-4f86-87e6-9cf1fcb194b9-cilium-ipsec-secrets\") pod \"cilium-92p8t\" (UID: \"e10b7b2d-8b2e-4f86-87e6-9cf1fcb194b9\") " pod="kube-system/cilium-92p8t" Jun 25 18:46:21.967973 kubelet[2586]: I0625 18:46:21.965666 2586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e10b7b2d-8b2e-4f86-87e6-9cf1fcb194b9-host-proc-sys-kernel\") pod \"cilium-92p8t\" (UID: \"e10b7b2d-8b2e-4f86-87e6-9cf1fcb194b9\") " pod="kube-system/cilium-92p8t" Jun 25 18:46:21.967973 kubelet[2586]: I0625 18:46:21.966202 2586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e10b7b2d-8b2e-4f86-87e6-9cf1fcb194b9-bpf-maps\") pod \"cilium-92p8t\" (UID: \"e10b7b2d-8b2e-4f86-87e6-9cf1fcb194b9\") " pod="kube-system/cilium-92p8t" Jun 25 18:46:21.988087 sshd[4448]: Accepted publickey for core from 10.0.0.1 port 47598 ssh2: RSA SHA256:VbxkEvWvAYXm+csql9vHz/Q507SQa+IyrfABNJIeiWA Jun 25 18:46:21.989960 sshd[4448]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:46:21.994150 systemd-logind[1437]: New session 31 of user core. Jun 25 18:46:22.006305 systemd[1]: Started session-31.scope - Session 31 of User core. Jun 25 18:46:22.259077 update_engine[1438]: I0625 18:46:22.259008 1438 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jun 25 18:46:22.259077 update_engine[1438]: I0625 18:46:22.259073 1438 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jun 25 18:46:22.259477 update_engine[1438]: I0625 18:46:22.259276 1438 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jun 25 18:46:22.259820 update_engine[1438]: I0625 18:46:22.259799 1438 omaha_request_params.cc:62] Current group set to alpha Jun 25 18:46:22.260238 update_engine[1438]: I0625 18:46:22.260221 1438 update_attempter.cc:499] Already updated boot flags. Skipping. Jun 25 18:46:22.260238 update_engine[1438]: I0625 18:46:22.260231 1438 update_attempter.cc:643] Scheduling an action processor start. 
Jun 25 18:46:22.260294 update_engine[1438]: I0625 18:46:22.260246 1438 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jun 25 18:46:22.260294 update_engine[1438]: I0625 18:46:22.260277 1438 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jun 25 18:46:22.260392 update_engine[1438]: I0625 18:46:22.260349 1438 omaha_request_action.cc:271] Posting an Omaha request to disabled Jun 25 18:46:22.260392 update_engine[1438]: I0625 18:46:22.260362 1438 omaha_request_action.cc:272] Request: Jun 25 18:46:22.260392 update_engine[1438]: Jun 25 18:46:22.260392 update_engine[1438]: Jun 25 18:46:22.260392 update_engine[1438]: Jun 25 18:46:22.260392 update_engine[1438]: Jun 25 18:46:22.260392 update_engine[1438]: Jun 25 18:46:22.260392 update_engine[1438]: Jun 25 18:46:22.260392 update_engine[1438]: Jun 25 18:46:22.260392 update_engine[1438]: Jun 25 18:46:22.260392 update_engine[1438]: I0625 18:46:22.260368 1438 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 25 18:46:22.261928 locksmithd[1468]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jun 25 18:46:22.264144 update_engine[1438]: I0625 18:46:22.264115 1438 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 25 18:46:22.264411 update_engine[1438]: I0625 18:46:22.264380 1438 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jun 25 18:46:22.272145 update_engine[1438]: E0625 18:46:22.272106 1438 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 25 18:46:22.272196 update_engine[1438]: I0625 18:46:22.272171 1438 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jun 25 18:46:23.071697 kubelet[2586]: E0625 18:46:23.071641 2586 secret.go:194] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Jun 25 18:46:23.072227 kubelet[2586]: E0625 18:46:23.071766 2586 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e10b7b2d-8b2e-4f86-87e6-9cf1fcb194b9-cilium-ipsec-secrets podName:e10b7b2d-8b2e-4f86-87e6-9cf1fcb194b9 nodeName:}" failed. No retries permitted until 2024-06-25 18:46:23.571732837 +0000 UTC m=+95.065461299 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/e10b7b2d-8b2e-4f86-87e6-9cf1fcb194b9-cilium-ipsec-secrets") pod "cilium-92p8t" (UID: "e10b7b2d-8b2e-4f86-87e6-9cf1fcb194b9") : failed to sync secret cache: timed out waiting for the condition Jun 25 18:46:23.617930 kubelet[2586]: E0625 18:46:23.617876 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:46:23.618524 containerd[1450]: time="2024-06-25T18:46:23.618474159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-92p8t,Uid:e10b7b2d-8b2e-4f86-87e6-9cf1fcb194b9,Namespace:kube-system,Attempt:0,}" Jun 25 18:46:23.668250 kubelet[2586]: E0625 18:46:23.668219 2586 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jun 25 18:46:23.947872 containerd[1450]: time="2024-06-25T18:46:23.947722324Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:46:23.947872 containerd[1450]: time="2024-06-25T18:46:23.947803828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:46:23.947872 containerd[1450]: time="2024-06-25T18:46:23.947825359Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:46:23.947872 containerd[1450]: time="2024-06-25T18:46:23.947839866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:46:23.973162 systemd[1]: Started cri-containerd-c63f68a488296d2ccfb4d94b6c4f85c5ede9100a7cd8ffdec0cacab05ef2a717.scope - libcontainer container c63f68a488296d2ccfb4d94b6c4f85c5ede9100a7cd8ffdec0cacab05ef2a717. Jun 25 18:46:23.997945 containerd[1450]: time="2024-06-25T18:46:23.997897538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-92p8t,Uid:e10b7b2d-8b2e-4f86-87e6-9cf1fcb194b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"c63f68a488296d2ccfb4d94b6c4f85c5ede9100a7cd8ffdec0cacab05ef2a717\"" Jun 25 18:46:23.998535 kubelet[2586]: E0625 18:46:23.998498 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:46:24.000162 containerd[1450]: time="2024-06-25T18:46:24.000066961Z" level=info msg="CreateContainer within sandbox \"c63f68a488296d2ccfb4d94b6c4f85c5ede9100a7cd8ffdec0cacab05ef2a717\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 25 18:46:24.256114 containerd[1450]: time="2024-06-25T18:46:24.255956299Z" level=info msg="CreateContainer within sandbox \"c63f68a488296d2ccfb4d94b6c4f85c5ede9100a7cd8ffdec0cacab05ef2a717\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9fae899d67dfde1a854d00ab160118e7cb1abc2f428967369f01d2005f71f73b\"" Jun 25 18:46:24.256613 containerd[1450]: time="2024-06-25T18:46:24.256589627Z" level=info msg="StartContainer for \"9fae899d67dfde1a854d00ab160118e7cb1abc2f428967369f01d2005f71f73b\"" Jun 25 18:46:24.288146 systemd[1]: Started cri-containerd-9fae899d67dfde1a854d00ab160118e7cb1abc2f428967369f01d2005f71f73b.scope - libcontainer container 9fae899d67dfde1a854d00ab160118e7cb1abc2f428967369f01d2005f71f73b. Jun 25 18:46:24.338554 systemd[1]: cri-containerd-9fae899d67dfde1a854d00ab160118e7cb1abc2f428967369f01d2005f71f73b.scope: Deactivated successfully. Jun 25 18:46:24.357506 containerd[1450]: time="2024-06-25T18:46:24.357427033Z" level=info msg="StartContainer for \"9fae899d67dfde1a854d00ab160118e7cb1abc2f428967369f01d2005f71f73b\" returns successfully" Jun 25 18:46:24.441602 containerd[1450]: time="2024-06-25T18:46:24.441519562Z" level=info msg="shim disconnected" id=9fae899d67dfde1a854d00ab160118e7cb1abc2f428967369f01d2005f71f73b namespace=k8s.io Jun 25 18:46:24.441602 containerd[1450]: time="2024-06-25T18:46:24.441598152Z" level=warning msg="cleaning up after shim disconnected" id=9fae899d67dfde1a854d00ab160118e7cb1abc2f428967369f01d2005f71f73b namespace=k8s.io Jun 25 18:46:24.441602 containerd[1450]: time="2024-06-25T18:46:24.441610435Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:46:24.583128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount155939432.mount: Deactivated successfully. 
Jun 25 18:46:24.894278 kubelet[2586]: E0625 18:46:24.894164 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:46:24.896601 containerd[1450]: time="2024-06-25T18:46:24.896501970Z" level=info msg="CreateContainer within sandbox \"c63f68a488296d2ccfb4d94b6c4f85c5ede9100a7cd8ffdec0cacab05ef2a717\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 25 18:46:24.910293 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3506392380.mount: Deactivated successfully. Jun 25 18:46:24.916664 containerd[1450]: time="2024-06-25T18:46:24.916599438Z" level=info msg="CreateContainer within sandbox \"c63f68a488296d2ccfb4d94b6c4f85c5ede9100a7cd8ffdec0cacab05ef2a717\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"72a1d7c5b5e9ad24d56dbcf2af5fa70841ede7f88b0d0ab49b778ef9e5ceef07\"" Jun 25 18:46:24.917318 containerd[1450]: time="2024-06-25T18:46:24.917248194Z" level=info msg="StartContainer for \"72a1d7c5b5e9ad24d56dbcf2af5fa70841ede7f88b0d0ab49b778ef9e5ceef07\"" Jun 25 18:46:24.952228 systemd[1]: Started cri-containerd-72a1d7c5b5e9ad24d56dbcf2af5fa70841ede7f88b0d0ab49b778ef9e5ceef07.scope - libcontainer container 72a1d7c5b5e9ad24d56dbcf2af5fa70841ede7f88b0d0ab49b778ef9e5ceef07. Jun 25 18:46:24.980536 containerd[1450]: time="2024-06-25T18:46:24.980477860Z" level=info msg="StartContainer for \"72a1d7c5b5e9ad24d56dbcf2af5fa70841ede7f88b0d0ab49b778ef9e5ceef07\" returns successfully" Jun 25 18:46:24.987123 systemd[1]: cri-containerd-72a1d7c5b5e9ad24d56dbcf2af5fa70841ede7f88b0d0ab49b778ef9e5ceef07.scope: Deactivated successfully. Jun 25 18:46:25.011506 containerd[1450]: time="2024-06-25T18:46:25.011435661Z" level=info msg="shim disconnected" id=72a1d7c5b5e9ad24d56dbcf2af5fa70841ede7f88b0d0ab49b778ef9e5ceef07 namespace=k8s.io Jun 25 18:46:25.011506 containerd[1450]: time="2024-06-25T18:46:25.011500513Z" level=warning msg="cleaning up after shim disconnected" id=72a1d7c5b5e9ad24d56dbcf2af5fa70841ede7f88b0d0ab49b778ef9e5ceef07 namespace=k8s.io Jun 25 18:46:25.011506 containerd[1450]: time="2024-06-25T18:46:25.011510653Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:46:25.582240 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-72a1d7c5b5e9ad24d56dbcf2af5fa70841ede7f88b0d0ab49b778ef9e5ceef07-rootfs.mount: Deactivated successfully. 
Jun 25 18:46:25.898303 kubelet[2586]: E0625 18:46:25.898166 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:46:25.900867 containerd[1450]: time="2024-06-25T18:46:25.900823053Z" level=info msg="CreateContainer within sandbox \"c63f68a488296d2ccfb4d94b6c4f85c5ede9100a7cd8ffdec0cacab05ef2a717\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 25 18:46:25.939114 containerd[1450]: time="2024-06-25T18:46:25.938986653Z" level=info msg="CreateContainer within sandbox \"c63f68a488296d2ccfb4d94b6c4f85c5ede9100a7cd8ffdec0cacab05ef2a717\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3b94586cc2434c9f0cb32c270e375aed25aed72fc75d181219e324291a17dc76\"" Jun 25 18:46:25.939795 containerd[1450]: time="2024-06-25T18:46:25.939747091Z" level=info msg="StartContainer for \"3b94586cc2434c9f0cb32c270e375aed25aed72fc75d181219e324291a17dc76\"" Jun 25 18:46:25.976238 systemd[1]: Started cri-containerd-3b94586cc2434c9f0cb32c270e375aed25aed72fc75d181219e324291a17dc76.scope - libcontainer container 3b94586cc2434c9f0cb32c270e375aed25aed72fc75d181219e324291a17dc76. Jun 25 18:46:26.010714 containerd[1450]: time="2024-06-25T18:46:26.010646736Z" level=info msg="StartContainer for \"3b94586cc2434c9f0cb32c270e375aed25aed72fc75d181219e324291a17dc76\" returns successfully" Jun 25 18:46:26.011667 systemd[1]: cri-containerd-3b94586cc2434c9f0cb32c270e375aed25aed72fc75d181219e324291a17dc76.scope: Deactivated successfully. Jun 25 18:46:26.040215 containerd[1450]: time="2024-06-25T18:46:26.040156725Z" level=info msg="shim disconnected" id=3b94586cc2434c9f0cb32c270e375aed25aed72fc75d181219e324291a17dc76 namespace=k8s.io Jun 25 18:46:26.040215 containerd[1450]: time="2024-06-25T18:46:26.040208673Z" level=warning msg="cleaning up after shim disconnected" id=3b94586cc2434c9f0cb32c270e375aed25aed72fc75d181219e324291a17dc76 namespace=k8s.io Jun 25 18:46:26.040215 containerd[1450]: time="2024-06-25T18:46:26.040219423Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:46:26.582355 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b94586cc2434c9f0cb32c270e375aed25aed72fc75d181219e324291a17dc76-rootfs.mount: Deactivated successfully. 
Jun 25 18:46:26.900711 kubelet[2586]: E0625 18:46:26.900590 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:46:26.902419 containerd[1450]: time="2024-06-25T18:46:26.902366038Z" level=info msg="CreateContainer within sandbox \"c63f68a488296d2ccfb4d94b6c4f85c5ede9100a7cd8ffdec0cacab05ef2a717\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 25 18:46:26.917343 containerd[1450]: time="2024-06-25T18:46:26.917277979Z" level=info msg="CreateContainer within sandbox \"c63f68a488296d2ccfb4d94b6c4f85c5ede9100a7cd8ffdec0cacab05ef2a717\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"23964cd3ca0b0367c3efacef00502d5c198c68f9a00dafc277a8fd599ff6715b\"" Jun 25 18:46:26.917833 containerd[1450]: time="2024-06-25T18:46:26.917808161Z" level=info msg="StartContainer for \"23964cd3ca0b0367c3efacef00502d5c198c68f9a00dafc277a8fd599ff6715b\"" Jun 25 18:46:26.953582 systemd[1]: Started cri-containerd-23964cd3ca0b0367c3efacef00502d5c198c68f9a00dafc277a8fd599ff6715b.scope - libcontainer container 23964cd3ca0b0367c3efacef00502d5c198c68f9a00dafc277a8fd599ff6715b. Jun 25 18:46:26.980253 systemd[1]: cri-containerd-23964cd3ca0b0367c3efacef00502d5c198c68f9a00dafc277a8fd599ff6715b.scope: Deactivated successfully. Jun 25 18:46:26.986865 containerd[1450]: time="2024-06-25T18:46:26.986824196Z" level=info msg="StartContainer for \"23964cd3ca0b0367c3efacef00502d5c198c68f9a00dafc277a8fd599ff6715b\" returns successfully" Jun 25 18:46:27.122358 containerd[1450]: time="2024-06-25T18:46:27.122289279Z" level=info msg="shim disconnected" id=23964cd3ca0b0367c3efacef00502d5c198c68f9a00dafc277a8fd599ff6715b namespace=k8s.io Jun 25 18:46:27.122358 containerd[1450]: time="2024-06-25T18:46:27.122353490Z" level=warning msg="cleaning up after shim disconnected" id=23964cd3ca0b0367c3efacef00502d5c198c68f9a00dafc277a8fd599ff6715b namespace=k8s.io Jun 25 18:46:27.122358 containerd[1450]: time="2024-06-25T18:46:27.122362196Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:46:27.582193 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-23964cd3ca0b0367c3efacef00502d5c198c68f9a00dafc277a8fd599ff6715b-rootfs.mount: Deactivated successfully. Jun 25 18:46:27.904227 kubelet[2586]: E0625 18:46:27.904102 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:46:27.906769 containerd[1450]: time="2024-06-25T18:46:27.906724820Z" level=info msg="CreateContainer within sandbox \"c63f68a488296d2ccfb4d94b6c4f85c5ede9100a7cd8ffdec0cacab05ef2a717\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 25 18:46:28.012174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1913966934.mount: Deactivated successfully. 
Jun 25 18:46:28.155203 containerd[1450]: time="2024-06-25T18:46:28.155001743Z" level=info msg="CreateContainer within sandbox \"c63f68a488296d2ccfb4d94b6c4f85c5ede9100a7cd8ffdec0cacab05ef2a717\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6fca44afdc92ce4af8b45d2df5155e6e948327d5f21a6a986084e505d284e251\"" Jun 25 18:46:28.155977 containerd[1450]: time="2024-06-25T18:46:28.155941409Z" level=info msg="StartContainer for \"6fca44afdc92ce4af8b45d2df5155e6e948327d5f21a6a986084e505d284e251\"" Jun 25 18:46:28.188162 systemd[1]: Started cri-containerd-6fca44afdc92ce4af8b45d2df5155e6e948327d5f21a6a986084e505d284e251.scope - libcontainer container 6fca44afdc92ce4af8b45d2df5155e6e948327d5f21a6a986084e505d284e251. Jun 25 18:46:28.265984 containerd[1450]: time="2024-06-25T18:46:28.265927726Z" level=info msg="StartContainer for \"6fca44afdc92ce4af8b45d2df5155e6e948327d5f21a6a986084e505d284e251\" returns successfully" Jun 25 18:46:28.582599 systemd[1]: run-containerd-runc-k8s.io-6fca44afdc92ce4af8b45d2df5155e6e948327d5f21a6a986084e505d284e251-runc.W5PuGX.mount: Deactivated successfully. Jun 25 18:46:28.652098 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jun 25 18:46:28.909999 kubelet[2586]: E0625 18:46:28.909871 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:46:29.911849 kubelet[2586]: E0625 18:46:29.911798 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:46:31.608085 kubelet[2586]: E0625 18:46:31.608048 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:46:31.755634 systemd-networkd[1389]: lxc_health: Link UP Jun 25 18:46:31.763351 systemd-networkd[1389]: lxc_health: Gained carrier Jun 25 18:46:32.262130 update_engine[1438]: I0625 18:46:32.262065 1438 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 25 18:46:32.262609 update_engine[1438]: I0625 18:46:32.262439 1438 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 25 18:46:32.262724 update_engine[1438]: I0625 18:46:32.262678 1438 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jun 25 18:46:32.269394 update_engine[1438]: E0625 18:46:32.269363 1438 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 25 18:46:32.269460 update_engine[1438]: I0625 18:46:32.269405 1438 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jun 25 18:46:33.619782 kubelet[2586]: E0625 18:46:33.619752 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:46:33.631605 kubelet[2586]: I0625 18:46:33.631369 2586 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-92p8t" podStartSLOduration=12.631315056 podStartE2EDuration="12.631315056s" podCreationTimestamp="2024-06-25 18:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:46:28.924518806 +0000 UTC m=+100.418247269" watchObservedRunningTime="2024-06-25 18:46:33.631315056 +0000 UTC m=+105.125043518" Jun 25 18:46:33.689293 systemd-networkd[1389]: lxc_health: Gained IPv6LL Jun 25 18:46:33.918796 kubelet[2586]: E0625 18:46:33.918650 2586 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:46:38.854938 systemd[1]: run-containerd-runc-k8s.io-6fca44afdc92ce4af8b45d2df5155e6e948327d5f21a6a986084e505d284e251-runc.e0FJNE.mount: Deactivated successfully. Jun 25 18:46:38.931740 sshd[4448]: pam_unix(sshd:session): session closed for user core Jun 25 18:46:38.936912 systemd[1]: sshd@30-10.0.0.136:22-10.0.0.1:47598.service: Deactivated successfully. Jun 25 18:46:38.939725 systemd[1]: session-31.scope: Deactivated successfully. Jun 25 18:46:38.940674 systemd-logind[1437]: Session 31 logged out. Waiting for processes to exit. Jun 25 18:46:38.941583 systemd-logind[1437]: Removed session 31.