Jul 2 00:12:56.918564 kernel: Linux version 6.6.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Mon Jul 1 22:47:51 -00 2024
Jul 2 00:12:56.918590 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b
Jul 2 00:12:56.918604 kernel: BIOS-provided physical RAM map:
Jul 2 00:12:56.918613 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 2 00:12:56.918621 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 2 00:12:56.918630 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 2 00:12:56.918640 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdcfff] usable
Jul 2 00:12:56.918648 kernel: BIOS-e820: [mem 0x000000009cfdd000-0x000000009cffffff] reserved
Jul 2 00:12:56.918657 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 2 00:12:56.918669 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 2 00:12:56.918677 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 2 00:12:56.918686 kernel: NX (Execute Disable) protection: active
Jul 2 00:12:56.918694 kernel: APIC: Static calls initialized
Jul 2 00:12:56.918703 kernel: SMBIOS 2.8 present.
Jul 2 00:12:56.918714 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jul 2 00:12:56.918726 kernel: Hypervisor detected: KVM
Jul 2 00:12:56.918736 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 2 00:12:56.918745 kernel: kvm-clock: using sched offset of 2255243938 cycles
Jul 2 00:12:56.918755 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 2 00:12:56.918765 kernel: tsc: Detected 2794.746 MHz processor
Jul 2 00:12:56.918775 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 2 00:12:56.918785 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 2 00:12:56.918794 kernel: last_pfn = 0x9cfdd max_arch_pfn = 0x400000000
Jul 2 00:12:56.918816 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 2 00:12:56.918828 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 2 00:12:56.918838 kernel: Using GB pages for direct mapping
Jul 2 00:12:56.918848 kernel: ACPI: Early table checksum verification disabled
Jul 2 00:12:56.918858 kernel: ACPI: RSDP 0x00000000000F59C0 000014 (v00 BOCHS )
Jul 2 00:12:56.918867 kernel: ACPI: RSDT 0x000000009CFE1BDD 000034 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:12:56.918877 kernel: ACPI: FACP 0x000000009CFE1A79 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:12:56.918887 kernel: ACPI: DSDT 0x000000009CFE0040 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:12:56.918896 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jul 2 00:12:56.918906 kernel: ACPI: APIC 0x000000009CFE1AED 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:12:56.918918 kernel: ACPI: HPET 0x000000009CFE1B7D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:12:56.918928 kernel: ACPI: WAET 0x000000009CFE1BB5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:12:56.918937 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe1a79-0x9cfe1aec]
Jul 2 00:12:56.918947 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe1a78]
Jul 2 00:12:56.918956 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jul 2 00:12:56.918966 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe1aed-0x9cfe1b7c]
Jul 2 00:12:56.918975 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe1b7d-0x9cfe1bb4]
Jul 2 00:12:56.918989 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe1bb5-0x9cfe1bdc]
Jul 2 00:12:56.919002 kernel: No NUMA configuration found
Jul 2 00:12:56.919012 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdcfff]
Jul 2 00:12:56.919022 kernel: NODE_DATA(0) allocated [mem 0x9cfd7000-0x9cfdcfff]
Jul 2 00:12:56.919032 kernel: Zone ranges:
Jul 2 00:12:56.919042 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 2 00:12:56.919053 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdcfff]
Jul 2 00:12:56.919066 kernel: Normal empty
Jul 2 00:12:56.919078 kernel: Movable zone start for each node
Jul 2 00:12:56.919089 kernel: Early memory node ranges
Jul 2 00:12:56.919101 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 2 00:12:56.919111 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdcfff]
Jul 2 00:12:56.919121 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdcfff]
Jul 2 00:12:56.919131 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 2 00:12:56.919141 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 2 00:12:56.919151 kernel: On node 0, zone DMA32: 12323 pages in unavailable ranges
Jul 2 00:12:56.919165 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 2 00:12:56.919175 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 2 00:12:56.919185 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 2 00:12:56.919195 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 2 00:12:56.919205 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 2 00:12:56.919215 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 2 00:12:56.919244 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 2 00:12:56.919255 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 2 00:12:56.919264 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 2 00:12:56.919286 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 2 00:12:56.919300 kernel: TSC deadline timer available
Jul 2 00:12:56.919309 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jul 2 00:12:56.919319 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 2 00:12:56.919328 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 2 00:12:56.919337 kernel: kvm-guest: setup PV sched yield
Jul 2 00:12:56.919347 kernel: [mem 0x9d000000-0xfeffbfff] available for PCI devices
Jul 2 00:12:56.919357 kernel: Booting paravirtualized kernel on KVM
Jul 2 00:12:56.919367 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 2 00:12:56.919377 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 2 00:12:56.919390 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u524288
Jul 2 00:12:56.919401 kernel: pcpu-alloc: s196904 r8192 d32472 u524288 alloc=1*2097152
Jul 2 00:12:56.919410 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 2 00:12:56.919420 kernel: kvm-guest: PV spinlocks enabled
Jul 2 00:12:56.919430 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 2 00:12:56.919442 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b
Jul 2 00:12:56.919453 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 00:12:56.919463 kernel: random: crng init done
Jul 2 00:12:56.919475 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 2 00:12:56.919486 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 2 00:12:56.919496 kernel: Fallback order for Node 0: 0
Jul 2 00:12:56.919506 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632733
Jul 2 00:12:56.919516 kernel: Policy zone: DMA32
Jul 2 00:12:56.919526 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 00:12:56.919537 kernel: Memory: 2428452K/2571756K available (12288K kernel code, 2303K rwdata, 22640K rodata, 49328K init, 2016K bss, 143044K reserved, 0K cma-reserved)
Jul 2 00:12:56.919547 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 2 00:12:56.919557 kernel: ftrace: allocating 37658 entries in 148 pages
Jul 2 00:12:56.919570 kernel: ftrace: allocated 148 pages with 3 groups
Jul 2 00:12:56.919580 kernel: Dynamic Preempt: voluntary
Jul 2 00:12:56.919590 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 2 00:12:56.919601 kernel: rcu: RCU event tracing is enabled.
Jul 2 00:12:56.919611 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 2 00:12:56.919621 kernel: Trampoline variant of Tasks RCU enabled.
Jul 2 00:12:56.919631 kernel: Rude variant of Tasks RCU enabled.
Jul 2 00:12:56.919641 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 00:12:56.919651 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 00:12:56.919664 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 2 00:12:56.919674 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 2 00:12:56.919684 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 2 00:12:56.919694 kernel: Console: colour VGA+ 80x25
Jul 2 00:12:56.919704 kernel: printk: console [ttyS0] enabled
Jul 2 00:12:56.919714 kernel: ACPI: Core revision 20230628
Jul 2 00:12:56.919724 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 2 00:12:56.919734 kernel: APIC: Switch to symmetric I/O mode setup
Jul 2 00:12:56.919744 kernel: x2apic enabled
Jul 2 00:12:56.919754 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 2 00:12:56.919767 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 2 00:12:56.919777 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 2 00:12:56.919787 kernel: kvm-guest: setup PV IPIs
Jul 2 00:12:56.919797 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 2 00:12:56.919818 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 2 00:12:56.919828 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794746)
Jul 2 00:12:56.919839 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 2 00:12:56.919861 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 2 00:12:56.919872 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 2 00:12:56.919882 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 2 00:12:56.919892 kernel: Spectre V2 : Mitigation: Retpolines
Jul 2 00:12:56.919906 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jul 2 00:12:56.919916 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jul 2 00:12:56.919927 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 2 00:12:56.919937 kernel: RETBleed: Mitigation: untrained return thunk
Jul 2 00:12:56.919948 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 2 00:12:56.919961 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 2 00:12:56.919972 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 2 00:12:56.919983 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 2 00:12:56.919994 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 2 00:12:56.920005 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 2 00:12:56.920015 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 2 00:12:56.920025 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 2 00:12:56.920036 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 2 00:12:56.920049 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 2 00:12:56.920060 kernel: Freeing SMP alternatives memory: 32K
Jul 2 00:12:56.920070 kernel: pid_max: default: 32768 minimum: 301
Jul 2 00:12:56.920081 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Jul 2 00:12:56.920091 kernel: SELinux: Initializing.
Jul 2 00:12:56.920101 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 00:12:56.920112 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 00:12:56.920123 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 2 00:12:56.920133 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:12:56.920147 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:12:56.920157 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:12:56.920168 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 2 00:12:56.920178 kernel: ... version: 0
Jul 2 00:12:56.920189 kernel: ... bit width: 48
Jul 2 00:12:56.920199 kernel: ... generic registers: 6
Jul 2 00:12:56.920210 kernel: ... value mask: 0000ffffffffffff
Jul 2 00:12:56.920221 kernel: ... max period: 00007fffffffffff
Jul 2 00:12:56.920245 kernel: ... fixed-purpose events: 0
Jul 2 00:12:56.920259 kernel: ... event mask: 000000000000003f
Jul 2 00:12:56.920269 kernel: signal: max sigframe size: 1776
Jul 2 00:12:56.920280 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 00:12:56.920290 kernel: rcu: Max phase no-delay instances is 400.
Jul 2 00:12:56.920301 kernel: smp: Bringing up secondary CPUs ...
Jul 2 00:12:56.920311 kernel: smpboot: x86: Booting SMP configuration:
Jul 2 00:12:56.920322 kernel: .... node #0, CPUs: #1 #2 #3
Jul 2 00:12:56.920332 kernel: smp: Brought up 1 node, 4 CPUs
Jul 2 00:12:56.920343 kernel: smpboot: Max logical packages: 1
Jul 2 00:12:56.920356 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS)
Jul 2 00:12:56.920366 kernel: devtmpfs: initialized
Jul 2 00:12:56.920377 kernel: x86/mm: Memory block size: 128MB
Jul 2 00:12:56.920387 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 00:12:56.920398 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 2 00:12:56.920409 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 00:12:56.920419 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 00:12:56.920429 kernel: audit: initializing netlink subsys (disabled)
Jul 2 00:12:56.920440 kernel: audit: type=2000 audit(1719879176.215:1): state=initialized audit_enabled=0 res=1
Jul 2 00:12:56.920453 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 00:12:56.920464 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 2 00:12:56.920474 kernel: cpuidle: using governor menu
Jul 2 00:12:56.920484 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 00:12:56.920495 kernel: dca service started, version 1.12.1
Jul 2 00:12:56.920505 kernel: PCI: Using configuration type 1 for base access
Jul 2 00:12:56.920516 kernel: PCI: Using configuration type 1 for extended access
Jul 2 00:12:56.920526 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 2 00:12:56.920537 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 2 00:12:56.920550 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 2 00:12:56.920560 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 00:12:56.920571 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 2 00:12:56.920581 kernel: ACPI: Added _OSI(Module Device)
Jul 2 00:12:56.920592 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 00:12:56.920602 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 00:12:56.920613 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 00:12:56.920624 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 2 00:12:56.920634 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 2 00:12:56.920647 kernel: ACPI: Interpreter enabled
Jul 2 00:12:56.920658 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 2 00:12:56.920668 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 2 00:12:56.920679 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 2 00:12:56.920689 kernel: PCI: Using E820 reservations for host bridge windows
Jul 2 00:12:56.920699 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jul 2 00:12:56.920710 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 2 00:12:56.920941 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 2 00:12:56.920964 kernel: acpiphp: Slot [3] registered
Jul 2 00:12:56.920975 kernel: acpiphp: Slot [4] registered
Jul 2 00:12:56.920985 kernel: acpiphp: Slot [5] registered
Jul 2 00:12:56.920996 kernel: acpiphp: Slot [6] registered
Jul 2 00:12:56.921006 kernel: acpiphp: Slot [7] registered
Jul 2 00:12:56.921018 kernel: acpiphp: Slot [8] registered
Jul 2 00:12:56.921031 kernel: acpiphp: Slot [9] registered
Jul 2 00:12:56.921041 kernel: acpiphp: Slot [10] registered
Jul 2 00:12:56.921051 kernel: acpiphp: Slot [11] registered
Jul 2 00:12:56.921062 kernel: acpiphp: Slot [12] registered
Jul 2 00:12:56.921075 kernel: acpiphp: Slot [13] registered
Jul 2 00:12:56.921086 kernel: acpiphp: Slot [14] registered
Jul 2 00:12:56.921096 kernel: acpiphp: Slot [15] registered
Jul 2 00:12:56.921106 kernel: acpiphp: Slot [16] registered
Jul 2 00:12:56.921117 kernel: acpiphp: Slot [17] registered
Jul 2 00:12:56.921127 kernel: acpiphp: Slot [18] registered
Jul 2 00:12:56.921137 kernel: acpiphp: Slot [19] registered
Jul 2 00:12:56.921147 kernel: acpiphp: Slot [20] registered
Jul 2 00:12:56.921158 kernel: acpiphp: Slot [21] registered
Jul 2 00:12:56.921171 kernel: acpiphp: Slot [22] registered
Jul 2 00:12:56.921181 kernel: acpiphp: Slot [23] registered
Jul 2 00:12:56.921192 kernel: acpiphp: Slot [24] registered
Jul 2 00:12:56.921202 kernel: acpiphp: Slot [25] registered
Jul 2 00:12:56.921212 kernel: acpiphp: Slot [26] registered
Jul 2 00:12:56.921222 kernel: acpiphp: Slot [27] registered
Jul 2 00:12:56.921246 kernel: acpiphp: Slot [28] registered
Jul 2 00:12:56.921271 kernel: acpiphp: Slot [29] registered
Jul 2 00:12:56.921281 kernel: acpiphp: Slot [30] registered
Jul 2 00:12:56.921291 kernel: acpiphp: Slot [31] registered
Jul 2 00:12:56.921306 kernel: PCI host bridge to bus 0000:00
Jul 2 00:12:56.921478 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 2 00:12:56.921621 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 2 00:12:56.921760 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 2 00:12:56.921913 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window]
Jul 2 00:12:56.922053 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jul 2 00:12:56.922198 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 2 00:12:56.922394 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jul 2 00:12:56.922562 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jul 2 00:12:56.922728 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jul 2 00:12:56.922894 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf]
Jul 2 00:12:56.923054 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jul 2 00:12:56.923207 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jul 2 00:12:56.923400 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jul 2 00:12:56.923588 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jul 2 00:12:56.923753 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jul 2 00:12:56.923919 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jul 2 00:12:56.924074 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jul 2 00:12:56.924262 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000
Jul 2 00:12:56.924419 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jul 2 00:12:56.924577 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jul 2 00:12:56.924731 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jul 2 00:12:56.924901 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 2 00:12:56.925067 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00
Jul 2 00:12:56.925222 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc09f]
Jul 2 00:12:56.925459 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jul 2 00:12:56.925662 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jul 2 00:12:56.925842 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jul 2 00:12:56.925998 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jul 2 00:12:56.926155 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jul 2 00:12:56.926325 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jul 2 00:12:56.926498 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Jul 2 00:12:56.926654 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0a0-0xc0bf]
Jul 2 00:12:56.926892 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jul 2 00:12:56.927090 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jul 2 00:12:56.927348 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jul 2 00:12:56.927366 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 2 00:12:56.927377 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 2 00:12:56.927388 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 2 00:12:56.927398 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 2 00:12:56.927409 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jul 2 00:12:56.927419 kernel: iommu: Default domain type: Translated
Jul 2 00:12:56.927435 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 2 00:12:56.927445 kernel: PCI: Using ACPI for IRQ routing
Jul 2 00:12:56.927456 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 2 00:12:56.927466 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 2 00:12:56.927477 kernel: e820: reserve RAM buffer [mem 0x9cfdd000-0x9fffffff]
Jul 2 00:12:56.927632 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jul 2 00:12:56.927785 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jul 2 00:12:56.927952 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 2 00:12:56.927971 kernel: vgaarb: loaded
Jul 2 00:12:56.927982 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 2 00:12:56.927993 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 2 00:12:56.928004 kernel: clocksource: Switched to clocksource kvm-clock
Jul 2 00:12:56.928014 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 00:12:56.928025 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 00:12:56.928036 kernel: pnp: PnP ACPI init
Jul 2 00:12:56.928199 kernel: pnp 00:02: [dma 2]
Jul 2 00:12:56.928218 kernel: pnp: PnP ACPI: found 6 devices
Jul 2 00:12:56.928243 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 2 00:12:56.928254 kernel: NET: Registered PF_INET protocol family
Jul 2 00:12:56.928265 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 2 00:12:56.928276 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 2 00:12:56.928287 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 00:12:56.928298 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 2 00:12:56.928308 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 2 00:12:56.928319 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 2 00:12:56.928333 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 00:12:56.928344 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 00:12:56.928354 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 00:12:56.928365 kernel: NET: Registered PF_XDP protocol family
Jul 2 00:12:56.928510 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 2 00:12:56.928696 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 2 00:12:56.928849 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 2 00:12:56.928989 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window]
Jul 2 00:12:56.929129 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jul 2 00:12:56.929303 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jul 2 00:12:56.929458 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jul 2 00:12:56.929473 kernel: PCI: CLS 0 bytes, default 64
Jul 2 00:12:56.929484 kernel: Initialise system trusted keyrings
Jul 2 00:12:56.929495 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 2 00:12:56.929505 kernel: Key type asymmetric registered
Jul 2 00:12:56.929533 kernel: Asymmetric key parser 'x509' registered
Jul 2 00:12:56.929552 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 2 00:12:56.929568 kernel: io scheduler mq-deadline registered
Jul 2 00:12:56.929578 kernel: io scheduler kyber registered
Jul 2 00:12:56.929589 kernel: io scheduler bfq registered
Jul 2 00:12:56.929599 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 2 00:12:56.929614 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jul 2 00:12:56.929625 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Jul 2 00:12:56.929635 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jul 2 00:12:56.929646 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 00:12:56.929657 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 2 00:12:56.929671 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 2 00:12:56.929681 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 2 00:12:56.929692 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 2 00:12:56.929861 kernel: rtc_cmos 00:05: RTC can wake from S4
Jul 2 00:12:56.929878 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 2 00:12:56.930017 kernel: rtc_cmos 00:05: registered as rtc0
Jul 2 00:12:56.930168 kernel: rtc_cmos 00:05: setting system clock to 2024-07-02T00:12:56 UTC (1719879176)
Jul 2 00:12:56.930396 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 2 00:12:56.930417 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 2 00:12:56.930428 kernel: NET: Registered PF_INET6 protocol family
Jul 2 00:12:56.930438 kernel: Segment Routing with IPv6
Jul 2 00:12:56.930449 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 00:12:56.930459 kernel: NET: Registered PF_PACKET protocol family
Jul 2 00:12:56.930470 kernel: Key type dns_resolver registered
Jul 2 00:12:56.930480 kernel: IPI shorthand broadcast: enabled
Jul 2 00:12:56.930491 kernel: sched_clock: Marking stable (663003538, 104661038)->(818956280, -51291704)
Jul 2 00:12:56.930501 kernel: registered taskstats version 1
Jul 2 00:12:56.930514 kernel: Loading compiled-in X.509 certificates
Jul 2 00:12:56.930525 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.36-flatcar: be1ede902d88b56c26cc000ff22391c78349d771'
Jul 2 00:12:56.930535 kernel: Key type .fscrypt registered
Jul 2 00:12:56.930546 kernel: Key type fscrypt-provisioning registered
Jul 2 00:12:56.930557 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 2 00:12:56.930567 kernel: ima: Allocated hash algorithm: sha1
Jul 2 00:12:56.930578 kernel: ima: No architecture policies found
Jul 2 00:12:56.930588 kernel: clk: Disabling unused clocks
Jul 2 00:12:56.930599 kernel: Freeing unused kernel image (initmem) memory: 49328K
Jul 2 00:12:56.930612 kernel: Write protecting the kernel read-only data: 36864k
Jul 2 00:12:56.930623 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K
Jul 2 00:12:56.930633 kernel: Run /init as init process
Jul 2 00:12:56.930644 kernel: with arguments:
Jul 2 00:12:56.930654 kernel: /init
Jul 2 00:12:56.930664 kernel: with environment:
Jul 2 00:12:56.930675 kernel: HOME=/
Jul 2 00:12:56.930705 kernel: TERM=linux
Jul 2 00:12:56.930718 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 00:12:56.930735 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 00:12:56.930748 systemd[1]: Detected virtualization kvm.
Jul 2 00:12:56.930760 systemd[1]: Detected architecture x86-64.
Jul 2 00:12:56.930771 systemd[1]: Running in initrd.
Jul 2 00:12:56.930783 systemd[1]: No hostname configured, using default hostname.
Jul 2 00:12:56.930794 systemd[1]: Hostname set to .
Jul 2 00:12:56.930819 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 00:12:56.930831 systemd[1]: Queued start job for default target initrd.target.
Jul 2 00:12:56.930843 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:12:56.930854 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:12:56.930867 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 2 00:12:56.930878 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 00:12:56.930890 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 2 00:12:56.930902 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 2 00:12:56.930919 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 2 00:12:56.930931 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 2 00:12:56.930943 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:12:56.930954 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:12:56.930966 systemd[1]: Reached target paths.target - Path Units.
Jul 2 00:12:56.930977 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 00:12:56.930989 systemd[1]: Reached target swap.target - Swaps.
Jul 2 00:12:56.931003 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 00:12:56.931015 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 00:12:56.931026 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 00:12:56.931038 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 2 00:12:56.931050 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 2 00:12:56.931062 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:12:56.931073 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:12:56.931085 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:12:56.931097 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 00:12:56.931112 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 2 00:12:56.931124 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 00:12:56.931135 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 2 00:12:56.931147 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 00:12:56.931158 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 00:12:56.931173 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 00:12:56.931185 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:12:56.931196 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 2 00:12:56.931208 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:12:56.931220 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 00:12:56.931245 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 2 00:12:56.931283 systemd-journald[193]: Collecting audit messages is disabled.
Jul 2 00:12:56.931310 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 00:12:56.931321 systemd-journald[193]: Journal started
Jul 2 00:12:56.931349 systemd-journald[193]: Runtime Journal (/run/log/journal/101bdeb0d1dd4465b40d16380a2df770) is 6.0M, max 48.4M, 42.3M free.
Jul 2 00:12:56.927220 systemd-modules-load[194]: Inserted module 'overlay'
Jul 2 00:12:56.959608 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 00:12:56.961371 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:12:56.965252 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 2 00:12:56.967913 systemd-modules-load[194]: Inserted module 'br_netfilter'
Jul 2 00:12:56.968832 kernel: Bridge firewalling registered
Jul 2 00:12:56.969364 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:12:56.970603 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 00:12:56.974921 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 00:12:56.975310 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:12:56.977093 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 00:12:56.988468 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:12:56.991054 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:12:56.991324 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:12:57.001379 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 00:12:57.001647 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:12:57.006027 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 2 00:12:57.023390 dracut-cmdline[233]: dracut-dracut-053
Jul 2 00:12:57.026950 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b
Jul 2 00:12:57.034437 systemd-resolved[224]: Positive Trust Anchors:
Jul 2 00:12:57.034455 systemd-resolved[224]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 00:12:57.034495 systemd-resolved[224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 00:12:57.037013 systemd-resolved[224]: Defaulting to hostname 'linux'.
Jul 2 00:12:57.037994 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 00:12:57.044166 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:12:57.133274 kernel: SCSI subsystem initialized
Jul 2 00:12:57.144254 kernel: Loading iSCSI transport class v2.0-870.
Jul 2 00:12:57.156269 kernel: iscsi: registered transport (tcp)
Jul 2 00:12:57.183263 kernel: iscsi: registered transport (qla4xxx)
Jul 2 00:12:57.183338 kernel: QLogic iSCSI HBA Driver
Jul 2 00:12:57.234688 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 2 00:12:57.243463 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 2 00:12:57.270249 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 2 00:12:57.270279 kernel: device-mapper: uevent: version 1.0.3
Jul 2 00:12:57.273261 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 2 00:12:57.328283 kernel: raid6: avx2x4 gen() 18423 MB/s
Jul 2 00:12:57.345254 kernel: raid6: avx2x2 gen() 17815 MB/s
Jul 2 00:12:57.362682 kernel: raid6: avx2x1 gen() 14822 MB/s
Jul 2 00:12:57.362707 kernel: raid6: using algorithm avx2x4 gen() 18423 MB/s
Jul 2 00:12:57.380725 kernel: raid6: .... xor() 5352 MB/s, rmw enabled
Jul 2 00:12:57.380779 kernel: raid6: using avx2x2 recovery algorithm
Jul 2 00:12:57.415264 kernel: xor: automatically using best checksumming function avx
Jul 2 00:12:57.609258 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 2 00:12:57.622249 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 00:12:57.631461 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:12:57.643994 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Jul 2 00:12:57.648448 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:12:57.658616 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 2 00:12:57.671197 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation
Jul 2 00:12:57.702320 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 00:12:57.715375 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 00:12:57.781248 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:12:57.792385 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 2 00:12:57.806256 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jul 2 00:12:57.820477 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 2 00:12:57.820970 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 2 00:12:57.820984 kernel: GPT:9289727 != 19775487
Jul 2 00:12:57.820994 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 2 00:12:57.821005 kernel: GPT:9289727 != 19775487
Jul 2 00:12:57.821015 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 2 00:12:57.821031 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:12:57.814011 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 2 00:12:57.818756 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 00:12:57.820746 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:12:57.826587 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 00:12:57.840210 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 2 00:12:57.846279 kernel: cryptd: max_cpu_qlen set to 1000
Jul 2 00:12:57.856560 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 00:12:57.864254 kernel: BTRFS: device fsid 2fd636b8-f582-46f8-bde2-15e56e3958c1 devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (476)
Jul 2 00:12:57.866312 kernel: libata version 3.00 loaded.
Jul 2 00:12:57.866222 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 2 00:12:57.872653 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (469)
Jul 2 00:12:57.872698 kernel: ata_piix 0000:00:01.1: version 2.13
Jul 2 00:12:57.894089 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 2 00:12:57.894106 kernel: AES CTR mode by8 optimization enabled
Jul 2 00:12:57.894116 kernel: scsi host0: ata_piix
Jul 2 00:12:57.894359 kernel: scsi host1: ata_piix
Jul 2 00:12:57.894516 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14
Jul 2 00:12:57.894527 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15
Jul 2 00:12:57.889151 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 2 00:12:57.898360 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 2 00:12:57.899647 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 2 00:12:57.909491 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 2 00:12:57.919406 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 2 00:12:57.920564 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 00:12:57.920638 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:12:57.923573 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:12:57.925448 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:12:57.925518 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:12:57.928325 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:12:57.935323 disk-uuid[540]: Primary Header is updated.
Jul 2 00:12:57.935323 disk-uuid[540]: Secondary Entries is updated.
Jul 2 00:12:57.935323 disk-uuid[540]: Secondary Header is updated.
Jul 2 00:12:57.940916 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:12:57.937416 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:12:58.008473 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:12:58.017465 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:12:58.033060 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:12:58.053249 kernel: ata2: found unknown device (class 0)
Jul 2 00:12:58.055246 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jul 2 00:12:58.057284 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 2 00:12:58.105729 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jul 2 00:12:58.118602 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 2 00:12:58.118628 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Jul 2 00:12:58.957834 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:12:58.957902 disk-uuid[542]: The operation has completed successfully.
Jul 2 00:12:59.003820 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 2 00:12:59.003951 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 2 00:12:59.024407 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 2 00:12:59.028475 sh[585]: Success
Jul 2 00:12:59.043247 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jul 2 00:12:59.077025 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 2 00:12:59.087817 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 2 00:12:59.091578 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 2 00:12:59.105924 kernel: BTRFS info (device dm-0): first mount of filesystem 2fd636b8-f582-46f8-bde2-15e56e3958c1
Jul 2 00:12:59.105964 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:12:59.105975 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 2 00:12:59.107173 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 2 00:12:59.108091 kernel: BTRFS info (device dm-0): using free space tree
Jul 2 00:12:59.113363 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 2 00:12:59.115282 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 2 00:12:59.128491 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 2 00:12:59.130320 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 2 00:12:59.144710 kernel: BTRFS info (device vda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:12:59.144770 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:12:59.144782 kernel: BTRFS info (device vda6): using free space tree
Jul 2 00:12:59.148251 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 00:12:59.157255 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 2 00:12:59.159114 kernel: BTRFS info (device vda6): last unmount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:12:59.173626 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 2 00:12:59.181386 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 2 00:12:59.238665 ignition[691]: Ignition 2.18.0
Jul 2 00:12:59.238677 ignition[691]: Stage: fetch-offline
Jul 2 00:12:59.238720 ignition[691]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:12:59.238730 ignition[691]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:12:59.238939 ignition[691]: parsed url from cmdline: ""
Jul 2 00:12:59.238943 ignition[691]: no config URL provided
Jul 2 00:12:59.238948 ignition[691]: reading system config file "/usr/lib/ignition/user.ign"
Jul 2 00:12:59.238957 ignition[691]: no config at "/usr/lib/ignition/user.ign"
Jul 2 00:12:59.238988 ignition[691]: op(1): [started] loading QEMU firmware config module
Jul 2 00:12:59.238998 ignition[691]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 2 00:12:59.249068 ignition[691]: op(1): [finished] loading QEMU firmware config module
Jul 2 00:12:59.251243 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 00:12:59.260407 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 00:12:59.284934 systemd-networkd[776]: lo: Link UP
Jul 2 00:12:59.284947 systemd-networkd[776]: lo: Gained carrier
Jul 2 00:12:59.286980 systemd-networkd[776]: Enumeration completed
Jul 2 00:12:59.287134 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 00:12:59.287507 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:12:59.287513 systemd-networkd[776]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 00:12:59.288477 systemd-networkd[776]: eth0: Link UP
Jul 2 00:12:59.288482 systemd-networkd[776]: eth0: Gained carrier
Jul 2 00:12:59.288491 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:12:59.293081 systemd[1]: Reached target network.target - Network.
Jul 2 00:12:59.306276 systemd-networkd[776]: eth0: DHCPv4 address 10.0.0.45/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 2 00:12:59.309209 ignition[691]: parsing config with SHA512: f8ad023572cb6f00d6deb6ebfcd931de39e6bbc890cf399b4a3385fd5f75a4595564b2a03f83e4d1c2c3f2fbfc63af798f6df661bdd2c6b7667569efa85f0baf
Jul 2 00:12:59.313155 unknown[691]: fetched base config from "system"
Jul 2 00:12:59.313167 unknown[691]: fetched user config from "qemu"
Jul 2 00:12:59.315414 ignition[691]: fetch-offline: fetch-offline passed
Jul 2 00:12:59.316425 ignition[691]: Ignition finished successfully
Jul 2 00:12:59.318918 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 00:12:59.320374 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 2 00:12:59.328353 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 2 00:12:59.343081 ignition[780]: Ignition 2.18.0
Jul 2 00:12:59.343093 ignition[780]: Stage: kargs
Jul 2 00:12:59.343310 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:12:59.343324 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:12:59.344431 ignition[780]: kargs: kargs passed
Jul 2 00:12:59.348950 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 2 00:12:59.344493 ignition[780]: Ignition finished successfully
Jul 2 00:12:59.358377 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 2 00:12:59.370150 ignition[789]: Ignition 2.18.0
Jul 2 00:12:59.370162 ignition[789]: Stage: disks
Jul 2 00:12:59.370333 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:12:59.370345 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:12:59.373492 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 2 00:12:59.371200 ignition[789]: disks: disks passed
Jul 2 00:12:59.375158 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 2 00:12:59.371263 ignition[789]: Ignition finished successfully
Jul 2 00:12:59.377897 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 2 00:12:59.380950 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 00:12:59.382367 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 00:12:59.382439 systemd[1]: Reached target basic.target - Basic System.
Jul 2 00:12:59.394366 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 2 00:12:59.408202 systemd-resolved[224]: Detected conflict on linux IN A 10.0.0.45
Jul 2 00:12:59.408220 systemd-resolved[224]: Hostname conflict, changing published hostname from 'linux' to 'linux8'.
Jul 2 00:12:59.412090 systemd-fsck[800]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 2 00:12:59.456056 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 2 00:12:59.474410 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 2 00:12:59.582314 kernel: EXT4-fs (vda9): mounted filesystem c5a17c06-b440-4aab-a0fa-5b60bb1d8586 r/w with ordered data mode. Quota mode: none.
Jul 2 00:12:59.582570 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 2 00:12:59.584367 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 2 00:12:59.599312 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 00:12:59.601290 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 2 00:12:59.610168 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (808)
Jul 2 00:12:59.610200 kernel: BTRFS info (device vda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:12:59.610217 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:12:59.610247 kernel: BTRFS info (device vda6): using free space tree
Jul 2 00:12:59.603782 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 2 00:12:59.613804 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 00:12:59.603821 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 2 00:12:59.603843 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 00:12:59.614727 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 2 00:12:59.618942 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 00:12:59.622673 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 2 00:12:59.664511 initrd-setup-root[832]: cut: /sysroot/etc/passwd: No such file or directory
Jul 2 00:12:59.669098 initrd-setup-root[839]: cut: /sysroot/etc/group: No such file or directory
Jul 2 00:12:59.673138 initrd-setup-root[846]: cut: /sysroot/etc/shadow: No such file or directory
Jul 2 00:12:59.677258 initrd-setup-root[853]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 2 00:12:59.772313 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 2 00:12:59.780307 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 2 00:12:59.782426 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 2 00:12:59.789250 kernel: BTRFS info (device vda6): last unmount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:12:59.811212 ignition[922]: INFO : Ignition 2.18.0
Jul 2 00:12:59.811212 ignition[922]: INFO : Stage: mount
Jul 2 00:12:59.813948 ignition[922]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:12:59.813948 ignition[922]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:12:59.813948 ignition[922]: INFO : mount: mount passed
Jul 2 00:12:59.813948 ignition[922]: INFO : Ignition finished successfully
Jul 2 00:12:59.812026 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 2 00:12:59.814847 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 2 00:12:59.821398 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 2 00:13:00.105203 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 2 00:13:00.114508 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 00:13:00.121249 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (936)
Jul 2 00:13:00.123420 kernel: BTRFS info (device vda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:13:00.123480 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:13:00.123498 kernel: BTRFS info (device vda6): using free space tree
Jul 2 00:13:00.127272 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 00:13:00.128547 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 00:13:00.158896 ignition[953]: INFO : Ignition 2.18.0
Jul 2 00:13:00.158896 ignition[953]: INFO : Stage: files
Jul 2 00:13:00.160928 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:13:00.160928 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:13:00.164674 ignition[953]: DEBUG : files: compiled without relabeling support, skipping
Jul 2 00:13:00.166890 ignition[953]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 2 00:13:00.166890 ignition[953]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 2 00:13:00.172282 ignition[953]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 2 00:13:00.173832 ignition[953]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 2 00:13:00.173832 ignition[953]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 2 00:13:00.172939 unknown[953]: wrote ssh authorized keys file for user: core
Jul 2 00:13:00.178010 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 2 00:13:00.178010 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 2 00:13:00.178010 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 00:13:00.178010 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 2 00:13:00.208404 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 2 00:13:00.271390 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 00:13:00.291370 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 00:13:00.291370 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 2 00:13:00.588461 systemd-networkd[776]: eth0: Gained IPv6LL
Jul 2 00:13:00.632403 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Jul 2 00:13:00.714697 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 00:13:00.714697 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Jul 2 00:13:00.719276 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Jul 2 00:13:00.719276 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 00:13:00.719276 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 00:13:00.719276 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 00:13:00.719276 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 00:13:00.719276 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 00:13:00.719276 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 00:13:00.719276 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 00:13:00.719276 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 00:13:00.719276 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jul 2 00:13:00.719276 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jul 2 00:13:00.719276 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jul 2 00:13:00.719276 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1
Jul 2 00:13:01.096999 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Jul 2 00:13:01.487457 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jul 2 00:13:01.487457 ignition[953]: INFO : files: op(d): [started] processing unit "containerd.service"
Jul 2 00:13:01.491559 ignition[953]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 2 00:13:01.491559 ignition[953]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 2 00:13:01.491559 ignition[953]: INFO : files: op(d): [finished] processing unit "containerd.service"
Jul 2 00:13:01.491559 ignition[953]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Jul 2 00:13:01.491559 ignition[953]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 00:13:01.491559 ignition[953]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 00:13:01.491559 ignition[953]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Jul 2 00:13:01.491559 ignition[953]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Jul 2 00:13:01.491559 ignition[953]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 2 00:13:01.491559 ignition[953]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 2 00:13:01.491559 ignition[953]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Jul 2 00:13:01.491559 ignition[953]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
Jul 2 00:13:01.518245 ignition[953]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 2 00:13:01.523320 ignition[953]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 2 00:13:01.524982 ignition[953]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 2 00:13:01.524982 ignition[953]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service"
Jul 2 00:13:01.524982 ignition[953]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service"
Jul 2 00:13:01.524982 ignition[953]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 00:13:01.524982 ignition[953]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 00:13:01.524982 ignition[953]: INFO : files: files passed
Jul 2 00:13:01.524982 ignition[953]: INFO : Ignition finished successfully
Jul 2 00:13:01.527346 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 2 00:13:01.539394 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 2 00:13:01.541437 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 2 00:13:01.543672 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 2 00:13:01.543801 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 2 00:13:01.554992 initrd-setup-root-after-ignition[981]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 2 00:13:01.558474 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:13:01.558474 initrd-setup-root-after-ignition[983]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:13:01.579735 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:13:01.561727 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 00:13:01.580554 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 2 00:13:01.592443 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 2 00:13:01.623888 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 2 00:13:01.624024 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 2 00:13:01.626727 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 2 00:13:01.628925 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 2 00:13:01.631047 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 2 00:13:01.643462 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 2 00:13:01.657924 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 00:13:01.661544 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 2 00:13:01.678209 systemd[1]: Stopped target network.target - Network.
Jul 2 00:13:01.678450 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:13:01.678873 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:13:01.735160 ignition[1007]: INFO : Ignition 2.18.0
Jul 2 00:13:01.735160 ignition[1007]: INFO : Stage: umount
Jul 2 00:13:01.735160 ignition[1007]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:13:01.735160 ignition[1007]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:13:01.735160 ignition[1007]: INFO : umount: umount passed
Jul 2 00:13:01.735160 ignition[1007]: INFO : Ignition finished successfully
Jul 2 00:13:01.679313 systemd[1]: Stopped target timers.target - Timer Units.
Jul 2 00:13:01.679661 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 2 00:13:01.679838 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 00:13:01.680451 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 2 00:13:01.680879 systemd[1]: Stopped target basic.target - Basic System.
Jul 2 00:13:01.681303 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 2 00:13:01.681622 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 00:13:01.681997 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 2 00:13:01.682602 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 2 00:13:01.683015 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 00:13:01.683613 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 2 00:13:01.684013 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 2 00:13:01.684604 systemd[1]: Stopped target swap.target - Swaps. Jul 2 00:13:01.684971 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 2 00:13:01.685119 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 2 00:13:01.686193 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 2 00:13:01.686606 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 00:13:01.686958 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 2 00:13:01.687140 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 00:13:01.687560 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 00:13:01.687716 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 2 00:13:01.688557 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 00:13:01.688717 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 00:13:01.689308 systemd[1]: Stopped target paths.target - Path Units. Jul 2 00:13:01.689778 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 00:13:01.691382 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 00:13:01.691885 systemd[1]: Stopped target slices.target - Slice Units. Jul 2 00:13:01.692616 systemd[1]: Stopped target sockets.target - Socket Units. Jul 2 00:13:01.693027 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 00:13:01.693174 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Jul 2 00:13:01.693659 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 00:13:01.693798 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 2 00:13:01.694290 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 00:13:01.694461 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 2 00:13:01.694877 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 00:13:01.695019 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 2 00:13:01.696706 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 2 00:13:01.697120 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 00:13:01.697279 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 00:13:01.698478 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 2 00:13:01.698931 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 2 00:13:01.699676 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 2 00:13:01.699938 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 2 00:13:01.700043 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 00:13:01.700426 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 2 00:13:01.700520 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 2 00:13:01.706683 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 00:13:01.706831 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 2 00:13:01.719975 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 2 00:13:01.720144 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 2 00:13:01.720778 systemd[1]: ignition-disks.service: Deactivated successfully. 
Jul 2 00:13:01.720844 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 2 00:13:01.721528 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 2 00:13:01.721587 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 2 00:13:01.721922 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 2 00:13:01.721979 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 2 00:13:01.722500 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 2 00:13:01.722561 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 2 00:13:01.728812 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 2 00:13:01.728973 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 2 00:13:01.732646 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 2 00:13:01.732746 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:13:01.736666 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 2 00:13:01.742778 systemd-networkd[776]: eth0: DHCPv6 lease lost
Jul 2 00:13:01.745458 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 2 00:13:01.745590 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 2 00:13:01.747585 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 2 00:13:01.747627 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:13:01.755451 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 2 00:13:01.757805 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 2 00:13:01.757927 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 00:13:01.760143 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 00:13:01.760196 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:13:01.762530 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 2 00:13:01.762613 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:13:01.764734 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:13:01.784376 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 2 00:13:01.784511 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 2 00:13:01.792876 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 2 00:13:01.793058 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:13:01.794623 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 2 00:13:01.794673 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:13:01.796447 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 2 00:13:01.796487 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:13:01.798432 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 2 00:13:01.798482 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 00:13:01.800557 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 2 00:13:01.800603 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 2 00:13:01.802728 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 00:13:01.802777 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:13:01.812504 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 2 00:13:01.813896 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 2 00:13:01.813965 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:13:01.816447 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:13:01.816500 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:13:01.821769 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 2 00:13:01.821891 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 2 00:13:01.976538 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 2 00:13:01.976744 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 2 00:13:01.978217 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 2 00:13:01.980147 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 2 00:13:01.980217 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 2 00:13:01.994499 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 2 00:13:02.004380 systemd[1]: Switching root.
Jul 2 00:13:02.033166 systemd-journald[193]: Journal stopped
Jul 2 00:13:03.326806 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Jul 2 00:13:03.326875 kernel: SELinux: policy capability network_peer_controls=1
Jul 2 00:13:03.326889 kernel: SELinux: policy capability open_perms=1
Jul 2 00:13:03.326901 kernel: SELinux: policy capability extended_socket_class=1
Jul 2 00:13:03.326918 kernel: SELinux: policy capability always_check_network=0
Jul 2 00:13:03.326931 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 2 00:13:03.326943 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 2 00:13:03.326957 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 2 00:13:03.326968 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 2 00:13:03.326982 kernel: audit: type=1403 audit(1719879182.539:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 2 00:13:03.326994 systemd[1]: Successfully loaded SELinux policy in 39.980ms.
Jul 2 00:13:03.327014 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.700ms.
Jul 2 00:13:03.327027 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 00:13:03.327045 systemd[1]: Detected virtualization kvm.
Jul 2 00:13:03.327057 systemd[1]: Detected architecture x86-64.
Jul 2 00:13:03.327069 systemd[1]: Detected first boot.
Jul 2 00:13:03.327085 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 00:13:03.327097 zram_generator::config[1068]: No configuration found.
Jul 2 00:13:03.327112 systemd[1]: Populated /etc with preset unit settings.
Jul 2 00:13:03.327125 systemd[1]: Queued start job for default target multi-user.target.
Jul 2 00:13:03.327137 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 2 00:13:03.327152 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 2 00:13:03.327180 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 2 00:13:03.327197 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 2 00:13:03.327209 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 2 00:13:03.327221 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 2 00:13:03.327253 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 2 00:13:03.327265 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 2 00:13:03.327277 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 2 00:13:03.327289 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:13:03.327301 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:13:03.327313 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 2 00:13:03.327325 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 2 00:13:03.327338 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 2 00:13:03.327350 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 00:13:03.327364 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 2 00:13:03.327376 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:13:03.327388 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 2 00:13:03.327405 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:13:03.327417 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 00:13:03.327430 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 00:13:03.327442 systemd[1]: Reached target swap.target - Swaps.
Jul 2 00:13:03.327453 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 2 00:13:03.327468 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 2 00:13:03.327480 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 2 00:13:03.327492 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 2 00:13:03.327504 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:13:03.327517 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:13:03.327529 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:13:03.327541 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 2 00:13:03.327563 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 2 00:13:03.327585 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 2 00:13:03.327611 systemd[1]: Mounting media.mount - External Media Directory...
Jul 2 00:13:03.327623 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:13:03.327635 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 2 00:13:03.327647 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 2 00:13:03.327670 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 2 00:13:03.327681 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 2 00:13:03.327693 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:13:03.327705 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 00:13:03.327717 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 2 00:13:03.327732 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:13:03.327744 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 00:13:03.327756 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:13:03.327768 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 2 00:13:03.327780 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:13:03.327793 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 2 00:13:03.327806 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jul 2 00:13:03.327820 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jul 2 00:13:03.327834 kernel: loop: module loaded
Jul 2 00:13:03.327846 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 00:13:03.327858 kernel: fuse: init (API version 7.39)
Jul 2 00:13:03.327870 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 00:13:03.327881 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 2 00:13:03.327893 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 2 00:13:03.327924 systemd-journald[1159]: Collecting audit messages is disabled.
Jul 2 00:13:03.327953 kernel: ACPI: bus type drm_connector registered
Jul 2 00:13:03.327966 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 00:13:03.327979 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:13:03.327991 systemd-journald[1159]: Journal started
Jul 2 00:13:03.328013 systemd-journald[1159]: Runtime Journal (/run/log/journal/101bdeb0d1dd4465b40d16380a2df770) is 6.0M, max 48.4M, 42.3M free.
Jul 2 00:13:03.333273 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 00:13:03.336096 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 2 00:13:03.337629 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 2 00:13:03.338900 systemd[1]: Mounted media.mount - External Media Directory.
Jul 2 00:13:03.340082 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 2 00:13:03.341344 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 2 00:13:03.342598 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 2 00:13:03.343991 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 2 00:13:03.345629 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:13:03.347233 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 2 00:13:03.347462 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 2 00:13:03.349025 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:13:03.349245 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:13:03.350749 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 00:13:03.350953 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 00:13:03.352870 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:13:03.353074 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:13:03.354638 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 2 00:13:03.354846 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 2 00:13:03.356396 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:13:03.356624 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:13:03.358239 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:13:03.360067 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 2 00:13:03.361917 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 2 00:13:03.376918 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 2 00:13:03.386445 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 2 00:13:03.389001 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 2 00:13:03.390432 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 2 00:13:03.394427 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 2 00:13:03.400409 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 2 00:13:03.402821 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 00:13:03.407021 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 2 00:13:03.408468 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 00:13:03.410904 systemd-journald[1159]: Time spent on flushing to /var/log/journal/101bdeb0d1dd4465b40d16380a2df770 is 16.392ms for 937 entries.
Jul 2 00:13:03.410904 systemd-journald[1159]: System Journal (/var/log/journal/101bdeb0d1dd4465b40d16380a2df770) is 8.0M, max 195.6M, 187.6M free.
Jul 2 00:13:03.436559 systemd-journald[1159]: Received client request to flush runtime journal.
Jul 2 00:13:03.413002 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 00:13:03.416498 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 2 00:13:03.425466 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 2 00:13:03.427177 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 2 00:13:03.440534 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 2 00:13:03.444214 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:13:03.446539 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 2 00:13:03.452998 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 2 00:13:03.465470 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 2 00:13:03.467635 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:13:03.468115 systemd-tmpfiles[1205]: ACLs are not supported, ignoring.
Jul 2 00:13:03.468136 systemd-tmpfiles[1205]: ACLs are not supported, ignoring.
Jul 2 00:13:03.476946 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 00:13:03.488499 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 2 00:13:03.489982 udevadm[1218]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jul 2 00:13:03.517986 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 2 00:13:03.529383 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 00:13:03.548891 systemd-tmpfiles[1233]: ACLs are not supported, ignoring.
Jul 2 00:13:03.548921 systemd-tmpfiles[1233]: ACLs are not supported, ignoring.
Jul 2 00:13:03.556387 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:13:04.065832 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 2 00:13:04.078564 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:13:04.107759 systemd-udevd[1239]: Using default interface naming scheme 'v255'.
Jul 2 00:13:04.123833 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:13:04.138937 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 00:13:04.154651 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 2 00:13:04.161156 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Jul 2 00:13:04.174392 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1240)
Jul 2 00:13:04.178251 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1242)
Jul 2 00:13:04.212264 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jul 2 00:13:04.213703 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 2 00:13:04.217246 kernel: ACPI: button: Power Button [PWRF]
Jul 2 00:13:04.237194 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jul 2 00:13:04.280248 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Jul 2 00:13:04.289443 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 2 00:13:04.307254 kernel: mousedev: PS/2 mouse device common for all mice
Jul 2 00:13:04.310079 systemd-networkd[1245]: lo: Link UP
Jul 2 00:13:04.310093 systemd-networkd[1245]: lo: Gained carrier
Jul 2 00:13:04.311683 systemd-networkd[1245]: Enumeration completed
Jul 2 00:13:04.311787 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 00:13:04.314798 systemd-networkd[1245]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:13:04.314814 systemd-networkd[1245]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 00:13:04.317298 systemd-networkd[1245]: eth0: Link UP
Jul 2 00:13:04.317310 systemd-networkd[1245]: eth0: Gained carrier
Jul 2 00:13:04.317323 systemd-networkd[1245]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:13:04.328432 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 2 00:13:04.336719 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:13:04.377503 systemd-networkd[1245]: eth0: DHCPv4 address 10.0.0.45/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 2 00:13:04.408713 kernel: kvm_amd: TSC scaling supported
Jul 2 00:13:04.408804 kernel: kvm_amd: Nested Virtualization enabled
Jul 2 00:13:04.408823 kernel: kvm_amd: Nested Paging enabled
Jul 2 00:13:04.409366 kernel: kvm_amd: LBR virtualization supported
Jul 2 00:13:04.410769 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jul 2 00:13:04.410799 kernel: kvm_amd: Virtual GIF supported
Jul 2 00:13:04.434271 kernel: EDAC MC: Ver: 3.0.0
Jul 2 00:13:04.464890 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 2 00:13:04.482408 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 2 00:13:04.484012 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:13:04.493725 lvm[1284]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 00:13:04.527682 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 2 00:13:04.529278 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:13:04.542353 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 2 00:13:04.546374 lvm[1293]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 00:13:04.580966 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 2 00:13:04.582570 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 2 00:13:04.583880 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 2 00:13:04.583908 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 00:13:04.584983 systemd[1]: Reached target machines.target - Containers.
Jul 2 00:13:04.587106 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 2 00:13:04.603449 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 2 00:13:04.606674 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 2 00:13:04.608138 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:13:04.609764 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 2 00:13:04.612822 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 2 00:13:04.616873 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 2 00:13:04.619672 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 2 00:13:04.631538 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 2 00:13:04.640698 kernel: loop0: detected capacity change from 0 to 80568
Jul 2 00:13:04.640786 kernel: block loop0: the capability attribute has been deprecated.
Jul 2 00:13:04.675266 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 2 00:13:04.705248 kernel: loop1: detected capacity change from 0 to 139904
Jul 2 00:13:04.780264 kernel: loop2: detected capacity change from 0 to 209816
Jul 2 00:13:04.807274 kernel: loop3: detected capacity change from 0 to 80568
Jul 2 00:13:04.833263 kernel: loop4: detected capacity change from 0 to 139904
Jul 2 00:13:04.844249 kernel: loop5: detected capacity change from 0 to 209816
Jul 2 00:13:04.849918 (sd-merge)[1313]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 2 00:13:04.850659 (sd-merge)[1313]: Merged extensions into '/usr'.
Jul 2 00:13:04.865110 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 2 00:13:04.867372 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 2 00:13:04.871892 systemd[1]: Reloading requested from client PID 1303 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 2 00:13:04.871914 systemd[1]: Reloading...
Jul 2 00:13:04.921256 zram_generator::config[1342]: No configuration found.
Jul 2 00:13:04.960283 ldconfig[1299]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 2 00:13:05.057932 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:13:05.127437 systemd[1]: Reloading finished in 255 ms.
Jul 2 00:13:05.148758 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 2 00:13:05.150586 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 2 00:13:05.167601 systemd[1]: Starting ensure-sysext.service...
Jul 2 00:13:05.170149 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 00:13:05.178808 systemd[1]: Reloading requested from client PID 1386 ('systemctl') (unit ensure-sysext.service)... Jul 2 00:13:05.178827 systemd[1]: Reloading... Jul 2 00:13:05.196166 systemd-tmpfiles[1393]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 00:13:05.196662 systemd-tmpfiles[1393]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 2 00:13:05.197666 systemd-tmpfiles[1393]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 2 00:13:05.198016 systemd-tmpfiles[1393]: ACLs are not supported, ignoring. Jul 2 00:13:05.198116 systemd-tmpfiles[1393]: ACLs are not supported, ignoring. Jul 2 00:13:05.201616 systemd-tmpfiles[1393]: Detected autofs mount point /boot during canonicalization of boot. Jul 2 00:13:05.201630 systemd-tmpfiles[1393]: Skipping /boot Jul 2 00:13:05.215425 systemd-tmpfiles[1393]: Detected autofs mount point /boot during canonicalization of boot. Jul 2 00:13:05.215443 systemd-tmpfiles[1393]: Skipping /boot Jul 2 00:13:05.235260 zram_generator::config[1423]: No configuration found. Jul 2 00:13:05.362114 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:13:05.426293 systemd[1]: Reloading finished in 247 ms. Jul 2 00:13:05.444175 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 00:13:05.466052 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 2 00:13:05.470180 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 2 00:13:05.473661 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 2 00:13:05.479398 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jul 2 00:13:05.486135 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 2 00:13:05.492672 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:13:05.492901 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:13:05.496815 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:13:05.502159 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:13:05.510006 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:13:05.513096 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:13:05.514548 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:13:05.515901 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:13:05.516165 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:13:05.518493 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:13:05.518769 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:13:05.521000 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:13:05.521323 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:13:05.532923 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 2 00:13:05.536348 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:13:05.536738 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:13:05.545788 augenrules[1500]: No rules
Jul 2 00:13:05.546664 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:13:05.551489 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:13:05.556474 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:13:05.557769 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:13:05.562966 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 2 00:13:05.564275 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:13:05.566002 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 2 00:13:05.567961 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 2 00:13:05.568618 systemd-resolved[1473]: Positive Trust Anchors:
Jul 2 00:13:05.568627 systemd-resolved[1473]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 00:13:05.568658 systemd-resolved[1473]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 00:13:05.569922 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:13:05.570136 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:13:05.571978 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:13:05.572306 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:13:05.572463 systemd-resolved[1473]: Defaulting to hostname 'linux'.
Jul 2 00:13:05.574412 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:13:05.574668 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:13:05.576267 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 00:13:05.585865 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 2 00:13:05.588733 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 2 00:13:05.592267 systemd[1]: Reached target network.target - Network.
Jul 2 00:13:05.593439 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:13:05.594789 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:13:05.594952 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:13:05.604375 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:13:05.606470 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 00:13:05.608583 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:13:05.611834 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:13:05.613049 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:13:05.613115 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 2 00:13:05.613138 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:13:05.613836 systemd[1]: Finished ensure-sysext.service.
Jul 2 00:13:05.615271 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:13:05.615544 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:13:05.617120 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 00:13:05.617416 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 00:13:05.620458 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:13:05.620708 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:13:05.624091 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:13:05.624413 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:13:05.628490 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 00:13:05.628594 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 00:13:05.638441 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 2 00:13:05.700283 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 2 00:13:06.116557 systemd-resolved[1473]: Clock change detected. Flushing caches.
Jul 2 00:13:06.116593 systemd-timesyncd[1537]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 2 00:13:06.116628 systemd-timesyncd[1537]: Initial clock synchronization to Tue 2024-07-02 00:13:06.116513 UTC.
Jul 2 00:13:06.117588 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 00:13:06.118756 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 2 00:13:06.120033 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 2 00:13:06.121323 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 2 00:13:06.122610 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 2 00:13:06.122642 systemd[1]: Reached target paths.target - Path Units.
Jul 2 00:13:06.123554 systemd[1]: Reached target time-set.target - System Time Set.
Jul 2 00:13:06.124827 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 2 00:13:06.126276 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 2 00:13:06.127545 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 00:13:06.129128 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 2 00:13:06.132176 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 2 00:13:06.134345 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 2 00:13:06.141228 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 2 00:13:06.142347 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 00:13:06.143333 systemd[1]: Reached target basic.target - Basic System.
Jul 2 00:13:06.144471 systemd[1]: System is tainted: cgroupsv1
Jul 2 00:13:06.144510 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 2 00:13:06.144531 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 2 00:13:06.145793 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 2 00:13:06.148067 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 2 00:13:06.153092 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 2 00:13:06.156953 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 2 00:13:06.158033 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 2 00:13:06.159897 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 2 00:13:06.163823 jq[1543]: false
Jul 2 00:13:06.164403 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 2 00:13:06.167929 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 2 00:13:06.173609 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 2 00:13:06.182786 extend-filesystems[1545]: Found loop3
Jul 2 00:13:06.182786 extend-filesystems[1545]: Found loop4
Jul 2 00:13:06.182786 extend-filesystems[1545]: Found loop5
Jul 2 00:13:06.182786 extend-filesystems[1545]: Found sr0
Jul 2 00:13:06.182786 extend-filesystems[1545]: Found vda
Jul 2 00:13:06.182786 extend-filesystems[1545]: Found vda1
Jul 2 00:13:06.182786 extend-filesystems[1545]: Found vda2
Jul 2 00:13:06.182786 extend-filesystems[1545]: Found vda3
Jul 2 00:13:06.182786 extend-filesystems[1545]: Found usr
Jul 2 00:13:06.182786 extend-filesystems[1545]: Found vda4
Jul 2 00:13:06.182786 extend-filesystems[1545]: Found vda6
Jul 2 00:13:06.182786 extend-filesystems[1545]: Found vda7
Jul 2 00:13:06.182786 extend-filesystems[1545]: Found vda9
Jul 2 00:13:06.182786 extend-filesystems[1545]: Checking size of /dev/vda9
Jul 2 00:13:06.202019 extend-filesystems[1545]: Resized partition /dev/vda9
Jul 2 00:13:06.185976 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 2 00:13:06.184652 dbus-daemon[1542]: [system] SELinux support is enabled
Jul 2 00:13:06.206531 extend-filesystems[1569]: resize2fs 1.47.0 (5-Feb-2023)
Jul 2 00:13:06.187728 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 2 00:13:06.200963 systemd[1]: Starting update-engine.service - Update Engine...
Jul 2 00:13:06.207936 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 2 00:13:06.210514 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 2 00:13:06.213995 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 2 00:13:06.224229 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1253)
Jul 2 00:13:06.221767 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 2 00:13:06.223290 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 2 00:13:06.223719 systemd[1]: motdgen.service: Deactivated successfully.
Jul 2 00:13:06.224107 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 2 00:13:06.227171 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 2 00:13:06.227494 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 2 00:13:06.231861 update_engine[1564]: I0702 00:13:06.231776  1564 main.cc:92] Flatcar Update Engine starting
Jul 2 00:13:06.235235 update_engine[1564]: I0702 00:13:06.235136  1564 update_check_scheduler.cc:74] Next update check in 5m49s
Jul 2 00:13:06.237300 jq[1570]: true
Jul 2 00:13:06.250828 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 2 00:13:06.259527 jq[1577]: true
Jul 2 00:13:06.270322 (ntainerd)[1581]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 2 00:13:06.278491 extend-filesystems[1569]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 2 00:13:06.278491 extend-filesystems[1569]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 2 00:13:06.278491 extend-filesystems[1569]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 2 00:13:06.283797 extend-filesystems[1545]: Resized filesystem in /dev/vda9
Jul 2 00:13:06.282611 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 2 00:13:06.284409 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 2 00:13:06.297378 tar[1572]: linux-amd64/helm
Jul 2 00:13:06.300082 systemd[1]: Started update-engine.service - Update Engine.
Jul 2 00:13:06.302069 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 2 00:13:06.302105 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 2 00:13:06.303983 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 2 00:13:06.304010 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 2 00:13:06.306759 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 2 00:13:06.314965 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 2 00:13:06.327422 systemd-logind[1559]: Watching system buttons on /dev/input/event1 (Power Button)
Jul 2 00:13:06.327460 systemd-logind[1559]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 2 00:13:06.330997 bash[1605]: Updated "/home/core/.ssh/authorized_keys"
Jul 2 00:13:06.330893 systemd-logind[1559]: New seat seat0.
Jul 2 00:13:06.333661 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 2 00:13:06.335837 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 2 00:13:06.340199 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 2 00:13:06.364513 locksmithd[1606]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 2 00:13:06.453838 sshd_keygen[1568]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 2 00:13:06.484684 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 2 00:13:06.495406 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 2 00:13:06.506194 systemd[1]: issuegen.service: Deactivated successfully.
Jul 2 00:13:06.506545 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 2 00:13:06.516255 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 2 00:13:06.527480 containerd[1581]: time="2024-07-02T00:13:06.527359981Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17
Jul 2 00:13:06.533494 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 2 00:13:06.544160 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 2 00:13:06.547748 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 2 00:13:06.550086 systemd[1]: Reached target getty.target - Login Prompts.
Jul 2 00:13:06.555838 containerd[1581]: time="2024-07-02T00:13:06.555773547Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 2 00:13:06.555838 containerd[1581]: time="2024-07-02T00:13:06.555838018Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:13:06.557572 containerd[1581]: time="2024-07-02T00:13:06.557525073Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.36-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:13:06.557572 containerd[1581]: time="2024-07-02T00:13:06.557556362Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:13:06.557932 containerd[1581]: time="2024-07-02T00:13:06.557906308Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:13:06.557932 containerd[1581]: time="2024-07-02T00:13:06.557926326Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 2 00:13:06.558043 containerd[1581]: time="2024-07-02T00:13:06.558026424Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 2 00:13:06.558127 containerd[1581]: time="2024-07-02T00:13:06.558090133Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:13:06.558127 containerd[1581]: time="2024-07-02T00:13:06.558105382Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 2 00:13:06.558212 containerd[1581]: time="2024-07-02T00:13:06.558197314Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:13:06.558465 containerd[1581]: time="2024-07-02T00:13:06.558441041Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 2 00:13:06.558486 containerd[1581]: time="2024-07-02T00:13:06.558464305Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 2 00:13:06.558486 containerd[1581]: time="2024-07-02T00:13:06.558474434Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:13:06.558665 containerd[1581]: time="2024-07-02T00:13:06.558642439Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:13:06.558665 containerd[1581]: time="2024-07-02T00:13:06.558659421Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 2 00:13:06.558739 containerd[1581]: time="2024-07-02T00:13:06.558716749Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 2 00:13:06.558739 containerd[1581]: time="2024-07-02T00:13:06.558732759Z" level=info msg="metadata content store policy set" policy=shared
Jul 2 00:13:06.565598 containerd[1581]: time="2024-07-02T00:13:06.565478594Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 2 00:13:06.565598 containerd[1581]: time="2024-07-02T00:13:06.565513590Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 2 00:13:06.565598 containerd[1581]: time="2024-07-02T00:13:06.565526184Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 2 00:13:06.565598 containerd[1581]: time="2024-07-02T00:13:06.565557543Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 2 00:13:06.565598 containerd[1581]: time="2024-07-02T00:13:06.565572731Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 2 00:13:06.565598 containerd[1581]: time="2024-07-02T00:13:06.565583501Z" level=info msg="NRI interface is disabled by configuration."
Jul 2 00:13:06.565598 containerd[1581]: time="2024-07-02T00:13:06.565595744Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 2 00:13:06.565892 containerd[1581]: time="2024-07-02T00:13:06.565732871Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 2 00:13:06.565892 containerd[1581]: time="2024-07-02T00:13:06.565747098Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 2 00:13:06.565892 containerd[1581]: time="2024-07-02T00:13:06.565758720Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 2 00:13:06.565892 containerd[1581]: time="2024-07-02T00:13:06.565774379Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 2 00:13:06.565892 containerd[1581]: time="2024-07-02T00:13:06.565787374Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 2 00:13:06.565892 containerd[1581]: time="2024-07-02T00:13:06.565803744Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 2 00:13:06.565892 containerd[1581]: time="2024-07-02T00:13:06.565834933Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 2 00:13:06.565892 containerd[1581]: time="2024-07-02T00:13:06.565851905Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 2 00:13:06.565892 containerd[1581]: time="2024-07-02T00:13:06.565865741Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 2 00:13:06.565892 containerd[1581]: time="2024-07-02T00:13:06.565879326Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 2 00:13:06.565892 containerd[1581]: time="2024-07-02T00:13:06.565891068Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 2 00:13:06.566081 containerd[1581]: time="2024-07-02T00:13:06.565902700Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 2 00:13:06.566081 containerd[1581]: time="2024-07-02T00:13:06.566013298Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 2 00:13:06.566417 containerd[1581]: time="2024-07-02T00:13:06.566387930Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 2 00:13:06.566457 containerd[1581]: time="2024-07-02T00:13:06.566417927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 2 00:13:06.566457 containerd[1581]: time="2024-07-02T00:13:06.566430701Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 2 00:13:06.566501 containerd[1581]: time="2024-07-02T00:13:06.566456860Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 2 00:13:06.566522 containerd[1581]: time="2024-07-02T00:13:06.566512274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 2 00:13:06.566541 containerd[1581]: time="2024-07-02T00:13:06.566524797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 2 00:13:06.566561 containerd[1581]: time="2024-07-02T00:13:06.566539515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 2 00:13:06.566561 containerd[1581]: time="2024-07-02T00:13:06.566552189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 2 00:13:06.566604 containerd[1581]: time="2024-07-02T00:13:06.566564341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 2 00:13:06.566604 containerd[1581]: time="2024-07-02T00:13:06.566576244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 2 00:13:06.566604 containerd[1581]: time="2024-07-02T00:13:06.566587625Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 2 00:13:06.566604 containerd[1581]: time="2024-07-02T00:13:06.566598596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 2 00:13:06.566676 containerd[1581]: time="2024-07-02T00:13:06.566616690Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 2 00:13:06.566789 containerd[1581]: time="2024-07-02T00:13:06.566763896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 2 00:13:06.566789 containerd[1581]: time="2024-07-02T00:13:06.566785917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 2 00:13:06.566865 containerd[1581]: time="2024-07-02T00:13:06.566798431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 2 00:13:06.566865 containerd[1581]: time="2024-07-02T00:13:06.566826032Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 2 00:13:06.566865 containerd[1581]: time="2024-07-02T00:13:06.566838345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 2 00:13:06.566865 containerd[1581]: time="2024-07-02T00:13:06.566850468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 2 00:13:06.566865 containerd[1581]: time="2024-07-02T00:13:06.566862481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 2 00:13:06.566950 containerd[1581]: time="2024-07-02T00:13:06.566874082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 2 00:13:06.567227 containerd[1581]: time="2024-07-02T00:13:06.567134261Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 2 00:13:06.567227 containerd[1581]: time="2024-07-02T00:13:06.567200896Z" level=info msg="Connect containerd service"
Jul 2 00:13:06.567227 containerd[1581]: time="2024-07-02T00:13:06.567230772Z" level=info msg="using legacy CRI server"
Jul 2 00:13:06.567227 containerd[1581]: time="2024-07-02T00:13:06.567237384Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 2 00:13:06.567493 containerd[1581]: time="2024-07-02T00:13:06.567315501Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 2 00:13:06.567882 containerd[1581]: time="2024-07-02T00:13:06.567848841Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 00:13:06.567935 containerd[1581]: time="2024-07-02T00:13:06.567896611Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 2 00:13:06.567935 containerd[1581]: time="2024-07-02T00:13:06.567912020Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 2 00:13:06.567935 containerd[1581]: time="2024-07-02T00:13:06.567922069Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 2 00:13:06.567935 containerd[1581]: time="2024-07-02T00:13:06.567933750Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 2 00:13:06.568751 containerd[1581]: time="2024-07-02T00:13:06.568212413Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 2 00:13:06.568751 containerd[1581]: time="2024-07-02T00:13:06.568215860Z" level=info msg="Start subscribing containerd event"
Jul 2 00:13:06.568751 containerd[1581]: time="2024-07-02T00:13:06.568269851Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 2 00:13:06.568751 containerd[1581]: time="2024-07-02T00:13:06.568328030Z" level=info msg="Start recovering state"
Jul 2 00:13:06.568751 containerd[1581]: time="2024-07-02T00:13:06.568436183Z" level=info msg="Start event monitor"
Jul 2 00:13:06.568751 containerd[1581]: time="2024-07-02T00:13:06.568469836Z" level=info msg="Start snapshots syncer"
Jul 2 00:13:06.568751 containerd[1581]: time="2024-07-02T00:13:06.568483762Z" level=info msg="Start cni network conf syncer for default"
Jul 2 00:13:06.568751 containerd[1581]: time="2024-07-02T00:13:06.568493420Z" level=info msg="Start streaming server"
Jul 2 00:13:06.568751 containerd[1581]: time="2024-07-02T00:13:06.568601233Z" level=info msg="containerd successfully booted in 0.042342s"
Jul 2 00:13:06.568750 systemd[1]: Started containerd.service - containerd container runtime.
Jul 2 00:13:06.727914 tar[1572]: linux-amd64/LICENSE
Jul 2 00:13:06.728107 tar[1572]: linux-amd64/README.md
Jul 2 00:13:06.742309 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 2 00:13:06.763991 systemd-networkd[1245]: eth0: Gained IPv6LL
Jul 2 00:13:06.767931 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 2 00:13:06.769940 systemd[1]: Reached target network-online.target - Network is Online.
Jul 2 00:13:06.782047 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jul 2 00:13:06.784651 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:13:06.787086 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 2 00:13:06.810424 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jul 2 00:13:06.810772 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jul 2 00:13:06.812396 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 2 00:13:06.819195 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 2 00:13:07.472896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:13:07.475061 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 2 00:13:07.476620 systemd[1]: Startup finished in 6.640s (kernel) + 4.560s (userspace) = 11.200s.
Jul 2 00:13:07.498628 (kubelet)[1683]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 00:13:08.029418 kubelet[1683]: E0702 00:13:08.029325 1683 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:13:08.033967 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:13:08.034302 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:13:14.885011 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 2 00:13:14.898146 systemd[1]: Started sshd@0-10.0.0.45:22-10.0.0.1:45414.service - OpenSSH per-connection server daemon (10.0.0.1:45414).
Jul 2 00:13:14.942854 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 45414 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:13:14.944919 sshd[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:13:14.953167 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 2 00:13:14.962140 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 2 00:13:14.963885 systemd-logind[1559]: New session 1 of user core.
Jul 2 00:13:14.976489 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 2 00:13:14.979025 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 2 00:13:14.987221 (systemd)[1703]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:13:15.099617 systemd[1703]: Queued start job for default target default.target.
Jul 2 00:13:15.100134 systemd[1703]: Created slice app.slice - User Application Slice.
Jul 2 00:13:15.100165 systemd[1703]: Reached target paths.target - Paths.
Jul 2 00:13:15.100182 systemd[1703]: Reached target timers.target - Timers.
Jul 2 00:13:15.110919 systemd[1703]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 2 00:13:15.118432 systemd[1703]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 2 00:13:15.118545 systemd[1703]: Reached target sockets.target - Sockets.
Jul 2 00:13:15.118573 systemd[1703]: Reached target basic.target - Basic System.
Jul 2 00:13:15.118645 systemd[1703]: Reached target default.target - Main User Target.
Jul 2 00:13:15.118700 systemd[1703]: Startup finished in 123ms.
Jul 2 00:13:15.119115 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 2 00:13:15.120641 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 2 00:13:15.183129 systemd[1]: Started sshd@1-10.0.0.45:22-10.0.0.1:45420.service - OpenSSH per-connection server daemon (10.0.0.1:45420).
Jul 2 00:13:15.215784 sshd[1715]: Accepted publickey for core from 10.0.0.1 port 45420 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:13:15.217659 sshd[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:13:15.222094 systemd-logind[1559]: New session 2 of user core.
Jul 2 00:13:15.237145 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 2 00:13:15.292355 sshd[1715]: pam_unix(sshd:session): session closed for user core
Jul 2 00:13:15.301256 systemd[1]: Started sshd@2-10.0.0.45:22-10.0.0.1:45432.service - OpenSSH per-connection server daemon (10.0.0.1:45432).
Jul 2 00:13:15.302138 systemd[1]: sshd@1-10.0.0.45:22-10.0.0.1:45420.service: Deactivated successfully.
Jul 2 00:13:15.304492 systemd[1]: session-2.scope: Deactivated successfully.
Jul 2 00:13:15.305421 systemd-logind[1559]: Session 2 logged out. Waiting for processes to exit.
Jul 2 00:13:15.307318 systemd-logind[1559]: Removed session 2.
Jul 2 00:13:15.337367 sshd[1721]: Accepted publickey for core from 10.0.0.1 port 45432 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:13:15.339108 sshd[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:13:15.344251 systemd-logind[1559]: New session 3 of user core.
Jul 2 00:13:15.358208 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 2 00:13:15.409106 sshd[1721]: pam_unix(sshd:session): session closed for user core
Jul 2 00:13:15.418136 systemd[1]: Started sshd@3-10.0.0.45:22-10.0.0.1:45434.service - OpenSSH per-connection server daemon (10.0.0.1:45434).
Jul 2 00:13:15.418723 systemd[1]: sshd@2-10.0.0.45:22-10.0.0.1:45432.service: Deactivated successfully.
Jul 2 00:13:15.421589 systemd-logind[1559]: Session 3 logged out. Waiting for processes to exit.
Jul 2 00:13:15.422484 systemd[1]: session-3.scope: Deactivated successfully.
Jul 2 00:13:15.424389 systemd-logind[1559]: Removed session 3.
Jul 2 00:13:15.453532 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 45434 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:13:15.455309 sshd[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:13:15.459788 systemd-logind[1559]: New session 4 of user core.
Jul 2 00:13:15.470120 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 2 00:13:15.527938 sshd[1728]: pam_unix(sshd:session): session closed for user core
Jul 2 00:13:15.537039 systemd[1]: Started sshd@4-10.0.0.45:22-10.0.0.1:45440.service - OpenSSH per-connection server daemon (10.0.0.1:45440).
Jul 2 00:13:15.537501 systemd[1]: sshd@3-10.0.0.45:22-10.0.0.1:45434.service: Deactivated successfully.
Jul 2 00:13:15.540318 systemd-logind[1559]: Session 4 logged out. Waiting for processes to exit.
Jul 2 00:13:15.541690 systemd[1]: session-4.scope: Deactivated successfully.
Jul 2 00:13:15.542671 systemd-logind[1559]: Removed session 4.
Jul 2 00:13:15.571658 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 45440 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:13:15.573446 sshd[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:13:15.577596 systemd-logind[1559]: New session 5 of user core.
Jul 2 00:13:15.590168 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 2 00:13:15.650358 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 2 00:13:15.650727 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 00:13:15.674366 sudo[1743]: pam_unix(sudo:session): session closed for user root
Jul 2 00:13:15.676757 sshd[1736]: pam_unix(sshd:session): session closed for user core
Jul 2 00:13:15.695079 systemd[1]: Started sshd@5-10.0.0.45:22-10.0.0.1:45456.service - OpenSSH per-connection server daemon (10.0.0.1:45456).
Jul 2 00:13:15.695554 systemd[1]: sshd@4-10.0.0.45:22-10.0.0.1:45440.service: Deactivated successfully.
Jul 2 00:13:15.698352 systemd-logind[1559]: Session 5 logged out. Waiting for processes to exit.
Jul 2 00:13:15.699069 systemd[1]: session-5.scope: Deactivated successfully.
Jul 2 00:13:15.700664 systemd-logind[1559]: Removed session 5.
Jul 2 00:13:15.728581 sshd[1745]: Accepted publickey for core from 10.0.0.1 port 45456 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:13:15.730189 sshd[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:13:15.734275 systemd-logind[1559]: New session 6 of user core.
Jul 2 00:13:15.748161 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 2 00:13:15.802332 sudo[1753]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 2 00:13:15.802688 sudo[1753]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 00:13:15.806881 sudo[1753]: pam_unix(sudo:session): session closed for user root
Jul 2 00:13:15.814031 sudo[1752]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jul 2 00:13:15.814325 sudo[1752]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 00:13:15.835055 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jul 2 00:13:15.836891 auditctl[1756]: No rules
Jul 2 00:13:15.837329 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 2 00:13:15.837667 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jul 2 00:13:15.841045 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 2 00:13:15.873018 augenrules[1775]: No rules
Jul 2 00:13:15.874873 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 2 00:13:15.876311 sudo[1752]: pam_unix(sudo:session): session closed for user root
Jul 2 00:13:15.878389 sshd[1745]: pam_unix(sshd:session): session closed for user core
Jul 2 00:13:15.889097 systemd[1]: Started sshd@6-10.0.0.45:22-10.0.0.1:45460.service - OpenSSH per-connection server daemon (10.0.0.1:45460).
Jul 2 00:13:15.889737 systemd[1]: sshd@5-10.0.0.45:22-10.0.0.1:45456.service: Deactivated successfully.
Jul 2 00:13:15.892005 systemd[1]: session-6.scope: Deactivated successfully.
Jul 2 00:13:15.892914 systemd-logind[1559]: Session 6 logged out. Waiting for processes to exit.
Jul 2 00:13:15.894495 systemd-logind[1559]: Removed session 6.
Jul 2 00:13:15.922790 sshd[1782]: Accepted publickey for core from 10.0.0.1 port 45460 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:13:15.924508 sshd[1782]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:13:15.928772 systemd-logind[1559]: New session 7 of user core.
Jul 2 00:13:15.940070 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 2 00:13:15.992873 sudo[1788]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 2 00:13:15.993161 sudo[1788]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 00:13:16.098087 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 2 00:13:16.098575 (dockerd)[1799]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 2 00:13:16.347257 dockerd[1799]: time="2024-07-02T00:13:16.347099544Z" level=info msg="Starting up"
Jul 2 00:13:17.172404 dockerd[1799]: time="2024-07-02T00:13:17.172348710Z" level=info msg="Loading containers: start."
Jul 2 00:13:17.297840 kernel: Initializing XFRM netlink socket
Jul 2 00:13:17.381592 systemd-networkd[1245]: docker0: Link UP
Jul 2 00:13:17.544976 dockerd[1799]: time="2024-07-02T00:13:17.544852574Z" level=info msg="Loading containers: done."
Jul 2 00:13:17.598371 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck263825974-merged.mount: Deactivated successfully.
Jul 2 00:13:17.600304 dockerd[1799]: time="2024-07-02T00:13:17.600261074Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 2 00:13:17.600479 dockerd[1799]: time="2024-07-02T00:13:17.600456571Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Jul 2 00:13:17.600629 dockerd[1799]: time="2024-07-02T00:13:17.600602665Z" level=info msg="Daemon has completed initialization"
Jul 2 00:13:17.634619 dockerd[1799]: time="2024-07-02T00:13:17.634556405Z" level=info msg="API listen on /run/docker.sock"
Jul 2 00:13:17.634752 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 2 00:13:18.160342 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 2 00:13:18.168977 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:13:18.328995 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:13:18.333959 (kubelet)[1951]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 00:13:18.340314 containerd[1581]: time="2024-07-02T00:13:18.340268330Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\""
Jul 2 00:13:18.934787 kubelet[1951]: E0702 00:13:18.934694 1951 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:13:18.943103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:13:18.943429 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:13:21.894436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1030482222.mount: Deactivated successfully.
Jul 2 00:13:27.819960 containerd[1581]: time="2024-07-02T00:13:27.819871372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:13:27.823167 containerd[1581]: time="2024-07-02T00:13:27.823064814Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.11: active requests=0, bytes read=34605178"
Jul 2 00:13:27.830593 containerd[1581]: time="2024-07-02T00:13:27.830527524Z" level=info msg="ImageCreate event name:\"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:13:27.894261 containerd[1581]: time="2024-07-02T00:13:27.894183297Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:13:27.895877 containerd[1581]: time="2024-07-02T00:13:27.895821951Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.11\" with image id \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\", size \"34601978\" in 9.555493708s"
Jul 2 00:13:27.895922 containerd[1581]: time="2024-07-02T00:13:27.895887053Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\""
Jul 2 00:13:27.917996 containerd[1581]: time="2024-07-02T00:13:27.917941800Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\""
Jul 2 00:13:29.160280 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 2 00:13:29.174054 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:13:29.392409 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:13:29.431845 (kubelet)[2037]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 00:13:30.223522 kubelet[2037]: E0702 00:13:30.223440 2037 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:13:30.228798 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:13:30.229109 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:13:31.522625 containerd[1581]: time="2024-07-02T00:13:31.522524734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:13:31.570356 containerd[1581]: time="2024-07-02T00:13:31.570254709Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.11: active requests=0, bytes read=31719491"
Jul 2 00:13:31.590937 containerd[1581]: time="2024-07-02T00:13:31.590874514Z" level=info msg="ImageCreate event name:\"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:13:31.603045 containerd[1581]: time="2024-07-02T00:13:31.602963323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:13:31.604464 containerd[1581]: time="2024-07-02T00:13:31.604403535Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.11\" with image id \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\", size \"33315989\" in 3.686423374s"
Jul 2 00:13:31.604464 containerd[1581]: time="2024-07-02T00:13:31.604455723Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\""
Jul 2 00:13:31.634027 containerd[1581]: time="2024-07-02T00:13:31.633976756Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\""
Jul 2 00:13:34.196588 containerd[1581]: time="2024-07-02T00:13:34.196472341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:13:34.198231 containerd[1581]: time="2024-07-02T00:13:34.198161159Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.11: active requests=0, bytes read=16925505"
Jul 2 00:13:34.199690 containerd[1581]: time="2024-07-02T00:13:34.199651906Z" level=info msg="ImageCreate event name:\"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:13:34.203640 containerd[1581]: time="2024-07-02T00:13:34.203601837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:13:34.204777 containerd[1581]: time="2024-07-02T00:13:34.204728420Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.11\" with image id \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\", size \"18522021\" in 2.570709214s"
Jul 2 00:13:34.204777 containerd[1581]: time="2024-07-02T00:13:34.204766772Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\""
Jul 2 00:13:34.231041 containerd[1581]: time="2024-07-02T00:13:34.231004566Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\""
Jul 2 00:13:35.567488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount129424623.mount: Deactivated successfully.
Jul 2 00:13:35.832564 containerd[1581]: time="2024-07-02T00:13:35.832403705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:13:35.833347 containerd[1581]: time="2024-07-02T00:13:35.833302511Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.11: active requests=0, bytes read=28118419"
Jul 2 00:13:35.835083 containerd[1581]: time="2024-07-02T00:13:35.835047275Z" level=info msg="ImageCreate event name:\"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:13:35.837123 containerd[1581]: time="2024-07-02T00:13:35.837082312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:13:35.837597 containerd[1581]: time="2024-07-02T00:13:35.837563375Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.11\" with image id \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\", repo tag \"registry.k8s.io/kube-proxy:v1.28.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\", size \"28117438\" in 1.606518473s"
Jul 2 00:13:35.837597 containerd[1581]: time="2024-07-02T00:13:35.837593161Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\""
Jul 2 00:13:35.867197 containerd[1581]: time="2024-07-02T00:13:35.867155632Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jul 2 00:13:36.631605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3078706929.mount: Deactivated successfully.
Jul 2 00:13:36.825314 containerd[1581]: time="2024-07-02T00:13:36.825224247Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:13:36.839991 containerd[1581]: time="2024-07-02T00:13:36.839914908Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Jul 2 00:13:36.845907 containerd[1581]: time="2024-07-02T00:13:36.845841166Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:13:36.860790 containerd[1581]: time="2024-07-02T00:13:36.860716373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:13:36.861419 containerd[1581]: time="2024-07-02T00:13:36.861371883Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 994.170134ms"
Jul 2 00:13:36.861419 containerd[1581]: time="2024-07-02T00:13:36.861412980Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jul 2 00:13:36.882397 containerd[1581]: time="2024-07-02T00:13:36.882270640Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Jul 2 00:13:38.057629 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2796754443.mount: Deactivated successfully.
Jul 2 00:13:40.332992 containerd[1581]: time="2024-07-02T00:13:40.332929287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:13:40.334506 containerd[1581]: time="2024-07-02T00:13:40.334439848Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625"
Jul 2 00:13:40.336057 containerd[1581]: time="2024-07-02T00:13:40.336020233Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:13:40.339418 containerd[1581]: time="2024-07-02T00:13:40.339372951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:13:40.340956 containerd[1581]: time="2024-07-02T00:13:40.340912919Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.458592254s"
Jul 2 00:13:40.340956 containerd[1581]: time="2024-07-02T00:13:40.340947896Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Jul 2 00:13:40.366663 containerd[1581]: time="2024-07-02T00:13:40.366610680Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""
Jul 2 00:13:40.410249 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul 2 00:13:40.426121 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:13:40.575247 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:13:40.581365 (kubelet)[2157]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 00:13:40.751246 kubelet[2157]: E0702 00:13:40.751149 2157 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:13:40.756443 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:13:40.756844 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:13:43.393792 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1663776760.mount: Deactivated successfully.
Jul 2 00:13:44.161213 containerd[1581]: time="2024-07-02T00:13:44.161112641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:13:44.162862 containerd[1581]: time="2024-07-02T00:13:44.162788693Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=16191749"
Jul 2 00:13:44.164241 containerd[1581]: time="2024-07-02T00:13:44.164194508Z" level=info msg="ImageCreate event name:\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:13:44.167125 containerd[1581]: time="2024-07-02T00:13:44.167055274Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:13:44.167975 containerd[1581]: time="2024-07-02T00:13:44.167893054Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"16190758\" in 3.801225094s"
Jul 2 00:13:44.167975 containerd[1581]: time="2024-07-02T00:13:44.167966625Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\""
Jul 2 00:13:46.790799 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:13:46.802072 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:13:46.820749 systemd[1]: Reloading requested from client PID 2251 ('systemctl') (unit session-7.scope)...
Jul 2 00:13:46.820765 systemd[1]: Reloading...
Jul 2 00:13:46.900066 zram_generator::config[2288]: No configuration found.
Jul 2 00:13:47.318579 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:13:47.394091 systemd[1]: Reloading finished in 572 ms.
Jul 2 00:13:47.454936 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 2 00:13:47.458746 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 2 00:13:47.470318 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:13:47.491034 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:13:47.724886 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:13:47.740018 (kubelet)[2347]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 2 00:13:47.846284 kubelet[2347]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 00:13:47.846284 kubelet[2347]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 2 00:13:47.848724 kubelet[2347]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 00:13:47.848724 kubelet[2347]: I0702 00:13:47.846862 2347 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 2 00:13:48.196395 kubelet[2347]: I0702 00:13:48.196347 2347 server.go:467] "Kubelet version" kubeletVersion="v1.28.7"
Jul 2 00:13:48.196395 kubelet[2347]: I0702 00:13:48.196380 2347 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 2 00:13:48.199047 kubelet[2347]: I0702 00:13:48.196906 2347 server.go:895] "Client rotation is on, will bootstrap in background"
Jul 2 00:13:48.230875 kubelet[2347]: I0702 00:13:48.229875 2347 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 00:13:48.240080 kubelet[2347]: E0702 00:13:48.239985 2347 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.45:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.45:6443: connect: connection refused
Jul 2 00:13:48.279111 kubelet[2347]: I0702 00:13:48.279047 2347 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 2 00:13:48.279674 kubelet[2347]: I0702 00:13:48.279638 2347 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 2 00:13:48.280441 kubelet[2347]: I0702 00:13:48.279910 2347 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jul 2 00:13:48.280441 kubelet[2347]: I0702 00:13:48.279936 2347 topology_manager.go:138] "Creating topology manager with none policy"
Jul 2 00:13:48.280441 kubelet[2347]: I0702 00:13:48.279947 2347 container_manager_linux.go:301] "Creating device plugin manager"
Jul 2 00:13:48.280892 kubelet[2347]: I0702 00:13:48.280844 2347 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 00:13:48.286213 kubelet[2347]: I0702 00:13:48.286160 2347 kubelet.go:393] "Attempting to sync node with API server"
Jul 2 00:13:48.286213 kubelet[2347]: I0702 00:13:48.286209 2347 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 2 00:13:48.286329 kubelet[2347]: I0702 00:13:48.286250 2347 kubelet.go:309] "Adding apiserver pod source"
Jul 2 00:13:48.286329 kubelet[2347]: I0702 00:13:48.286273 2347 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 2 00:13:48.289004 kubelet[2347]: I0702 00:13:48.288969 2347 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Jul 2 00:13:48.290907 kubelet[2347]: W0702 00:13:48.289712 2347 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.45:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused
Jul 2 00:13:48.290907 kubelet[2347]: E0702 00:13:48.289773 2347 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.45:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused
Jul 2 00:13:48.290907 kubelet[2347]: W0702 00:13:48.289854 2347 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused
Jul 2 00:13:48.290907 kubelet[2347]: E0702 00:13:48.289884 2347 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp
10.0.0.45:6443: connect: connection refused Jul 2 00:13:48.297627 kubelet[2347]: W0702 00:13:48.297573 2347 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 2 00:13:48.298854 kubelet[2347]: I0702 00:13:48.298440 2347 server.go:1232] "Started kubelet" Jul 2 00:13:48.299007 kubelet[2347]: I0702 00:13:48.298972 2347 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jul 2 00:13:48.300645 kubelet[2347]: I0702 00:13:48.299373 2347 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 00:13:48.300645 kubelet[2347]: I0702 00:13:48.299443 2347 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 00:13:48.302776 kubelet[2347]: I0702 00:13:48.300733 2347 server.go:462] "Adding debug handlers to kubelet server" Jul 2 00:13:48.302776 kubelet[2347]: I0702 00:13:48.301441 2347 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 00:13:48.305988 kubelet[2347]: E0702 00:13:48.305011 2347 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17de3d0bbb4cbd11", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 13, 48, 298411281, time.Local), 
LastTimestamp:time.Date(2024, time.July, 2, 0, 13, 48, 298411281, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.45:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.45:6443: connect: connection refused'(may retry after sleeping) Jul 2 00:13:48.306353 kubelet[2347]: I0702 00:13:48.306312 2347 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 00:13:48.306628 kubelet[2347]: I0702 00:13:48.306602 2347 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 00:13:48.311593 kubelet[2347]: E0702 00:13:48.309237 2347 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.45:6443: connect: connection refused" interval="200ms" Jul 2 00:13:48.311593 kubelet[2347]: W0702 00:13:48.309334 2347 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused Jul 2 00:13:48.311593 kubelet[2347]: E0702 00:13:48.309388 2347 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused Jul 2 00:13:48.311593 kubelet[2347]: I0702 00:13:48.310195 2347 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 00:13:48.315047 kubelet[2347]: E0702 00:13:48.314798 2347 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" 
mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jul 2 00:13:48.315047 kubelet[2347]: E0702 00:13:48.314848 2347 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 00:13:48.361329 kubelet[2347]: I0702 00:13:48.361289 2347 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 00:13:48.363237 kubelet[2347]: I0702 00:13:48.363211 2347 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 00:13:48.363417 kubelet[2347]: I0702 00:13:48.363402 2347 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 00:13:48.363457 kubelet[2347]: I0702 00:13:48.363426 2347 kubelet.go:2303] "Starting kubelet main sync loop" Jul 2 00:13:48.363502 kubelet[2347]: E0702 00:13:48.363473 2347 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 00:13:48.364359 kubelet[2347]: W0702 00:13:48.364311 2347 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused Jul 2 00:13:48.364406 kubelet[2347]: E0702 00:13:48.364368 2347 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused Jul 2 00:13:48.381433 kubelet[2347]: I0702 00:13:48.381397 2347 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 00:13:48.381433 kubelet[2347]: I0702 00:13:48.381422 2347 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 00:13:48.381433 kubelet[2347]: I0702 00:13:48.381448 
2347 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:13:48.408016 kubelet[2347]: I0702 00:13:48.407974 2347 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 00:13:48.408411 kubelet[2347]: E0702 00:13:48.408376 2347 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.45:6443/api/v1/nodes\": dial tcp 10.0.0.45:6443: connect: connection refused" node="localhost" Jul 2 00:13:48.463906 kubelet[2347]: E0702 00:13:48.463749 2347 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 00:13:48.510630 kubelet[2347]: E0702 00:13:48.510599 2347 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.45:6443: connect: connection refused" interval="400ms" Jul 2 00:13:48.610930 kubelet[2347]: I0702 00:13:48.610897 2347 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 00:13:48.611325 kubelet[2347]: E0702 00:13:48.611306 2347 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.45:6443/api/v1/nodes\": dial tcp 10.0.0.45:6443: connect: connection refused" node="localhost" Jul 2 00:13:48.664616 kubelet[2347]: E0702 00:13:48.664508 2347 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 00:13:48.793322 kubelet[2347]: I0702 00:13:48.793115 2347 policy_none.go:49] "None policy: Start" Jul 2 00:13:48.794095 kubelet[2347]: I0702 00:13:48.794047 2347 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 2 00:13:48.794095 kubelet[2347]: I0702 00:13:48.794090 2347 state_mem.go:35] "Initializing new in-memory state store" Jul 2 00:13:48.803201 kubelet[2347]: I0702 00:13:48.801901 2347 manager.go:471] "Failed to read data from checkpoint" 
checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 00:13:48.803201 kubelet[2347]: I0702 00:13:48.802250 2347 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 00:13:48.803368 kubelet[2347]: E0702 00:13:48.803342 2347 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 2 00:13:48.911430 kubelet[2347]: E0702 00:13:48.911389 2347 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.45:6443: connect: connection refused" interval="800ms" Jul 2 00:13:49.013238 kubelet[2347]: I0702 00:13:49.013200 2347 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 00:13:49.013754 kubelet[2347]: E0702 00:13:49.013704 2347 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.45:6443/api/v1/nodes\": dial tcp 10.0.0.45:6443: connect: connection refused" node="localhost" Jul 2 00:13:49.064923 kubelet[2347]: I0702 00:13:49.064745 2347 topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 00:13:49.066176 kubelet[2347]: I0702 00:13:49.066124 2347 topology_manager.go:215] "Topology Admit Handler" podUID="b123ac1c4037864d788343167b5046c5" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 00:13:49.066957 kubelet[2347]: I0702 00:13:49.066934 2347 topology_manager.go:215] "Topology Admit Handler" podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 00:13:49.188118 kubelet[2347]: W0702 00:13:49.188053 2347 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get 
"https://10.0.0.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused Jul 2 00:13:49.188118 kubelet[2347]: E0702 00:13:49.188113 2347 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused Jul 2 00:13:49.210612 kubelet[2347]: I0702 00:13:49.210528 2347 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b123ac1c4037864d788343167b5046c5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b123ac1c4037864d788343167b5046c5\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:13:49.210612 kubelet[2347]: I0702 00:13:49.210608 2347 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b123ac1c4037864d788343167b5046c5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b123ac1c4037864d788343167b5046c5\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:13:49.210783 kubelet[2347]: I0702 00:13:49.210643 2347 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:13:49.210783 kubelet[2347]: I0702 00:13:49.210671 2347 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: 
\"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:13:49.210783 kubelet[2347]: I0702 00:13:49.210694 2347 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost" Jul 2 00:13:49.210783 kubelet[2347]: I0702 00:13:49.210717 2347 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b123ac1c4037864d788343167b5046c5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b123ac1c4037864d788343167b5046c5\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:13:49.210783 kubelet[2347]: I0702 00:13:49.210772 2347 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:13:49.210951 kubelet[2347]: I0702 00:13:49.210827 2347 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:13:49.210951 kubelet[2347]: I0702 00:13:49.210857 2347 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: 
\"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:13:49.247289 kubelet[2347]: W0702 00:13:49.247213 2347 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused Jul 2 00:13:49.247289 kubelet[2347]: E0702 00:13:49.247292 2347 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused Jul 2 00:13:49.375753 kubelet[2347]: E0702 00:13:49.375611 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:13:49.376135 kubelet[2347]: E0702 00:13:49.376104 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:13:49.377280 containerd[1581]: time="2024-07-02T00:13:49.376582240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,}" Jul 2 00:13:49.377734 containerd[1581]: time="2024-07-02T00:13:49.376704262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,}" Jul 2 00:13:49.380123 kubelet[2347]: E0702 00:13:49.380076 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:13:49.380690 containerd[1581]: time="2024-07-02T00:13:49.380649020Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b123ac1c4037864d788343167b5046c5,Namespace:kube-system,Attempt:0,}" Jul 2 00:13:49.420053 kubelet[2347]: W0702 00:13:49.419920 2347 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.45:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused Jul 2 00:13:49.420053 kubelet[2347]: E0702 00:13:49.420002 2347 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.45:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused Jul 2 00:13:49.712498 kubelet[2347]: E0702 00:13:49.712448 2347 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.45:6443: connect: connection refused" interval="1.6s" Jul 2 00:13:49.789443 kubelet[2347]: W0702 00:13:49.789356 2347 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused Jul 2 00:13:49.789443 kubelet[2347]: E0702 00:13:49.789427 2347 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused Jul 2 00:13:49.815615 kubelet[2347]: I0702 00:13:49.815567 2347 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 00:13:49.815972 kubelet[2347]: E0702 00:13:49.815948 2347 kubelet_node_status.go:92] "Unable to register node with API 
server" err="Post \"https://10.0.0.45:6443/api/v1/nodes\": dial tcp 10.0.0.45:6443: connect: connection refused" node="localhost" Jul 2 00:13:50.338316 kubelet[2347]: E0702 00:13:50.338266 2347 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.45:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.45:6443: connect: connection refused Jul 2 00:13:51.286108 update_engine[1564]: I0702 00:13:51.286027 1564 update_attempter.cc:509] Updating boot flags... Jul 2 00:13:51.313716 kubelet[2347]: E0702 00:13:51.313671 2347 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.45:6443: connect: connection refused" interval="3.2s" Jul 2 00:13:51.418295 kubelet[2347]: I0702 00:13:51.418255 2347 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 00:13:51.418760 kubelet[2347]: E0702 00:13:51.418622 2347 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.45:6443/api/v1/nodes\": dial tcp 10.0.0.45:6443: connect: connection refused" node="localhost" Jul 2 00:13:51.464845 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2388) Jul 2 00:13:51.530848 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2391) Jul 2 00:13:51.575844 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2391) Jul 2 00:13:51.705320 kubelet[2347]: W0702 00:13:51.705273 2347 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.45:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: 
connection refused Jul 2 00:13:51.705320 kubelet[2347]: E0702 00:13:51.705314 2347 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.45:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused Jul 2 00:13:52.096127 kubelet[2347]: W0702 00:13:52.095973 2347 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused Jul 2 00:13:52.096127 kubelet[2347]: E0702 00:13:52.096016 2347 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused Jul 2 00:13:52.204482 kubelet[2347]: W0702 00:13:52.204419 2347 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused Jul 2 00:13:52.204482 kubelet[2347]: E0702 00:13:52.204471 2347 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused Jul 2 00:13:52.895616 kubelet[2347]: W0702 00:13:52.895567 2347 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused Jul 2 00:13:52.895616 kubelet[2347]: 
E0702 00:13:52.895610 2347 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused Jul 2 00:13:53.379796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount273434657.mount: Deactivated successfully. Jul 2 00:13:54.222205 containerd[1581]: time="2024-07-02T00:13:54.222101416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:13:54.329538 containerd[1581]: time="2024-07-02T00:13:54.329483783Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:13:54.429946 containerd[1581]: time="2024-07-02T00:13:54.429835725Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 2 00:13:54.514647 kubelet[2347]: E0702 00:13:54.514532 2347 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.45:6443: connect: connection refused" interval="6.4s" Jul 2 00:13:54.527404 kubelet[2347]: E0702 00:13:54.527364 2347 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.45:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.45:6443: connect: connection refused Jul 2 00:13:54.579113 containerd[1581]: time="2024-07-02T00:13:54.579014290Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active 
requests=0, bytes read=0" Jul 2 00:13:54.620010 kubelet[2347]: I0702 00:13:54.619987 2347 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 00:13:54.620375 kubelet[2347]: E0702 00:13:54.620349 2347 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.45:6443/api/v1/nodes\": dial tcp 10.0.0.45:6443: connect: connection refused" node="localhost" Jul 2 00:13:54.744431 containerd[1581]: time="2024-07-02T00:13:54.744361896Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:13:54.819892 containerd[1581]: time="2024-07-02T00:13:54.819702475Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:13:54.921968 containerd[1581]: time="2024-07-02T00:13:54.921872251Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 00:13:55.113560 containerd[1581]: time="2024-07-02T00:13:55.113366709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:13:55.114481 containerd[1581]: time="2024-07-02T00:13:55.114435653Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 5.73684694s" Jul 2 00:13:55.115343 containerd[1581]: time="2024-07-02T00:13:55.115298014Z" 
level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 5.737915423s"
Jul 2 00:13:55.414075 containerd[1581]: time="2024-07-02T00:13:55.414018763Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 6.03325785s"
Jul 2 00:13:55.702146 kubelet[2347]: W0702 00:13:55.702017 2347 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused
Jul 2 00:13:55.702146 kubelet[2347]: E0702 00:13:55.702070 2347 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused
Jul 2 00:13:55.925979 kubelet[2347]: W0702 00:13:55.925930 2347 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused
Jul 2 00:13:55.925979 kubelet[2347]: E0702 00:13:55.925972 2347 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused
Jul 2 00:13:56.406412 kubelet[2347]: W0702 00:13:56.406360 2347 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.45:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused
Jul 2 00:13:56.406412 kubelet[2347]: E0702 00:13:56.406406 2347 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.45:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused
Jul 2 00:13:56.831442 kubelet[2347]: E0702 00:13:56.831339 2347 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17de3d0bbb4cbd11", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 13, 48, 298411281, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 13, 48, 298411281, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.45:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.45:6443: connect: connection refused'(may retry after sleeping)
Jul 2 00:13:56.850364 containerd[1581]: time="2024-07-02T00:13:56.850264688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:13:56.850364 containerd[1581]: time="2024-07-02T00:13:56.850337205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:13:56.850364 containerd[1581]: time="2024-07-02T00:13:56.850353986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:13:56.850878 containerd[1581]: time="2024-07-02T00:13:56.850365028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:13:56.924837 containerd[1581]: time="2024-07-02T00:13:56.924488611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"7897a7ccb0b73a5ee82bf2878db624f5e42c2c5ea7d58f6496cfb55a033b6980\""
Jul 2 00:13:56.925524 kubelet[2347]: E0702 00:13:56.925493 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:13:56.929883 containerd[1581]: time="2024-07-02T00:13:56.929831218Z" level=info msg="CreateContainer within sandbox \"7897a7ccb0b73a5ee82bf2878db624f5e42c2c5ea7d58f6496cfb55a033b6980\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 2 00:13:56.934824 containerd[1581]: time="2024-07-02T00:13:56.934362260Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:13:56.934824 containerd[1581]: time="2024-07-02T00:13:56.934451509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:13:56.934824 containerd[1581]: time="2024-07-02T00:13:56.934467630Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:13:56.934824 containerd[1581]: time="2024-07-02T00:13:56.934483960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:13:57.013441 containerd[1581]: time="2024-07-02T00:13:57.013400138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"4db18e07255e42a59306679eac2874e837a4716b9c058db1801e252cbed9795c\""
Jul 2 00:13:57.014558 kubelet[2347]: E0702 00:13:57.014516 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:13:57.016573 containerd[1581]: time="2024-07-02T00:13:57.016513766Z" level=info msg="CreateContainer within sandbox \"4db18e07255e42a59306679eac2874e837a4716b9c058db1801e252cbed9795c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 2 00:13:57.212352 containerd[1581]: time="2024-07-02T00:13:57.212246254Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:13:57.212352 containerd[1581]: time="2024-07-02T00:13:57.212312129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:13:57.212352 containerd[1581]: time="2024-07-02T00:13:57.212330704Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:13:57.212352 containerd[1581]: time="2024-07-02T00:13:57.212343738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:13:57.283302 containerd[1581]: time="2024-07-02T00:13:57.283255575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b123ac1c4037864d788343167b5046c5,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb2e27ed6f48b1f07dd22bf72ce9b53bb02798cc74b6ee09d2ca1adf222648dd\""
Jul 2 00:13:57.284076 kubelet[2347]: E0702 00:13:57.284056 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:13:57.286286 containerd[1581]: time="2024-07-02T00:13:57.286255819Z" level=info msg="CreateContainer within sandbox \"bb2e27ed6f48b1f07dd22bf72ce9b53bb02798cc74b6ee09d2ca1adf222648dd\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 2 00:13:57.429061 kubelet[2347]: W0702 00:13:57.429010 2347 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused
Jul 2 00:13:57.429061 kubelet[2347]: E0702 00:13:57.429059 2347 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused
Jul 2 00:13:58.803942 kubelet[2347]: E0702 00:13:58.803889 2347 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jul 2 00:13:58.964952 containerd[1581]: time="2024-07-02T00:13:58.964884384Z" level=info msg="CreateContainer within sandbox \"7897a7ccb0b73a5ee82bf2878db624f5e42c2c5ea7d58f6496cfb55a033b6980\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8184324473481bf28269155d80a5ad010c5ed736d8d39a5872aaf0e58d3e9c1b\""
Jul 2 00:13:58.965724 containerd[1581]: time="2024-07-02T00:13:58.965671853Z" level=info msg="StartContainer for \"8184324473481bf28269155d80a5ad010c5ed736d8d39a5872aaf0e58d3e9c1b\""
Jul 2 00:13:59.258531 containerd[1581]: time="2024-07-02T00:13:59.258458004Z" level=info msg="StartContainer for \"8184324473481bf28269155d80a5ad010c5ed736d8d39a5872aaf0e58d3e9c1b\" returns successfully"
Jul 2 00:13:59.388382 kubelet[2347]: E0702 00:13:59.388359 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:13:59.675634 containerd[1581]: time="2024-07-02T00:13:59.675579478Z" level=info msg="CreateContainer within sandbox \"4db18e07255e42a59306679eac2874e837a4716b9c058db1801e252cbed9795c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7140a1769eed58f306238f840e6b69cd3148cb413e58f9912ee760972a19ad58\""
Jul 2 00:13:59.677195 containerd[1581]: time="2024-07-02T00:13:59.676093508Z" level=info msg="StartContainer for \"7140a1769eed58f306238f840e6b69cd3148cb413e58f9912ee760972a19ad58\""
Jul 2 00:14:00.009398 containerd[1581]: time="2024-07-02T00:14:00.009253583Z" level=info msg="StartContainer for \"7140a1769eed58f306238f840e6b69cd3148cb413e58f9912ee760972a19ad58\" returns successfully"
Jul 2 00:14:00.009398 containerd[1581]: time="2024-07-02T00:14:00.009261508Z" level=info msg="CreateContainer within sandbox \"bb2e27ed6f48b1f07dd22bf72ce9b53bb02798cc74b6ee09d2ca1adf222648dd\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ded438526775e89974759517d1a680e6180243ac2d961451f228bfce0a3f6e47\""
Jul 2 00:14:00.010169 containerd[1581]: time="2024-07-02T00:14:00.010118587Z" level=info msg="StartContainer for \"ded438526775e89974759517d1a680e6180243ac2d961451f228bfce0a3f6e47\""
Jul 2 00:14:00.267146 containerd[1581]: time="2024-07-02T00:14:00.267015170Z" level=info msg="StartContainer for \"ded438526775e89974759517d1a680e6180243ac2d961451f228bfce0a3f6e47\" returns successfully"
Jul 2 00:14:00.392764 kubelet[2347]: E0702 00:14:00.392728 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:14:00.394724 kubelet[2347]: E0702 00:14:00.394701 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:14:00.395222 kubelet[2347]: E0702 00:14:00.395201 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:14:01.022694 kubelet[2347]: I0702 00:14:01.022646 2347 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Jul 2 00:14:01.361824 kubelet[2347]: E0702 00:14:01.361663 2347 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jul 2 00:14:01.397214 kubelet[2347]: E0702 00:14:01.397179 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:14:01.398193 kubelet[2347]: E0702 00:14:01.398156 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:14:01.539317 kubelet[2347]: I0702 00:14:01.539256 2347 kubelet_node_status.go:73] "Successfully registered node" node="localhost"
Jul 2 00:14:01.841217 kubelet[2347]: E0702 00:14:01.841167 2347 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 2 00:14:01.941355 kubelet[2347]: E0702 00:14:01.941291 2347 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 2 00:14:02.041534 kubelet[2347]: E0702 00:14:02.041458 2347 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 2 00:14:02.142416 kubelet[2347]: E0702 00:14:02.142272 2347 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 2 00:14:02.242759 kubelet[2347]: E0702 00:14:02.242696 2347 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 2 00:14:02.343147 kubelet[2347]: E0702 00:14:02.343071 2347 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 2 00:14:02.398960 kubelet[2347]: E0702 00:14:02.398804 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:14:02.443751 kubelet[2347]: E0702 00:14:02.443680 2347 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 2 00:14:02.544587 kubelet[2347]: E0702 00:14:02.544494 2347 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 2 00:14:03.296101 kubelet[2347]: I0702 00:14:03.296029 2347 apiserver.go:52] "Watching apiserver"
Jul 2 00:14:03.311378 kubelet[2347]: I0702 00:14:03.311327 2347 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jul 2 00:14:04.469637 kubelet[2347]: E0702 00:14:04.469569 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:14:05.078996 kubelet[2347]: E0702 00:14:05.078966 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:14:05.403214 kubelet[2347]: E0702 00:14:05.403089 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:14:05.403214 kubelet[2347]: E0702 00:14:05.403202 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:14:07.683944 systemd[1]: Reloading requested from client PID 2641 ('systemctl') (unit session-7.scope)...
Jul 2 00:14:07.683964 systemd[1]: Reloading...
Jul 2 00:14:07.760844 zram_generator::config[2678]: No configuration found.
Jul 2 00:14:07.881519 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:14:07.960990 systemd[1]: Reloading finished in 276 ms.
Jul 2 00:14:07.993417 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:14:07.993747 kubelet[2347]: I0702 00:14:07.993658 2347 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 00:14:08.006175 systemd[1]: kubelet.service: Deactivated successfully.
Jul 2 00:14:08.006637 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:14:08.013988 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:14:08.227487 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:14:08.232494 (kubelet)[2733]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 2 00:14:08.276080 kubelet[2733]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 00:14:08.276080 kubelet[2733]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 2 00:14:08.276080 kubelet[2733]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 00:14:08.276517 kubelet[2733]: I0702 00:14:08.276135 2733 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 2 00:14:08.281314 kubelet[2733]: I0702 00:14:08.281272 2733 server.go:467] "Kubelet version" kubeletVersion="v1.28.7"
Jul 2 00:14:08.281314 kubelet[2733]: I0702 00:14:08.281296 2733 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 2 00:14:08.281495 kubelet[2733]: I0702 00:14:08.281480 2733 server.go:895] "Client rotation is on, will bootstrap in background"
Jul 2 00:14:08.282818 kubelet[2733]: I0702 00:14:08.282769 2733 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 2 00:14:08.283910 kubelet[2733]: I0702 00:14:08.283886 2733 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 00:14:08.291895 kubelet[2733]: I0702 00:14:08.291847 2733 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 2 00:14:08.292461 kubelet[2733]: I0702 00:14:08.292419 2733 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 2 00:14:08.292595 kubelet[2733]: I0702 00:14:08.292570 2733 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jul 2 00:14:08.292595 kubelet[2733]: I0702 00:14:08.292590 2733 topology_manager.go:138] "Creating topology manager with none policy"
Jul 2 00:14:08.292595 kubelet[2733]: I0702 00:14:08.292599 2733 container_manager_linux.go:301] "Creating device plugin manager"
Jul 2 00:14:08.292853 kubelet[2733]: I0702 00:14:08.292646 2733 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 00:14:08.292853 kubelet[2733]: I0702 00:14:08.292748 2733 kubelet.go:393] "Attempting to sync node with API server"
Jul 2 00:14:08.292853 kubelet[2733]: I0702 00:14:08.292761 2733 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 2 00:14:08.292853 kubelet[2733]: I0702 00:14:08.292787 2733 kubelet.go:309] "Adding apiserver pod source"
Jul 2 00:14:08.292853 kubelet[2733]: I0702 00:14:08.292803 2733 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 2 00:14:08.294157 kubelet[2733]: I0702 00:14:08.294122 2733 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Jul 2 00:14:08.294966 kubelet[2733]: I0702 00:14:08.294932 2733 server.go:1232] "Started kubelet"
Jul 2 00:14:08.295374 kubelet[2733]: I0702 00:14:08.295357 2733 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Jul 2 00:14:08.296108 kubelet[2733]: I0702 00:14:08.296081 2733 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 2 00:14:08.297225 kubelet[2733]: I0702 00:14:08.296196 2733 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jul 2 00:14:08.297225 kubelet[2733]: I0702 00:14:08.296781 2733 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 2 00:14:08.297590 kubelet[2733]: I0702 00:14:08.297567 2733 server.go:462] "Adding debug handlers to kubelet server"
Jul 2 00:14:08.305709 kubelet[2733]: E0702 00:14:08.305672 2733 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Jul 2 00:14:08.305709 kubelet[2733]: E0702 00:14:08.305698 2733 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 2 00:14:08.307321 kubelet[2733]: I0702 00:14:08.307186 2733 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jul 2 00:14:08.307837 kubelet[2733]: I0702 00:14:08.307797 2733 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jul 2 00:14:08.309112 kubelet[2733]: I0702 00:14:08.309056 2733 reconciler_new.go:29] "Reconciler: start to sync state"
Jul 2 00:14:08.319485 kubelet[2733]: I0702 00:14:08.319380 2733 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 2 00:14:08.320871 kubelet[2733]: I0702 00:14:08.320856 2733 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 2 00:14:08.321249 kubelet[2733]: I0702 00:14:08.320936 2733 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 2 00:14:08.321249 kubelet[2733]: I0702 00:14:08.320961 2733 kubelet.go:2303] "Starting kubelet main sync loop"
Jul 2 00:14:08.321249 kubelet[2733]: E0702 00:14:08.321016 2733 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 2 00:14:08.397803 kubelet[2733]: I0702 00:14:08.397773 2733 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 2 00:14:08.397803 kubelet[2733]: I0702 00:14:08.397793 2733 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 2 00:14:08.397970 kubelet[2733]: I0702 00:14:08.397843 2733 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 00:14:08.397992 kubelet[2733]: I0702 00:14:08.397986 2733 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 2 00:14:08.398012 kubelet[2733]: I0702 00:14:08.398004 2733 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 2 00:14:08.398012 kubelet[2733]: I0702 00:14:08.398010 2733 policy_none.go:49] "None policy: Start"
Jul 2 00:14:08.398438 kubelet[2733]: I0702 00:14:08.398421 2733 memory_manager.go:169] "Starting memorymanager" policy="None"
Jul 2 00:14:08.398491 kubelet[2733]: I0702 00:14:08.398443 2733 state_mem.go:35] "Initializing new in-memory state store"
Jul 2 00:14:08.398592 kubelet[2733]: I0702 00:14:08.398582 2733 state_mem.go:75] "Updated machine memory state"
Jul 2 00:14:08.400106 kubelet[2733]: I0702 00:14:08.400081 2733 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 2 00:14:08.400416 kubelet[2733]: I0702 00:14:08.400308 2733 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 2 00:14:08.414054 kubelet[2733]: I0702 00:14:08.414019 2733 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Jul 2 00:14:08.421302 kubelet[2733]: I0702 00:14:08.421257 2733 topology_manager.go:215] "Topology Admit Handler" podUID="b123ac1c4037864d788343167b5046c5" podNamespace="kube-system" podName="kube-apiserver-localhost"
Jul 2 00:14:08.421452 kubelet[2733]: I0702 00:14:08.421367 2733 topology_manager.go:215] "Topology Admit Handler" podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Jul 2 00:14:08.421475 kubelet[2733]: I0702 00:14:08.421464 2733 topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost"
Jul 2 00:14:08.507955 kubelet[2733]: E0702 00:14:08.506339 2733 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jul 2 00:14:08.507955 kubelet[2733]: E0702 00:14:08.506514 2733 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Jul 2 00:14:08.516173 kubelet[2733]: I0702 00:14:08.516142 2733 kubelet_node_status.go:108] "Node was previously registered" node="localhost"
Jul 2 00:14:08.516270 kubelet[2733]: I0702 00:14:08.516216 2733 kubelet_node_status.go:73] "Successfully registered node" node="localhost"
Jul 2 00:14:08.527915 sudo[2767]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jul 2 00:14:08.528274 sudo[2767]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Jul 2 00:14:08.611056 kubelet[2733]: I0702 00:14:08.610997 2733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost"
Jul 2 00:14:08.611056 kubelet[2733]: I0702 00:14:08.611062 2733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b123ac1c4037864d788343167b5046c5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b123ac1c4037864d788343167b5046c5\") " pod="kube-system/kube-apiserver-localhost"
Jul 2 00:14:08.611234 kubelet[2733]: I0702 00:14:08.611118 2733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b123ac1c4037864d788343167b5046c5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b123ac1c4037864d788343167b5046c5\") " pod="kube-system/kube-apiserver-localhost"
Jul 2 00:14:08.611234 kubelet[2733]: I0702 00:14:08.611183 2733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 00:14:08.611234 kubelet[2733]: I0702 00:14:08.611205 2733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 00:14:08.611321 kubelet[2733]: I0702 00:14:08.611243 2733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 00:14:08.611321 kubelet[2733]: I0702 00:14:08.611266 2733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b123ac1c4037864d788343167b5046c5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b123ac1c4037864d788343167b5046c5\") " pod="kube-system/kube-apiserver-localhost"
Jul 2 00:14:08.611321 kubelet[2733]: I0702 00:14:08.611287 2733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 00:14:08.611321 kubelet[2733]: I0702 00:14:08.611310 2733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 00:14:08.790539 kubelet[2733]: E0702 00:14:08.790408 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:14:08.807933 kubelet[2733]: E0702 00:14:08.807893 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:14:08.808286 kubelet[2733]: E0702 00:14:08.808233 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:14:09.026503 sudo[2767]: pam_unix(sudo:session): session closed for user root
Jul 2 00:14:09.294065 kubelet[2733]: I0702 00:14:09.294012 2733 apiserver.go:52] "Watching apiserver"
Jul 2 00:14:09.309563 kubelet[2733]: I0702 00:14:09.309523 2733 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jul 2 00:14:09.335638 kubelet[2733]: E0702 00:14:09.335602 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:14:09.399683 kubelet[2733]: E0702 00:14:09.399638 2733 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Jul 2 00:14:09.400749 kubelet[2733]: E0702 00:14:09.400063 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:14:09.400749 kubelet[2733]: E0702 00:14:09.400088 2733 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jul 2 00:14:09.400749 kubelet[2733]: E0702 00:14:09.400686 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:14:09.406692 kubelet[2733]: I0702 00:14:09.406641 2733 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.406586695 podCreationTimestamp="2024-07-02 00:14:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:14:09.406490353 +0000 UTC m=+1.169848730" watchObservedRunningTime="2024-07-02 00:14:09.406586695 +0000 UTC m=+1.169945072"
Jul 2 00:14:09.406851 kubelet[2733]: I0702 00:14:09.406718 2733 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.4067050380000001 podCreationTimestamp="2024-07-02 00:14:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:14:09.400001927 +0000 UTC m=+1.163360304" watchObservedRunningTime="2024-07-02 00:14:09.406705038 +0000 UTC m=+1.170063415"
Jul 2 00:14:10.337672 kubelet[2733]: E0702 00:14:10.337645 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:14:10.338108 kubelet[2733]: E0702 00:14:10.337760 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:14:10.504732 sudo[1788]: pam_unix(sudo:session): session closed for user root
Jul 2 00:14:10.509428 sshd[1782]: pam_unix(sshd:session): session closed for user core
Jul 2 00:14:10.515025 systemd[1]: sshd@6-10.0.0.45:22-10.0.0.1:45460.service: Deactivated successfully.
Jul 2 00:14:10.517960 systemd[1]: session-7.scope: Deactivated successfully.
Jul 2 00:14:10.518797 systemd-logind[1559]: Session 7 logged out. Waiting for processes to exit.
Jul 2 00:14:10.519798 systemd-logind[1559]: Removed session 7.
Jul 2 00:14:11.777169 kubelet[2733]: E0702 00:14:11.777125 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:14:11.791726 kubelet[2733]: I0702 00:14:11.791672 2733 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=7.791613677 podCreationTimestamp="2024-07-02 00:14:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:14:09.489888163 +0000 UTC m=+1.253246540" watchObservedRunningTime="2024-07-02 00:14:11.791613677 +0000 UTC m=+3.554972054"
Jul 2 00:14:12.341116 kubelet[2733]: E0702 00:14:12.341079 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:14:13.337661 kubelet[2733]: E0702 00:14:13.337606 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:14:13.342332 kubelet[2733]: E0702 00:14:13.342272 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:14:16.753464 kubelet[2733]: E0702 00:14:16.753421 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:14:17.349676 kubelet[2733]: E0702 00:14:17.349642 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:14:19.277906 kubelet[2733]: I0702 00:14:19.277866 2733 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 2 00:14:19.278405 containerd[1581]: time="2024-07-02T00:14:19.278303893Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 2 00:14:19.278735 kubelet[2733]: I0702 00:14:19.278491 2733 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 2 00:14:19.852119 kubelet[2733]: I0702 00:14:19.851950 2733 topology_manager.go:215] "Topology Admit Handler" podUID="8aef32bc-e690-43c4-b76e-9e03c5399342" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-8xmhs"
Jul 2 00:14:19.875118 kubelet[2733]: I0702 00:14:19.875056 2733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8aef32bc-e690-43c4-b76e-9e03c5399342-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-8xmhs\" (UID: \"8aef32bc-e690-43c4-b76e-9e03c5399342\") " pod="kube-system/cilium-operator-6bc8ccdb58-8xmhs"
Jul 2 00:14:19.875118 kubelet[2733]: I0702 00:14:19.875124 2733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh5z9\" (UniqueName: \"kubernetes.io/projected/8aef32bc-e690-43c4-b76e-9e03c5399342-kube-api-access-sh5z9\") pod \"cilium-operator-6bc8ccdb58-8xmhs\" (UID: \"8aef32bc-e690-43c4-b76e-9e03c5399342\") " pod="kube-system/cilium-operator-6bc8ccdb58-8xmhs"
Jul 2 00:14:20.182700 kubelet[2733]: E0702 00:14:20.182622 2733 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jul 2 00:14:20.182700 kubelet[2733]: E0702 00:14:20.182707 2733 projected.go:198] Error preparing data for projected volume kube-api-access-sh5z9 for pod kube-system/cilium-operator-6bc8ccdb58-8xmhs: configmap "kube-root-ca.crt" not found
Jul 2 00:14:20.182932 kubelet[2733]: E0702 00:14:20.182804
2733 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8aef32bc-e690-43c4-b76e-9e03c5399342-kube-api-access-sh5z9 podName:8aef32bc-e690-43c4-b76e-9e03c5399342 nodeName:}" failed. No retries permitted until 2024-07-02 00:14:20.682768624 +0000 UTC m=+12.446127001 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-sh5z9" (UniqueName: "kubernetes.io/projected/8aef32bc-e690-43c4-b76e-9e03c5399342-kube-api-access-sh5z9") pod "cilium-operator-6bc8ccdb58-8xmhs" (UID: "8aef32bc-e690-43c4-b76e-9e03c5399342") : configmap "kube-root-ca.crt" not found Jul 2 00:14:20.232782 kubelet[2733]: I0702 00:14:20.232568 2733 topology_manager.go:215] "Topology Admit Handler" podUID="02a1d7a2-2add-4cb1-8435-e3636df4fa2b" podNamespace="kube-system" podName="kube-proxy-4vtst" Jul 2 00:14:20.232782 kubelet[2733]: I0702 00:14:20.232717 2733 topology_manager.go:215] "Topology Admit Handler" podUID="a2aa545f-861c-404e-9a40-8ebd336d2136" podNamespace="kube-system" podName="cilium-pntv8" Jul 2 00:14:20.276832 kubelet[2733]: I0702 00:14:20.276786 2733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a2aa545f-861c-404e-9a40-8ebd336d2136-cilium-config-path\") pod \"cilium-pntv8\" (UID: \"a2aa545f-861c-404e-9a40-8ebd336d2136\") " pod="kube-system/cilium-pntv8" Jul 2 00:14:20.276960 kubelet[2733]: I0702 00:14:20.276846 2733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/02a1d7a2-2add-4cb1-8435-e3636df4fa2b-xtables-lock\") pod \"kube-proxy-4vtst\" (UID: \"02a1d7a2-2add-4cb1-8435-e3636df4fa2b\") " pod="kube-system/kube-proxy-4vtst" Jul 2 00:14:20.276960 kubelet[2733]: I0702 00:14:20.276871 2733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/a2aa545f-861c-404e-9a40-8ebd336d2136-host-proc-sys-kernel\") pod \"cilium-pntv8\" (UID: \"a2aa545f-861c-404e-9a40-8ebd336d2136\") " pod="kube-system/cilium-pntv8" Jul 2 00:14:20.276960 kubelet[2733]: I0702 00:14:20.276889 2733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/02a1d7a2-2add-4cb1-8435-e3636df4fa2b-kube-proxy\") pod \"kube-proxy-4vtst\" (UID: \"02a1d7a2-2add-4cb1-8435-e3636df4fa2b\") " pod="kube-system/kube-proxy-4vtst" Jul 2 00:14:20.276960 kubelet[2733]: I0702 00:14:20.276907 2733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2aa545f-861c-404e-9a40-8ebd336d2136-lib-modules\") pod \"cilium-pntv8\" (UID: \"a2aa545f-861c-404e-9a40-8ebd336d2136\") " pod="kube-system/cilium-pntv8" Jul 2 00:14:20.276960 kubelet[2733]: I0702 00:14:20.276929 2733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/02a1d7a2-2add-4cb1-8435-e3636df4fa2b-lib-modules\") pod \"kube-proxy-4vtst\" (UID: \"02a1d7a2-2add-4cb1-8435-e3636df4fa2b\") " pod="kube-system/kube-proxy-4vtst" Jul 2 00:14:20.276960 kubelet[2733]: I0702 00:14:20.276952 2733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a2aa545f-861c-404e-9a40-8ebd336d2136-etc-cni-netd\") pod \"cilium-pntv8\" (UID: \"a2aa545f-861c-404e-9a40-8ebd336d2136\") " pod="kube-system/cilium-pntv8" Jul 2 00:14:20.277102 kubelet[2733]: I0702 00:14:20.276977 2733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a2aa545f-861c-404e-9a40-8ebd336d2136-clustermesh-secrets\") pod \"cilium-pntv8\" (UID: 
\"a2aa545f-861c-404e-9a40-8ebd336d2136\") " pod="kube-system/cilium-pntv8" Jul 2 00:14:20.277102 kubelet[2733]: I0702 00:14:20.276996 2733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czcjp\" (UniqueName: \"kubernetes.io/projected/a2aa545f-861c-404e-9a40-8ebd336d2136-kube-api-access-czcjp\") pod \"cilium-pntv8\" (UID: \"a2aa545f-861c-404e-9a40-8ebd336d2136\") " pod="kube-system/cilium-pntv8" Jul 2 00:14:20.277102 kubelet[2733]: I0702 00:14:20.277023 2733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hps8l\" (UniqueName: \"kubernetes.io/projected/02a1d7a2-2add-4cb1-8435-e3636df4fa2b-kube-api-access-hps8l\") pod \"kube-proxy-4vtst\" (UID: \"02a1d7a2-2add-4cb1-8435-e3636df4fa2b\") " pod="kube-system/kube-proxy-4vtst" Jul 2 00:14:20.277102 kubelet[2733]: I0702 00:14:20.277049 2733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a2aa545f-861c-404e-9a40-8ebd336d2136-bpf-maps\") pod \"cilium-pntv8\" (UID: \"a2aa545f-861c-404e-9a40-8ebd336d2136\") " pod="kube-system/cilium-pntv8" Jul 2 00:14:20.277102 kubelet[2733]: I0702 00:14:20.277070 2733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2aa545f-861c-404e-9a40-8ebd336d2136-xtables-lock\") pod \"cilium-pntv8\" (UID: \"a2aa545f-861c-404e-9a40-8ebd336d2136\") " pod="kube-system/cilium-pntv8" Jul 2 00:14:20.277229 kubelet[2733]: I0702 00:14:20.277093 2733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a2aa545f-861c-404e-9a40-8ebd336d2136-host-proc-sys-net\") pod \"cilium-pntv8\" (UID: \"a2aa545f-861c-404e-9a40-8ebd336d2136\") " pod="kube-system/cilium-pntv8" Jul 2 00:14:20.277229 
kubelet[2733]: I0702 00:14:20.277116 2733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a2aa545f-861c-404e-9a40-8ebd336d2136-hubble-tls\") pod \"cilium-pntv8\" (UID: \"a2aa545f-861c-404e-9a40-8ebd336d2136\") " pod="kube-system/cilium-pntv8" Jul 2 00:14:20.277229 kubelet[2733]: I0702 00:14:20.277157 2733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a2aa545f-861c-404e-9a40-8ebd336d2136-cni-path\") pod \"cilium-pntv8\" (UID: \"a2aa545f-861c-404e-9a40-8ebd336d2136\") " pod="kube-system/cilium-pntv8" Jul 2 00:14:20.277229 kubelet[2733]: I0702 00:14:20.277193 2733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a2aa545f-861c-404e-9a40-8ebd336d2136-cilium-run\") pod \"cilium-pntv8\" (UID: \"a2aa545f-861c-404e-9a40-8ebd336d2136\") " pod="kube-system/cilium-pntv8" Jul 2 00:14:20.277229 kubelet[2733]: I0702 00:14:20.277218 2733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a2aa545f-861c-404e-9a40-8ebd336d2136-hostproc\") pod \"cilium-pntv8\" (UID: \"a2aa545f-861c-404e-9a40-8ebd336d2136\") " pod="kube-system/cilium-pntv8" Jul 2 00:14:20.277340 kubelet[2733]: I0702 00:14:20.277240 2733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a2aa545f-861c-404e-9a40-8ebd336d2136-cilium-cgroup\") pod \"cilium-pntv8\" (UID: \"a2aa545f-861c-404e-9a40-8ebd336d2136\") " pod="kube-system/cilium-pntv8" Jul 2 00:14:20.839559 kubelet[2733]: E0702 00:14:20.839531 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:14:20.840123 containerd[1581]: time="2024-07-02T00:14:20.839952134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4vtst,Uid:02a1d7a2-2add-4cb1-8435-e3636df4fa2b,Namespace:kube-system,Attempt:0,}" Jul 2 00:14:20.842566 kubelet[2733]: E0702 00:14:20.842379 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:14:20.842976 containerd[1581]: time="2024-07-02T00:14:20.842932485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pntv8,Uid:a2aa545f-861c-404e-9a40-8ebd336d2136,Namespace:kube-system,Attempt:0,}" Jul 2 00:14:21.049364 containerd[1581]: time="2024-07-02T00:14:21.049256858Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:14:21.049364 containerd[1581]: time="2024-07-02T00:14:21.049317783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:14:21.049364 containerd[1581]: time="2024-07-02T00:14:21.049332781Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:14:21.049364 containerd[1581]: time="2024-07-02T00:14:21.049344212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:14:21.059188 kubelet[2733]: E0702 00:14:21.059131 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:14:21.060365 containerd[1581]: time="2024-07-02T00:14:21.059796955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-8xmhs,Uid:8aef32bc-e690-43c4-b76e-9e03c5399342,Namespace:kube-system,Attempt:0,}" Jul 2 00:14:21.085961 containerd[1581]: time="2024-07-02T00:14:21.085281347Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:14:21.085961 containerd[1581]: time="2024-07-02T00:14:21.085328235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:14:21.085961 containerd[1581]: time="2024-07-02T00:14:21.085343233Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:14:21.085961 containerd[1581]: time="2024-07-02T00:14:21.085353242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:14:21.095346 containerd[1581]: time="2024-07-02T00:14:21.095257243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4vtst,Uid:02a1d7a2-2add-4cb1-8435-e3636df4fa2b,Namespace:kube-system,Attempt:0,} returns sandbox id \"830f57a98784523e7620990567720b940e8d9584c6912e17ce92180249741c15\"" Jul 2 00:14:21.096328 kubelet[2733]: E0702 00:14:21.096296 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:14:21.099964 containerd[1581]: time="2024-07-02T00:14:21.099925626Z" level=info msg="CreateContainer within sandbox \"830f57a98784523e7620990567720b940e8d9584c6912e17ce92180249741c15\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 00:14:21.132563 containerd[1581]: time="2024-07-02T00:14:21.132501944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pntv8,Uid:a2aa545f-861c-404e-9a40-8ebd336d2136,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ce4a9849cd398174fd20f8cff6ebeb69acf2db4111f6b595c0639fbb2cd754d\"" Jul 2 00:14:21.133424 kubelet[2733]: E0702 00:14:21.133370 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:14:21.137974 containerd[1581]: time="2024-07-02T00:14:21.137920267Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 2 00:14:21.323676 containerd[1581]: time="2024-07-02T00:14:21.323557029Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:14:21.323676 containerd[1581]: time="2024-07-02T00:14:21.323645185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:14:21.323856 containerd[1581]: time="2024-07-02T00:14:21.323667046Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:14:21.323856 containerd[1581]: time="2024-07-02T00:14:21.323680361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:14:21.382969 containerd[1581]: time="2024-07-02T00:14:21.382766456Z" level=info msg="CreateContainer within sandbox \"830f57a98784523e7620990567720b940e8d9584c6912e17ce92180249741c15\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3e9561379de7764c2c1312d7e9271a8a6ff0271595ea668f6fa78d1afef01069\"" Jul 2 00:14:21.384927 containerd[1581]: time="2024-07-02T00:14:21.384367495Z" level=info msg="StartContainer for \"3e9561379de7764c2c1312d7e9271a8a6ff0271595ea668f6fa78d1afef01069\"" Jul 2 00:14:21.388159 containerd[1581]: time="2024-07-02T00:14:21.386752467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-8xmhs,Uid:8aef32bc-e690-43c4-b76e-9e03c5399342,Namespace:kube-system,Attempt:0,} returns sandbox id \"378f478afb548e7ebb1e587b1010ce3cfa4a5b0442a7572575ef11a820a0adee\"" Jul 2 00:14:21.389092 kubelet[2733]: E0702 00:14:21.389059 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:14:21.541978 containerd[1581]: time="2024-07-02T00:14:21.541941401Z" level=info msg="StartContainer for \"3e9561379de7764c2c1312d7e9271a8a6ff0271595ea668f6fa78d1afef01069\" returns successfully" Jul 2 00:14:22.365518 kubelet[2733]: E0702 00:14:22.365478 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Jul 2 00:14:22.653569 kubelet[2733]: I0702 00:14:22.653299 2733 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-4vtst" podStartSLOduration=2.6532519199999998 podCreationTimestamp="2024-07-02 00:14:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:14:22.65268172 +0000 UTC m=+14.416040097" watchObservedRunningTime="2024-07-02 00:14:22.65325192 +0000 UTC m=+14.416610297" Jul 2 00:14:23.368685 kubelet[2733]: E0702 00:14:23.368658 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:14:26.290580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2475647755.mount: Deactivated successfully. Jul 2 00:14:35.489341 containerd[1581]: time="2024-07-02T00:14:35.489186854Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:14:35.498063 containerd[1581]: time="2024-07-02T00:14:35.497943372Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735351" Jul 2 00:14:35.557564 containerd[1581]: time="2024-07-02T00:14:35.557491986Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:14:35.559869 containerd[1581]: time="2024-07-02T00:14:35.559781244Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest 
\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 14.42176116s" Jul 2 00:14:35.559869 containerd[1581]: time="2024-07-02T00:14:35.559846877Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 2 00:14:35.560749 containerd[1581]: time="2024-07-02T00:14:35.560416667Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 2 00:14:35.561670 containerd[1581]: time="2024-07-02T00:14:35.561642297Z" level=info msg="CreateContainer within sandbox \"8ce4a9849cd398174fd20f8cff6ebeb69acf2db4111f6b595c0639fbb2cd754d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 00:14:36.200060 systemd[1]: Started sshd@7-10.0.0.45:22-10.0.0.1:56992.service - OpenSSH per-connection server daemon (10.0.0.1:56992). Jul 2 00:14:36.276997 sshd[3121]: Accepted publickey for core from 10.0.0.1 port 56992 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:14:36.278559 sshd[3121]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:14:36.286697 systemd-logind[1559]: New session 8 of user core. Jul 2 00:14:36.299124 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 2 00:14:36.700280 sshd[3121]: pam_unix(sshd:session): session closed for user core Jul 2 00:14:36.705280 systemd[1]: sshd@7-10.0.0.45:22-10.0.0.1:56992.service: Deactivated successfully. Jul 2 00:14:36.708149 systemd[1]: session-8.scope: Deactivated successfully. Jul 2 00:14:36.708488 systemd-logind[1559]: Session 8 logged out. Waiting for processes to exit. Jul 2 00:14:36.709652 systemd-logind[1559]: Removed session 8. 
Jul 2 00:14:37.748728 containerd[1581]: time="2024-07-02T00:14:37.748654974Z" level=info msg="CreateContainer within sandbox \"8ce4a9849cd398174fd20f8cff6ebeb69acf2db4111f6b595c0639fbb2cd754d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b378a4fa9322f12b629e1160aea6a1ad2001baa5bb118dfeeb791a7d8f9f3c36\"" Jul 2 00:14:37.749281 containerd[1581]: time="2024-07-02T00:14:37.749227870Z" level=info msg="StartContainer for \"b378a4fa9322f12b629e1160aea6a1ad2001baa5bb118dfeeb791a7d8f9f3c36\"" Jul 2 00:14:38.010172 containerd[1581]: time="2024-07-02T00:14:38.009649892Z" level=info msg="StartContainer for \"b378a4fa9322f12b629e1160aea6a1ad2001baa5bb118dfeeb791a7d8f9f3c36\" returns successfully" Jul 2 00:14:38.247137 containerd[1581]: time="2024-07-02T00:14:38.247060270Z" level=info msg="shim disconnected" id=b378a4fa9322f12b629e1160aea6a1ad2001baa5bb118dfeeb791a7d8f9f3c36 namespace=k8s.io Jul 2 00:14:38.247137 containerd[1581]: time="2024-07-02T00:14:38.247126975Z" level=warning msg="cleaning up after shim disconnected" id=b378a4fa9322f12b629e1160aea6a1ad2001baa5bb118dfeeb791a7d8f9f3c36 namespace=k8s.io Jul 2 00:14:38.247137 containerd[1581]: time="2024-07-02T00:14:38.247139629Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:14:38.337667 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b378a4fa9322f12b629e1160aea6a1ad2001baa5bb118dfeeb791a7d8f9f3c36-rootfs.mount: Deactivated successfully. 
Jul 2 00:14:38.511166 kubelet[2733]: E0702 00:14:38.511141 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:14:38.513000 containerd[1581]: time="2024-07-02T00:14:38.512968232Z" level=info msg="CreateContainer within sandbox \"8ce4a9849cd398174fd20f8cff6ebeb69acf2db4111f6b595c0639fbb2cd754d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 00:14:39.406366 containerd[1581]: time="2024-07-02T00:14:39.406285461Z" level=info msg="CreateContainer within sandbox \"8ce4a9849cd398174fd20f8cff6ebeb69acf2db4111f6b595c0639fbb2cd754d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f6a516e3574670552d2e4f2c0e98a7d85e031bb853a14363149f1e29f8d03f2f\"" Jul 2 00:14:39.406953 containerd[1581]: time="2024-07-02T00:14:39.406903100Z" level=info msg="StartContainer for \"f6a516e3574670552d2e4f2c0e98a7d85e031bb853a14363149f1e29f8d03f2f\"" Jul 2 00:14:39.471059 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 00:14:39.471902 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 2 00:14:39.471985 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 2 00:14:39.483127 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 00:14:39.569325 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 00:14:39.581380 containerd[1581]: time="2024-07-02T00:14:39.581319049Z" level=info msg="StartContainer for \"f6a516e3574670552d2e4f2c0e98a7d85e031bb853a14363149f1e29f8d03f2f\" returns successfully" Jul 2 00:14:39.602713 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6a516e3574670552d2e4f2c0e98a7d85e031bb853a14363149f1e29f8d03f2f-rootfs.mount: Deactivated successfully. 
Jul 2 00:14:39.778013 containerd[1581]: time="2024-07-02T00:14:39.777942713Z" level=info msg="shim disconnected" id=f6a516e3574670552d2e4f2c0e98a7d85e031bb853a14363149f1e29f8d03f2f namespace=k8s.io Jul 2 00:14:39.778013 containerd[1581]: time="2024-07-02T00:14:39.777995331Z" level=warning msg="cleaning up after shim disconnected" id=f6a516e3574670552d2e4f2c0e98a7d85e031bb853a14363149f1e29f8d03f2f namespace=k8s.io Jul 2 00:14:39.778013 containerd[1581]: time="2024-07-02T00:14:39.778004368Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:14:40.587250 kubelet[2733]: E0702 00:14:40.587196 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:14:40.590050 containerd[1581]: time="2024-07-02T00:14:40.589994734Z" level=info msg="CreateContainer within sandbox \"8ce4a9849cd398174fd20f8cff6ebeb69acf2db4111f6b595c0639fbb2cd754d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 00:14:40.703548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount635868806.mount: Deactivated successfully. Jul 2 00:14:40.859304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2942554652.mount: Deactivated successfully. 
Jul 2 00:14:40.977472 containerd[1581]: time="2024-07-02T00:14:40.977406994Z" level=info msg="CreateContainer within sandbox \"8ce4a9849cd398174fd20f8cff6ebeb69acf2db4111f6b595c0639fbb2cd754d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"403b25a27bf97bf2e86471c7cb502f4dfb080cd9b31a1930e9ce8b4af06d170b\"" Jul 2 00:14:40.978216 containerd[1581]: time="2024-07-02T00:14:40.978147985Z" level=info msg="StartContainer for \"403b25a27bf97bf2e86471c7cb502f4dfb080cd9b31a1930e9ce8b4af06d170b\"" Jul 2 00:14:41.132253 containerd[1581]: time="2024-07-02T00:14:41.132091018Z" level=info msg="StartContainer for \"403b25a27bf97bf2e86471c7cb502f4dfb080cd9b31a1930e9ce8b4af06d170b\" returns successfully" Jul 2 00:14:41.268134 containerd[1581]: time="2024-07-02T00:14:41.268071966Z" level=info msg="shim disconnected" id=403b25a27bf97bf2e86471c7cb502f4dfb080cd9b31a1930e9ce8b4af06d170b namespace=k8s.io Jul 2 00:14:41.268134 containerd[1581]: time="2024-07-02T00:14:41.268127721Z" level=warning msg="cleaning up after shim disconnected" id=403b25a27bf97bf2e86471c7cb502f4dfb080cd9b31a1930e9ce8b4af06d170b namespace=k8s.io Jul 2 00:14:41.268134 containerd[1581]: time="2024-07-02T00:14:41.268136577Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:14:41.591260 kubelet[2733]: E0702 00:14:41.591234 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:14:41.601059 containerd[1581]: time="2024-07-02T00:14:41.601012705Z" level=info msg="CreateContainer within sandbox \"8ce4a9849cd398174fd20f8cff6ebeb69acf2db4111f6b595c0639fbb2cd754d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 00:14:41.699847 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-403b25a27bf97bf2e86471c7cb502f4dfb080cd9b31a1930e9ce8b4af06d170b-rootfs.mount: Deactivated successfully. 
Jul 2 00:14:41.709048 systemd[1]: Started sshd@8-10.0.0.45:22-10.0.0.1:52428.service - OpenSSH per-connection server daemon (10.0.0.1:52428). Jul 2 00:14:41.740762 sshd[3333]: Accepted publickey for core from 10.0.0.1 port 52428 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:14:41.742163 sshd[3333]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:14:41.746017 systemd-logind[1559]: New session 9 of user core. Jul 2 00:14:41.760039 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 2 00:14:41.873334 sshd[3333]: pam_unix(sshd:session): session closed for user core Jul 2 00:14:41.877262 systemd[1]: sshd@8-10.0.0.45:22-10.0.0.1:52428.service: Deactivated successfully. Jul 2 00:14:41.879610 systemd-logind[1559]: Session 9 logged out. Waiting for processes to exit. Jul 2 00:14:41.879756 systemd[1]: session-9.scope: Deactivated successfully. Jul 2 00:14:41.880655 systemd-logind[1559]: Removed session 9. Jul 2 00:14:41.933856 containerd[1581]: time="2024-07-02T00:14:41.933797110Z" level=info msg="CreateContainer within sandbox \"8ce4a9849cd398174fd20f8cff6ebeb69acf2db4111f6b595c0639fbb2cd754d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2145d46b7b1c96e92bbcccf7e021a00b17c86ed7992a22fbeec3bab99232f07e\"" Jul 2 00:14:41.934392 containerd[1581]: time="2024-07-02T00:14:41.934367120Z" level=info msg="StartContainer for \"2145d46b7b1c96e92bbcccf7e021a00b17c86ed7992a22fbeec3bab99232f07e\"" Jul 2 00:14:42.015838 containerd[1581]: time="2024-07-02T00:14:42.015770899Z" level=info msg="StartContainer for \"2145d46b7b1c96e92bbcccf7e021a00b17c86ed7992a22fbeec3bab99232f07e\" returns successfully" Jul 2 00:14:42.032607 containerd[1581]: time="2024-07-02T00:14:42.032551444Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
Jul 2 00:14:42.434188 containerd[1581]: time="2024-07-02T00:14:42.434096875Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907225"
Jul 2 00:14:42.508481 containerd[1581]: time="2024-07-02T00:14:42.508392416Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:14:42.510465 containerd[1581]: time="2024-07-02T00:14:42.510422649Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 6.949966597s"
Jul 2 00:14:42.510465 containerd[1581]: time="2024-07-02T00:14:42.510464577Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jul 2 00:14:42.512767 containerd[1581]: time="2024-07-02T00:14:42.512720464Z" level=info msg="CreateContainer within sandbox \"378f478afb548e7ebb1e587b1010ce3cfa4a5b0442a7572575ef11a820a0adee\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jul 2 00:14:42.555125 containerd[1581]: time="2024-07-02T00:14:42.555063256Z" level=info msg="shim disconnected" id=2145d46b7b1c96e92bbcccf7e021a00b17c86ed7992a22fbeec3bab99232f07e namespace=k8s.io
Jul 2 00:14:42.555125 containerd[1581]: time="2024-07-02T00:14:42.555123569Z" level=warning msg="cleaning up after shim disconnected" id=2145d46b7b1c96e92bbcccf7e021a00b17c86ed7992a22fbeec3bab99232f07e namespace=k8s.io
Jul 2 00:14:42.555125 containerd[1581]: time="2024-07-02T00:14:42.555136373Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:14:42.650033 kubelet[2733]: E0702 00:14:42.650004 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:14:42.651625 containerd[1581]: time="2024-07-02T00:14:42.651593309Z" level=info msg="CreateContainer within sandbox \"8ce4a9849cd398174fd20f8cff6ebeb69acf2db4111f6b595c0639fbb2cd754d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 2 00:14:42.701831 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2145d46b7b1c96e92bbcccf7e021a00b17c86ed7992a22fbeec3bab99232f07e-rootfs.mount: Deactivated successfully.
Jul 2 00:14:42.946645 containerd[1581]: time="2024-07-02T00:14:42.946581152Z" level=info msg="CreateContainer within sandbox \"378f478afb548e7ebb1e587b1010ce3cfa4a5b0442a7572575ef11a820a0adee\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6b1150098c178240d14f80a2216cfc0c1ffad09d84e0255f4692c7dd5376c308\""
Jul 2 00:14:42.947328 containerd[1581]: time="2024-07-02T00:14:42.947284376Z" level=info msg="StartContainer for \"6b1150098c178240d14f80a2216cfc0c1ffad09d84e0255f4692c7dd5376c308\""
Jul 2 00:14:43.057764 containerd[1581]: time="2024-07-02T00:14:43.055325892Z" level=info msg="CreateContainer within sandbox \"8ce4a9849cd398174fd20f8cff6ebeb69acf2db4111f6b595c0639fbb2cd754d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"136201785d47a1fc31dda28cc3961aadc3a9f7965869a859db677aeadbf46905\""
Jul 2 00:14:43.057764 containerd[1581]: time="2024-07-02T00:14:43.055476012Z" level=info msg="StartContainer for \"6b1150098c178240d14f80a2216cfc0c1ffad09d84e0255f4692c7dd5376c308\" returns successfully"
Jul 2 00:14:43.057764 containerd[1581]: time="2024-07-02T00:14:43.056090366Z" level=info msg="StartContainer for \"136201785d47a1fc31dda28cc3961aadc3a9f7965869a859db677aeadbf46905\""
Jul 2 00:14:43.193076 containerd[1581]: time="2024-07-02T00:14:43.192521792Z" level=info msg="StartContainer for \"136201785d47a1fc31dda28cc3961aadc3a9f7965869a859db677aeadbf46905\" returns successfully"
Jul 2 00:14:43.334314 kubelet[2733]: I0702 00:14:43.334202 2733 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Jul 2 00:14:43.504626 kubelet[2733]: I0702 00:14:43.504572 2733 topology_manager.go:215] "Topology Admit Handler" podUID="004e6944-5183-47cd-91ea-9554f744ae1f" podNamespace="kube-system" podName="coredns-5dd5756b68-vkk68"
Jul 2 00:14:43.511580 kubelet[2733]: I0702 00:14:43.511539 2733 topology_manager.go:215] "Topology Admit Handler" podUID="618943de-f817-422d-9213-ae882afa3850" podNamespace="kube-system" podName="coredns-5dd5756b68-24mrs"
Jul 2 00:14:43.662191 kubelet[2733]: E0702 00:14:43.661835 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:14:43.666859 kubelet[2733]: E0702 00:14:43.666510 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:14:43.681181 kubelet[2733]: I0702 00:14:43.681030 2733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tndzj\" (UniqueName: \"kubernetes.io/projected/004e6944-5183-47cd-91ea-9554f744ae1f-kube-api-access-tndzj\") pod \"coredns-5dd5756b68-vkk68\" (UID: \"004e6944-5183-47cd-91ea-9554f744ae1f\") " pod="kube-system/coredns-5dd5756b68-vkk68"
Jul 2 00:14:43.681978 kubelet[2733]: I0702 00:14:43.681408 2733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/618943de-f817-422d-9213-ae882afa3850-config-volume\") pod \"coredns-5dd5756b68-24mrs\" (UID: \"618943de-f817-422d-9213-ae882afa3850\") " pod="kube-system/coredns-5dd5756b68-24mrs"
Jul 2 00:14:43.681978 kubelet[2733]: I0702 00:14:43.681448 2733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvth9\" (UniqueName: \"kubernetes.io/projected/618943de-f817-422d-9213-ae882afa3850-kube-api-access-pvth9\") pod \"coredns-5dd5756b68-24mrs\" (UID: \"618943de-f817-422d-9213-ae882afa3850\") " pod="kube-system/coredns-5dd5756b68-24mrs"
Jul 2 00:14:43.681978 kubelet[2733]: I0702 00:14:43.681486 2733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/004e6944-5183-47cd-91ea-9554f744ae1f-config-volume\") pod \"coredns-5dd5756b68-vkk68\" (UID: \"004e6944-5183-47cd-91ea-9554f744ae1f\") " pod="kube-system/coredns-5dd5756b68-vkk68"
Jul 2 00:14:43.711640 kubelet[2733]: I0702 00:14:43.711351 2733 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-8xmhs" podStartSLOduration=3.584814516 podCreationTimestamp="2024-07-02 00:14:19 +0000 UTC" firstStartedPulling="2024-07-02 00:14:21.39110707 +0000 UTC m=+13.154465447" lastFinishedPulling="2024-07-02 00:14:42.510718325 +0000 UTC m=+34.274076702" observedRunningTime="2024-07-02 00:14:43.704110643 +0000 UTC m=+35.467469030" watchObservedRunningTime="2024-07-02 00:14:43.704425771 +0000 UTC m=+35.467784138"
Jul 2 00:14:44.124891 kubelet[2733]: E0702 00:14:44.124762 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:14:44.126228 kubelet[2733]: E0702 00:14:44.126203 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:14:44.127281 containerd[1581]: time="2024-07-02T00:14:44.127238950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-24mrs,Uid:618943de-f817-422d-9213-ae882afa3850,Namespace:kube-system,Attempt:0,}"
Jul 2 00:14:44.131048 containerd[1581]: time="2024-07-02T00:14:44.131002805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-vkk68,Uid:004e6944-5183-47cd-91ea-9554f744ae1f,Namespace:kube-system,Attempt:0,}"
Jul 2 00:14:44.669027 kubelet[2733]: E0702 00:14:44.668993 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:14:44.669529 kubelet[2733]: E0702 00:14:44.669105 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:14:45.553760 systemd-networkd[1245]: cilium_host: Link UP
Jul 2 00:14:45.554566 systemd-networkd[1245]: cilium_net: Link UP
Jul 2 00:14:45.554874 systemd-networkd[1245]: cilium_net: Gained carrier
Jul 2 00:14:45.555092 systemd-networkd[1245]: cilium_host: Gained carrier
Jul 2 00:14:45.555262 systemd-networkd[1245]: cilium_net: Gained IPv6LL
Jul 2 00:14:45.555958 systemd-networkd[1245]: cilium_host: Gained IPv6LL
Jul 2 00:14:45.659628 systemd-networkd[1245]: cilium_vxlan: Link UP
Jul 2 00:14:45.659638 systemd-networkd[1245]: cilium_vxlan: Gained carrier
Jul 2 00:14:45.671009 kubelet[2733]: E0702 00:14:45.670984 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:14:45.873850 kernel: NET: Registered PF_ALG protocol family
Jul 2 00:14:46.542603 systemd-networkd[1245]: lxc_health: Link UP
Jul 2 00:14:46.554931 systemd-networkd[1245]: lxc_health: Gained carrier
Jul 2 00:14:46.731985 systemd-networkd[1245]: cilium_vxlan: Gained IPv6LL
Jul 2 00:14:46.844896 kubelet[2733]: E0702 00:14:46.844716 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:14:46.862230 kubelet[2733]: I0702 00:14:46.861576 2733 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-pntv8" podStartSLOduration=12.435208006 podCreationTimestamp="2024-07-02 00:14:20 +0000 UTC" firstStartedPulling="2024-07-02 00:14:21.133898609 +0000 UTC m=+12.897256986" lastFinishedPulling="2024-07-02 00:14:35.560219516 +0000 UTC m=+27.323577893" observedRunningTime="2024-07-02 00:14:43.87506866 +0000 UTC m=+35.638427047" watchObservedRunningTime="2024-07-02 00:14:46.861528913 +0000 UTC m=+38.624887290"
Jul 2 00:14:46.882079 systemd[1]: Started sshd@9-10.0.0.45:22-10.0.0.1:52546.service - OpenSSH per-connection server daemon (10.0.0.1:52546).
Jul 2 00:14:46.935604 sshd[3936]: Accepted publickey for core from 10.0.0.1 port 52546 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:14:46.937310 sshd[3936]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:14:46.942514 systemd-logind[1559]: New session 10 of user core.
Jul 2 00:14:46.950152 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 2 00:14:47.080122 sshd[3936]: pam_unix(sshd:session): session closed for user core
Jul 2 00:14:47.083884 systemd-networkd[1245]: lxc279757603cb9: Link UP
Jul 2 00:14:47.098008 systemd-networkd[1245]: lxc6cb5177f178f: Link UP
Jul 2 00:14:47.099515 systemd[1]: sshd@9-10.0.0.45:22-10.0.0.1:52546.service: Deactivated successfully.
Jul 2 00:14:47.103151 systemd-logind[1559]: Session 10 logged out. Waiting for processes to exit.
Jul 2 00:14:47.103641 systemd[1]: session-10.scope: Deactivated successfully.
Jul 2 00:14:47.104967 systemd-logind[1559]: Removed session 10.
Jul 2 00:14:47.107978 kernel: eth0: renamed from tmp9fb60
Jul 2 00:14:47.113856 kernel: eth0: renamed from tmp044bf
Jul 2 00:14:47.121574 systemd-networkd[1245]: lxc6cb5177f178f: Gained carrier
Jul 2 00:14:47.122346 systemd-networkd[1245]: lxc279757603cb9: Gained carrier
Jul 2 00:14:47.674222 kubelet[2733]: E0702 00:14:47.674187 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:14:48.270993 systemd-networkd[1245]: lxc6cb5177f178f: Gained IPv6LL
Jul 2 00:14:48.331980 systemd-networkd[1245]: lxc_health: Gained IPv6LL
Jul 2 00:14:48.396094 systemd-networkd[1245]: lxc279757603cb9: Gained IPv6LL
Jul 2 00:14:48.676412 kubelet[2733]: E0702 00:14:48.676368 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:14:50.919780 containerd[1581]: time="2024-07-02T00:14:50.919426059Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:14:50.919780 containerd[1581]: time="2024-07-02T00:14:50.919588350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:14:50.919780 containerd[1581]: time="2024-07-02T00:14:50.919615382Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:14:50.919780 containerd[1581]: time="2024-07-02T00:14:50.919629299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:14:50.920364 containerd[1581]: time="2024-07-02T00:14:50.919929034Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:14:50.920364 containerd[1581]: time="2024-07-02T00:14:50.920098108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:14:50.920364 containerd[1581]: time="2024-07-02T00:14:50.920151902Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:14:50.920364 containerd[1581]: time="2024-07-02T00:14:50.920176700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:14:50.949868 systemd-resolved[1473]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 2 00:14:50.955071 systemd-resolved[1473]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 2 00:14:50.977672 containerd[1581]: time="2024-07-02T00:14:50.977627813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-24mrs,Uid:618943de-f817-422d-9213-ae882afa3850,Namespace:kube-system,Attempt:0,} returns sandbox id \"044bf5335713eedc95703d76bd4a68902fb800b450848af5b91db4223bffd2e3\""
Jul 2 00:14:50.978261 kubelet[2733]: E0702 00:14:50.978232 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:14:50.982406 containerd[1581]: time="2024-07-02T00:14:50.982363414Z" level=info msg="CreateContainer within sandbox \"044bf5335713eedc95703d76bd4a68902fb800b450848af5b91db4223bffd2e3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 2 00:14:50.988731 containerd[1581]: time="2024-07-02T00:14:50.988669960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-vkk68,Uid:004e6944-5183-47cd-91ea-9554f744ae1f,Namespace:kube-system,Attempt:0,} returns sandbox id \"9fb60481f20cafb0eb1f3e6219ff6b8c8db0a3dfeeb80068f957ab7f4e56be02\""
Jul 2 00:14:50.990073 kubelet[2733]: E0702 00:14:50.990037 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:14:50.992048 containerd[1581]: time="2024-07-02T00:14:50.992007407Z" level=info msg="CreateContainer within sandbox \"9fb60481f20cafb0eb1f3e6219ff6b8c8db0a3dfeeb80068f957ab7f4e56be02\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 2 00:14:51.562124 containerd[1581]: time="2024-07-02T00:14:51.562026816Z" level=info msg="CreateContainer within sandbox \"9fb60481f20cafb0eb1f3e6219ff6b8c8db0a3dfeeb80068f957ab7f4e56be02\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"80cdeec78e5669968888e3629e09d65e29bc3efa9f6076be6ac6dbdb36f0dc40\""
Jul 2 00:14:51.562765 containerd[1581]: time="2024-07-02T00:14:51.562571712Z" level=info msg="StartContainer for \"80cdeec78e5669968888e3629e09d65e29bc3efa9f6076be6ac6dbdb36f0dc40\""
Jul 2 00:14:51.581965 containerd[1581]: time="2024-07-02T00:14:51.581878721Z" level=info msg="CreateContainer within sandbox \"044bf5335713eedc95703d76bd4a68902fb800b450848af5b91db4223bffd2e3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7f70b1fc90e49730107c186b5f6f38246b29cc9f57c1e8b80c4ec07fc8231ade\""
Jul 2 00:14:51.582682 containerd[1581]: time="2024-07-02T00:14:51.582650372Z" level=info msg="StartContainer for \"7f70b1fc90e49730107c186b5f6f38246b29cc9f57c1e8b80c4ec07fc8231ade\""
Jul 2 00:14:51.732272 containerd[1581]: time="2024-07-02T00:14:51.732193822Z" level=info msg="StartContainer for \"80cdeec78e5669968888e3629e09d65e29bc3efa9f6076be6ac6dbdb36f0dc40\" returns successfully"
Jul 2 00:14:51.732272 containerd[1581]: time="2024-07-02T00:14:51.732193812Z" level=info msg="StartContainer for \"7f70b1fc90e49730107c186b5f6f38246b29cc9f57c1e8b80c4ec07fc8231ade\" returns successfully"
Jul 2 00:14:51.735916 kubelet[2733]: E0702 00:14:51.735831 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:14:52.089124 systemd[1]: Started sshd@10-10.0.0.45:22-10.0.0.1:59760.service - OpenSSH per-connection server daemon (10.0.0.1:59760).
Jul 2 00:14:52.122874 sshd[4147]: Accepted publickey for core from 10.0.0.1 port 59760 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:14:52.125125 sshd[4147]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:14:52.130035 systemd-logind[1559]: New session 11 of user core.
Jul 2 00:14:52.145253 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 2 00:14:52.271290 sshd[4147]: pam_unix(sshd:session): session closed for user core
Jul 2 00:14:52.276350 systemd[1]: sshd@10-10.0.0.45:22-10.0.0.1:59760.service: Deactivated successfully.
Jul 2 00:14:52.278866 systemd-logind[1559]: Session 11 logged out. Waiting for processes to exit.
Jul 2 00:14:52.279054 systemd[1]: session-11.scope: Deactivated successfully.
Jul 2 00:14:52.280110 systemd-logind[1559]: Removed session 11.
Jul 2 00:14:52.737792 kubelet[2733]: E0702 00:14:52.737509 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:14:52.737792 kubelet[2733]: E0702 00:14:52.737545 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:14:52.764088 kubelet[2733]: I0702 00:14:52.763832 2733 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-vkk68" podStartSLOduration=33.763768832 podCreationTimestamp="2024-07-02 00:14:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:14:51.771851013 +0000 UTC m=+43.535209420" watchObservedRunningTime="2024-07-02 00:14:52.763768832 +0000 UTC m=+44.527127209"
Jul 2 00:14:52.764088 kubelet[2733]: I0702 00:14:52.763944 2733 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-24mrs" podStartSLOduration=33.763910513 podCreationTimestamp="2024-07-02 00:14:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:14:52.762527351 +0000 UTC m=+44.525885738" watchObservedRunningTime="2024-07-02 00:14:52.763910513 +0000 UTC m=+44.527268890"
Jul 2 00:14:53.738757 kubelet[2733]: E0702 00:14:53.738711 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:14:54.127312 kubelet[2733]: E0702 00:14:54.127194 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:14:54.740679 kubelet[2733]: E0702 00:14:54.740636 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:14:57.290186 systemd[1]: Started sshd@11-10.0.0.45:22-10.0.0.1:59774.service - OpenSSH per-connection server daemon (10.0.0.1:59774).
Jul 2 00:14:57.322975 sshd[4173]: Accepted publickey for core from 10.0.0.1 port 59774 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:14:57.324681 sshd[4173]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:14:57.329265 systemd-logind[1559]: New session 12 of user core.
Jul 2 00:14:57.337146 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 2 00:14:57.463840 sshd[4173]: pam_unix(sshd:session): session closed for user core
Jul 2 00:14:57.468082 systemd[1]: sshd@11-10.0.0.45:22-10.0.0.1:59774.service: Deactivated successfully.
Jul 2 00:14:57.470728 systemd-logind[1559]: Session 12 logged out. Waiting for processes to exit.
Jul 2 00:14:57.470774 systemd[1]: session-12.scope: Deactivated successfully.
Jul 2 00:14:57.471921 systemd-logind[1559]: Removed session 12.
Jul 2 00:15:02.477103 systemd[1]: Started sshd@12-10.0.0.45:22-10.0.0.1:51972.service - OpenSSH per-connection server daemon (10.0.0.1:51972).
Jul 2 00:15:02.512437 sshd[4189]: Accepted publickey for core from 10.0.0.1 port 51972 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:15:02.514168 sshd[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:15:02.518615 systemd-logind[1559]: New session 13 of user core.
Jul 2 00:15:02.525181 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 2 00:15:02.649216 sshd[4189]: pam_unix(sshd:session): session closed for user core
Jul 2 00:15:02.659327 systemd[1]: Started sshd@13-10.0.0.45:22-10.0.0.1:51976.service - OpenSSH per-connection server daemon (10.0.0.1:51976).
Jul 2 00:15:02.660419 systemd[1]: sshd@12-10.0.0.45:22-10.0.0.1:51972.service: Deactivated successfully.
Jul 2 00:15:02.663343 systemd[1]: session-13.scope: Deactivated successfully.
Jul 2 00:15:02.665232 systemd-logind[1559]: Session 13 logged out. Waiting for processes to exit.
Jul 2 00:15:02.666245 systemd-logind[1559]: Removed session 13.
Jul 2 00:15:02.692247 sshd[4203]: Accepted publickey for core from 10.0.0.1 port 51976 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:15:02.693855 sshd[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:15:02.698082 systemd-logind[1559]: New session 14 of user core.
Jul 2 00:15:02.712247 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 2 00:15:03.582787 sshd[4203]: pam_unix(sshd:session): session closed for user core
Jul 2 00:15:03.591750 systemd[1]: Started sshd@14-10.0.0.45:22-10.0.0.1:51988.service - OpenSSH per-connection server daemon (10.0.0.1:51988).
Jul 2 00:15:03.592383 systemd[1]: sshd@13-10.0.0.45:22-10.0.0.1:51976.service: Deactivated successfully.
Jul 2 00:15:03.601095 systemd[1]: session-14.scope: Deactivated successfully.
Jul 2 00:15:03.603046 systemd-logind[1559]: Session 14 logged out. Waiting for processes to exit.
Jul 2 00:15:03.604141 systemd-logind[1559]: Removed session 14.
Jul 2 00:15:03.633488 sshd[4215]: Accepted publickey for core from 10.0.0.1 port 51988 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:15:03.635191 sshd[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:15:03.639576 systemd-logind[1559]: New session 15 of user core.
Jul 2 00:15:03.647080 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 2 00:15:03.761105 sshd[4215]: pam_unix(sshd:session): session closed for user core
Jul 2 00:15:03.766913 systemd[1]: sshd@14-10.0.0.45:22-10.0.0.1:51988.service: Deactivated successfully.
Jul 2 00:15:03.770128 systemd[1]: session-15.scope: Deactivated successfully.
Jul 2 00:15:03.771022 systemd-logind[1559]: Session 15 logged out. Waiting for processes to exit.
Jul 2 00:15:03.772128 systemd-logind[1559]: Removed session 15.
Jul 2 00:15:08.781025 systemd[1]: Started sshd@15-10.0.0.45:22-10.0.0.1:33722.service - OpenSSH per-connection server daemon (10.0.0.1:33722).
Jul 2 00:15:08.814784 sshd[4236]: Accepted publickey for core from 10.0.0.1 port 33722 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:15:08.816866 sshd[4236]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:15:08.823365 systemd-logind[1559]: New session 16 of user core.
Jul 2 00:15:08.832277 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 2 00:15:08.953673 sshd[4236]: pam_unix(sshd:session): session closed for user core
Jul 2 00:15:08.958945 systemd[1]: sshd@15-10.0.0.45:22-10.0.0.1:33722.service: Deactivated successfully.
Jul 2 00:15:08.962122 systemd[1]: session-16.scope: Deactivated successfully.
Jul 2 00:15:08.963111 systemd-logind[1559]: Session 16 logged out. Waiting for processes to exit.
Jul 2 00:15:08.964581 systemd-logind[1559]: Removed session 16.
Jul 2 00:15:13.964120 systemd[1]: Started sshd@16-10.0.0.45:22-10.0.0.1:33738.service - OpenSSH per-connection server daemon (10.0.0.1:33738).
Jul 2 00:15:13.997433 sshd[4251]: Accepted publickey for core from 10.0.0.1 port 33738 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:15:13.999220 sshd[4251]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:15:14.003387 systemd-logind[1559]: New session 17 of user core.
Jul 2 00:15:14.010151 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 2 00:15:14.116079 sshd[4251]: pam_unix(sshd:session): session closed for user core
Jul 2 00:15:14.128232 systemd[1]: Started sshd@17-10.0.0.45:22-10.0.0.1:33754.service - OpenSSH per-connection server daemon (10.0.0.1:33754).
Jul 2 00:15:14.128905 systemd[1]: sshd@16-10.0.0.45:22-10.0.0.1:33738.service: Deactivated successfully.
Jul 2 00:15:14.131282 systemd[1]: session-17.scope: Deactivated successfully.
Jul 2 00:15:14.133182 systemd-logind[1559]: Session 17 logged out. Waiting for processes to exit.
Jul 2 00:15:14.134568 systemd-logind[1559]: Removed session 17.
Jul 2 00:15:14.161143 sshd[4263]: Accepted publickey for core from 10.0.0.1 port 33754 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:15:14.162670 sshd[4263]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:15:14.166627 systemd-logind[1559]: New session 18 of user core.
Jul 2 00:15:14.172180 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 2 00:15:14.600207 sshd[4263]: pam_unix(sshd:session): session closed for user core
Jul 2 00:15:14.608052 systemd[1]: Started sshd@18-10.0.0.45:22-10.0.0.1:33762.service - OpenSSH per-connection server daemon (10.0.0.1:33762).
Jul 2 00:15:14.608668 systemd[1]: sshd@17-10.0.0.45:22-10.0.0.1:33754.service: Deactivated successfully.
Jul 2 00:15:14.610924 systemd[1]: session-18.scope: Deactivated successfully.
Jul 2 00:15:14.612598 systemd-logind[1559]: Session 18 logged out. Waiting for processes to exit.
Jul 2 00:15:14.613661 systemd-logind[1559]: Removed session 18.
Jul 2 00:15:14.642377 sshd[4277]: Accepted publickey for core from 10.0.0.1 port 33762 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:15:14.644138 sshd[4277]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:15:14.648383 systemd-logind[1559]: New session 19 of user core.
Jul 2 00:15:14.657300 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 2 00:15:15.706374 sshd[4277]: pam_unix(sshd:session): session closed for user core
Jul 2 00:15:15.724198 systemd[1]: Started sshd@19-10.0.0.45:22-10.0.0.1:33778.service - OpenSSH per-connection server daemon (10.0.0.1:33778).
Jul 2 00:15:15.727776 systemd[1]: sshd@18-10.0.0.45:22-10.0.0.1:33762.service: Deactivated successfully.
Jul 2 00:15:15.733541 systemd[1]: session-19.scope: Deactivated successfully.
Jul 2 00:15:15.740391 systemd-logind[1559]: Session 19 logged out. Waiting for processes to exit.
Jul 2 00:15:15.741632 systemd-logind[1559]: Removed session 19.
Jul 2 00:15:15.766710 sshd[4298]: Accepted publickey for core from 10.0.0.1 port 33778 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:15:15.768842 sshd[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:15:15.773632 systemd-logind[1559]: New session 20 of user core.
Jul 2 00:15:15.780276 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 2 00:15:16.179450 sshd[4298]: pam_unix(sshd:session): session closed for user core
Jul 2 00:15:16.189202 systemd[1]: Started sshd@20-10.0.0.45:22-10.0.0.1:33792.service - OpenSSH per-connection server daemon (10.0.0.1:33792).
Jul 2 00:15:16.189900 systemd[1]: sshd@19-10.0.0.45:22-10.0.0.1:33778.service: Deactivated successfully.
Jul 2 00:15:16.193352 systemd-logind[1559]: Session 20 logged out. Waiting for processes to exit.
Jul 2 00:15:16.195375 systemd[1]: session-20.scope: Deactivated successfully.
Jul 2 00:15:16.196754 systemd-logind[1559]: Removed session 20.
Jul 2 00:15:16.223201 sshd[4311]: Accepted publickey for core from 10.0.0.1 port 33792 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:15:16.225188 sshd[4311]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:15:16.230448 systemd-logind[1559]: New session 21 of user core.
Jul 2 00:15:16.238154 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 2 00:15:16.348518 sshd[4311]: pam_unix(sshd:session): session closed for user core
Jul 2 00:15:16.352462 systemd[1]: sshd@20-10.0.0.45:22-10.0.0.1:33792.service: Deactivated successfully.
Jul 2 00:15:16.355017 systemd[1]: session-21.scope: Deactivated successfully.
Jul 2 00:15:16.355874 systemd-logind[1559]: Session 21 logged out. Waiting for processes to exit.
Jul 2 00:15:16.356830 systemd-logind[1559]: Removed session 21.
Jul 2 00:15:18.322247 kubelet[2733]: E0702 00:15:18.322189 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:15:21.363138 systemd[1]: Started sshd@21-10.0.0.45:22-10.0.0.1:39540.service - OpenSSH per-connection server daemon (10.0.0.1:39540).
Jul 2 00:15:21.396364 sshd[4329]: Accepted publickey for core from 10.0.0.1 port 39540 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:15:21.398407 sshd[4329]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:15:21.405394 systemd-logind[1559]: New session 22 of user core.
Jul 2 00:15:21.413353 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 2 00:15:21.547350 sshd[4329]: pam_unix(sshd:session): session closed for user core
Jul 2 00:15:21.552342 systemd[1]: sshd@21-10.0.0.45:22-10.0.0.1:39540.service: Deactivated successfully.
Jul 2 00:15:21.556918 systemd-logind[1559]: Session 22 logged out. Waiting for processes to exit.
Jul 2 00:15:21.557084 systemd[1]: session-22.scope: Deactivated successfully.
Jul 2 00:15:21.558863 systemd-logind[1559]: Removed session 22.
Jul 2 00:15:22.322911 kubelet[2733]: E0702 00:15:22.322852 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:15:26.557048 systemd[1]: Started sshd@22-10.0.0.45:22-10.0.0.1:39542.service - OpenSSH per-connection server daemon (10.0.0.1:39542).
Jul 2 00:15:26.592490 sshd[4349]: Accepted publickey for core from 10.0.0.1 port 39542 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:15:26.594199 sshd[4349]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:15:26.598269 systemd-logind[1559]: New session 23 of user core.
Jul 2 00:15:26.606068 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 2 00:15:26.708770 sshd[4349]: pam_unix(sshd:session): session closed for user core
Jul 2 00:15:26.713084 systemd[1]: sshd@22-10.0.0.45:22-10.0.0.1:39542.service: Deactivated successfully.
Jul 2 00:15:26.715916 systemd[1]: session-23.scope: Deactivated successfully.
Jul 2 00:15:26.716825 systemd-logind[1559]: Session 23 logged out. Waiting for processes to exit.
Jul 2 00:15:26.717716 systemd-logind[1559]: Removed session 23.
Jul 2 00:15:31.729094 systemd[1]: Started sshd@23-10.0.0.45:22-10.0.0.1:35638.service - OpenSSH per-connection server daemon (10.0.0.1:35638).
Jul 2 00:15:31.760385 sshd[4364]: Accepted publickey for core from 10.0.0.1 port 35638 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:15:31.762032 sshd[4364]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:15:31.766403 systemd-logind[1559]: New session 24 of user core.
Jul 2 00:15:31.778104 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 2 00:15:31.885035 sshd[4364]: pam_unix(sshd:session): session closed for user core
Jul 2 00:15:31.889456 systemd[1]: sshd@23-10.0.0.45:22-10.0.0.1:35638.service: Deactivated successfully.
Jul 2 00:15:31.892211 systemd[1]: session-24.scope: Deactivated successfully.
Jul 2 00:15:31.892312 systemd-logind[1559]: Session 24 logged out. Waiting for processes to exit.
Jul 2 00:15:31.893426 systemd-logind[1559]: Removed session 24.
Jul 2 00:15:36.900278 systemd[1]: Started sshd@24-10.0.0.45:22-10.0.0.1:35646.service - OpenSSH per-connection server daemon (10.0.0.1:35646).
Jul 2 00:15:36.936414 sshd[4379]: Accepted publickey for core from 10.0.0.1 port 35646 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:15:36.938177 sshd[4379]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:15:36.942838 systemd-logind[1559]: New session 25 of user core.
Jul 2 00:15:36.948287 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 2 00:15:37.070334 sshd[4379]: pam_unix(sshd:session): session closed for user core
Jul 2 00:15:37.078388 systemd[1]: Started sshd@25-10.0.0.45:22-10.0.0.1:35654.service - OpenSSH per-connection server daemon (10.0.0.1:35654).
Jul 2 00:15:37.079052 systemd[1]: sshd@24-10.0.0.45:22-10.0.0.1:35646.service: Deactivated successfully.
Jul 2 00:15:37.086472 systemd[1]: session-25.scope: Deactivated successfully.
Jul 2 00:15:37.087895 systemd-logind[1559]: Session 25 logged out. Waiting for processes to exit.
Jul 2 00:15:37.089391 systemd-logind[1559]: Removed session 25.
Jul 2 00:15:37.113503 sshd[4392]: Accepted publickey for core from 10.0.0.1 port 35654 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:15:37.115490 sshd[4392]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:15:37.120084 systemd-logind[1559]: New session 26 of user core.
Jul 2 00:15:37.131127 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 2 00:15:38.902502 containerd[1581]: time="2024-07-02T00:15:38.902173428Z" level=info msg="StopContainer for \"6b1150098c178240d14f80a2216cfc0c1ffad09d84e0255f4692c7dd5376c308\" with timeout 30 (s)"
Jul 2 00:15:38.923865 containerd[1581]: time="2024-07-02T00:15:38.922359343Z" level=info msg="Stop container \"6b1150098c178240d14f80a2216cfc0c1ffad09d84e0255f4692c7dd5376c308\" with signal terminated"
Jul 2 00:15:38.953198 containerd[1581]: time="2024-07-02T00:15:38.953128361Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 00:15:38.961302 containerd[1581]: time="2024-07-02T00:15:38.961239373Z" level=info msg="StopContainer for \"136201785d47a1fc31dda28cc3961aadc3a9f7965869a859db677aeadbf46905\" with timeout 2 (s)"
Jul 2 00:15:38.961589 containerd[1581]: time="2024-07-02T00:15:38.961568564Z" level=info msg="Stop container \"136201785d47a1fc31dda28cc3961aadc3a9f7965869a859db677aeadbf46905\" with signal terminated"
Jul 2 00:15:38.969189 systemd-networkd[1245]: lxc_health: Link DOWN
Jul 2 00:15:38.969196 systemd-networkd[1245]: lxc_health: Lost carrier
Jul 2 00:15:38.970936 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b1150098c178240d14f80a2216cfc0c1ffad09d84e0255f4692c7dd5376c308-rootfs.mount: Deactivated successfully.
Jul 2 00:15:38.987053 containerd[1581]: time="2024-07-02T00:15:38.986951903Z" level=info msg="shim disconnected" id=6b1150098c178240d14f80a2216cfc0c1ffad09d84e0255f4692c7dd5376c308 namespace=k8s.io
Jul 2 00:15:38.987053 containerd[1581]: time="2024-07-02T00:15:38.987042955Z" level=warning msg="cleaning up after shim disconnected" id=6b1150098c178240d14f80a2216cfc0c1ffad09d84e0255f4692c7dd5376c308 namespace=k8s.io
Jul 2 00:15:38.987053 containerd[1581]: time="2024-07-02T00:15:38.987054938Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:15:39.013369 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-136201785d47a1fc31dda28cc3961aadc3a9f7965869a859db677aeadbf46905-rootfs.mount: Deactivated successfully.
Jul 2 00:15:39.018829 containerd[1581]: time="2024-07-02T00:15:39.018724373Z" level=info msg="shim disconnected" id=136201785d47a1fc31dda28cc3961aadc3a9f7965869a859db677aeadbf46905 namespace=k8s.io
Jul 2 00:15:39.018829 containerd[1581]: time="2024-07-02T00:15:39.018795257Z" level=warning msg="cleaning up after shim disconnected" id=136201785d47a1fc31dda28cc3961aadc3a9f7965869a859db677aeadbf46905 namespace=k8s.io
Jul 2 00:15:39.018829 containerd[1581]: time="2024-07-02T00:15:39.018829813Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:15:39.019060 containerd[1581]: time="2024-07-02T00:15:39.018949228Z" level=info msg="StopContainer for \"6b1150098c178240d14f80a2216cfc0c1ffad09d84e0255f4692c7dd5376c308\" returns successfully"
Jul 2 00:15:39.023068 containerd[1581]: time="2024-07-02T00:15:39.022975427Z" level=info msg="StopPodSandbox for \"378f478afb548e7ebb1e587b1010ce3cfa4a5b0442a7572575ef11a820a0adee\""
Jul 2 00:15:39.032285 containerd[1581]: time="2024-07-02T00:15:39.023039418Z" level=info msg="Container to stop \"6b1150098c178240d14f80a2216cfc0c1ffad09d84e0255f4692c7dd5376c308\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:15:39.035185 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-378f478afb548e7ebb1e587b1010ce3cfa4a5b0442a7572575ef11a820a0adee-shm.mount: Deactivated successfully.
Jul 2 00:15:39.044314 containerd[1581]: time="2024-07-02T00:15:39.044267317Z" level=info msg="StopContainer for \"136201785d47a1fc31dda28cc3961aadc3a9f7965869a859db677aeadbf46905\" returns successfully"
Jul 2 00:15:39.045055 containerd[1581]: time="2024-07-02T00:15:39.044832856Z" level=info msg="StopPodSandbox for \"8ce4a9849cd398174fd20f8cff6ebeb69acf2db4111f6b595c0639fbb2cd754d\""
Jul 2 00:15:39.045055 containerd[1581]: time="2024-07-02T00:15:39.044872742Z" level=info msg="Container to stop \"136201785d47a1fc31dda28cc3961aadc3a9f7965869a859db677aeadbf46905\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:15:39.045055 containerd[1581]: time="2024-07-02T00:15:39.044912987Z" level=info msg="Container to stop \"f6a516e3574670552d2e4f2c0e98a7d85e031bb853a14363149f1e29f8d03f2f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:15:39.045055 containerd[1581]: time="2024-07-02T00:15:39.044922415Z" level=info msg="Container to stop \"403b25a27bf97bf2e86471c7cb502f4dfb080cd9b31a1930e9ce8b4af06d170b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:15:39.045055 containerd[1581]: time="2024-07-02T00:15:39.044933145Z" level=info msg="Container to stop \"b378a4fa9322f12b629e1160aea6a1ad2001baa5bb118dfeeb791a7d8f9f3c36\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:15:39.045055 containerd[1581]: time="2024-07-02T00:15:39.044941983Z" level=info msg="Container to stop \"2145d46b7b1c96e92bbcccf7e021a00b17c86ed7992a22fbeec3bab99232f07e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:15:39.048326 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8ce4a9849cd398174fd20f8cff6ebeb69acf2db4111f6b595c0639fbb2cd754d-shm.mount: Deactivated successfully.
Jul 2 00:15:39.069312 containerd[1581]: time="2024-07-02T00:15:39.069187264Z" level=info msg="shim disconnected" id=378f478afb548e7ebb1e587b1010ce3cfa4a5b0442a7572575ef11a820a0adee namespace=k8s.io
Jul 2 00:15:39.069312 containerd[1581]: time="2024-07-02T00:15:39.069258989Z" level=warning msg="cleaning up after shim disconnected" id=378f478afb548e7ebb1e587b1010ce3cfa4a5b0442a7572575ef11a820a0adee namespace=k8s.io
Jul 2 00:15:39.069312 containerd[1581]: time="2024-07-02T00:15:39.069268457Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:15:39.076312 containerd[1581]: time="2024-07-02T00:15:39.076106109Z" level=info msg="shim disconnected" id=8ce4a9849cd398174fd20f8cff6ebeb69acf2db4111f6b595c0639fbb2cd754d namespace=k8s.io
Jul 2 00:15:39.076312 containerd[1581]: time="2024-07-02T00:15:39.076169559Z" level=warning msg="cleaning up after shim disconnected" id=8ce4a9849cd398174fd20f8cff6ebeb69acf2db4111f6b595c0639fbb2cd754d namespace=k8s.io
Jul 2 00:15:39.076312 containerd[1581]: time="2024-07-02T00:15:39.076181291Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:15:39.091443 containerd[1581]: time="2024-07-02T00:15:39.091379805Z" level=info msg="TearDown network for sandbox \"378f478afb548e7ebb1e587b1010ce3cfa4a5b0442a7572575ef11a820a0adee\" successfully"
Jul 2 00:15:39.091443 containerd[1581]: time="2024-07-02T00:15:39.091419691Z" level=info msg="StopPodSandbox for \"378f478afb548e7ebb1e587b1010ce3cfa4a5b0442a7572575ef11a820a0adee\" returns successfully"
Jul 2 00:15:39.095382 containerd[1581]: time="2024-07-02T00:15:39.095343536Z" level=info msg="TearDown network for sandbox \"8ce4a9849cd398174fd20f8cff6ebeb69acf2db4111f6b595c0639fbb2cd754d\" successfully"
Jul 2 00:15:39.095382 containerd[1581]: time="2024-07-02T00:15:39.095373022Z" level=info msg="StopPodSandbox for \"8ce4a9849cd398174fd20f8cff6ebeb69acf2db4111f6b595c0639fbb2cd754d\" returns successfully"
Jul 2 00:15:39.283232 kubelet[2733]: I0702 00:15:39.283173 2733 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a2aa545f-861c-404e-9a40-8ebd336d2136-hubble-tls\") pod \"a2aa545f-861c-404e-9a40-8ebd336d2136\" (UID: \"a2aa545f-861c-404e-9a40-8ebd336d2136\") "
Jul 2 00:15:39.283232 kubelet[2733]: I0702 00:15:39.283225 2733 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a2aa545f-861c-404e-9a40-8ebd336d2136-host-proc-sys-net\") pod \"a2aa545f-861c-404e-9a40-8ebd336d2136\" (UID: \"a2aa545f-861c-404e-9a40-8ebd336d2136\") "
Jul 2 00:15:39.283232 kubelet[2733]: I0702 00:15:39.283247 2733 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a2aa545f-861c-404e-9a40-8ebd336d2136-cilium-run\") pod \"a2aa545f-861c-404e-9a40-8ebd336d2136\" (UID: \"a2aa545f-861c-404e-9a40-8ebd336d2136\") "
Jul 2 00:15:39.283918 kubelet[2733]: I0702 00:15:39.283271 2733 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sh5z9\" (UniqueName: \"kubernetes.io/projected/8aef32bc-e690-43c4-b76e-9e03c5399342-kube-api-access-sh5z9\") pod \"8aef32bc-e690-43c4-b76e-9e03c5399342\" (UID: \"8aef32bc-e690-43c4-b76e-9e03c5399342\") "
Jul 2 00:15:39.283918 kubelet[2733]: I0702 00:15:39.283294 2733 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2aa545f-861c-404e-9a40-8ebd336d2136-lib-modules\") pod \"a2aa545f-861c-404e-9a40-8ebd336d2136\" (UID: \"a2aa545f-861c-404e-9a40-8ebd336d2136\") "
Jul 2 00:15:39.283918 kubelet[2733]: I0702 00:15:39.283313 2733 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a2aa545f-861c-404e-9a40-8ebd336d2136-bpf-maps\") pod \"a2aa545f-861c-404e-9a40-8ebd336d2136\" (UID: \"a2aa545f-861c-404e-9a40-8ebd336d2136\") "
Jul 2 00:15:39.283918 kubelet[2733]: I0702 00:15:39.283333 2733 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2aa545f-861c-404e-9a40-8ebd336d2136-xtables-lock\") pod \"a2aa545f-861c-404e-9a40-8ebd336d2136\" (UID: \"a2aa545f-861c-404e-9a40-8ebd336d2136\") "
Jul 2 00:15:39.283918 kubelet[2733]: I0702 00:15:39.283359 2733 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8aef32bc-e690-43c4-b76e-9e03c5399342-cilium-config-path\") pod \"8aef32bc-e690-43c4-b76e-9e03c5399342\" (UID: \"8aef32bc-e690-43c4-b76e-9e03c5399342\") "
Jul 2 00:15:39.283918 kubelet[2733]: I0702 00:15:39.283382 2733 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a2aa545f-861c-404e-9a40-8ebd336d2136-host-proc-sys-kernel\") pod \"a2aa545f-861c-404e-9a40-8ebd336d2136\" (UID: \"a2aa545f-861c-404e-9a40-8ebd336d2136\") "
Jul 2 00:15:39.284108 kubelet[2733]: I0702 00:15:39.283419 2733 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a2aa545f-861c-404e-9a40-8ebd336d2136-etc-cni-netd\") pod \"a2aa545f-861c-404e-9a40-8ebd336d2136\" (UID: \"a2aa545f-861c-404e-9a40-8ebd336d2136\") "
Jul 2 00:15:39.284108 kubelet[2733]: I0702 00:15:39.283442 2733 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a2aa545f-861c-404e-9a40-8ebd336d2136-clustermesh-secrets\") pod \"a2aa545f-861c-404e-9a40-8ebd336d2136\" (UID: \"a2aa545f-861c-404e-9a40-8ebd336d2136\") "
Jul 2 00:15:39.284108 kubelet[2733]: I0702 00:15:39.283463 2733 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a2aa545f-861c-404e-9a40-8ebd336d2136-cni-path\") pod \"a2aa545f-861c-404e-9a40-8ebd336d2136\" (UID: \"a2aa545f-861c-404e-9a40-8ebd336d2136\") "
Jul 2 00:15:39.284108 kubelet[2733]: I0702 00:15:39.283487 2733 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a2aa545f-861c-404e-9a40-8ebd336d2136-cilium-config-path\") pod \"a2aa545f-861c-404e-9a40-8ebd336d2136\" (UID: \"a2aa545f-861c-404e-9a40-8ebd336d2136\") "
Jul 2 00:15:39.284108 kubelet[2733]: I0702 00:15:39.283508 2733 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a2aa545f-861c-404e-9a40-8ebd336d2136-cilium-cgroup\") pod \"a2aa545f-861c-404e-9a40-8ebd336d2136\" (UID: \"a2aa545f-861c-404e-9a40-8ebd336d2136\") "
Jul 2 00:15:39.284108 kubelet[2733]: I0702 00:15:39.283530 2733 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a2aa545f-861c-404e-9a40-8ebd336d2136-hostproc\") pod \"a2aa545f-861c-404e-9a40-8ebd336d2136\" (UID: \"a2aa545f-861c-404e-9a40-8ebd336d2136\") "
Jul 2 00:15:39.284277 kubelet[2733]: I0702 00:15:39.283554 2733 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czcjp\" (UniqueName: \"kubernetes.io/projected/a2aa545f-861c-404e-9a40-8ebd336d2136-kube-api-access-czcjp\") pod \"a2aa545f-861c-404e-9a40-8ebd336d2136\" (UID: \"a2aa545f-861c-404e-9a40-8ebd336d2136\") "
Jul 2 00:15:39.286839 kubelet[2733]: I0702 00:15:39.284345 2733 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2aa545f-861c-404e-9a40-8ebd336d2136-cni-path" (OuterVolumeSpecName: "cni-path") pod "a2aa545f-861c-404e-9a40-8ebd336d2136" (UID: "a2aa545f-861c-404e-9a40-8ebd336d2136"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:15:39.286839 kubelet[2733]: I0702 00:15:39.284401 2733 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2aa545f-861c-404e-9a40-8ebd336d2136-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a2aa545f-861c-404e-9a40-8ebd336d2136" (UID: "a2aa545f-861c-404e-9a40-8ebd336d2136"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:15:39.286839 kubelet[2733]: I0702 00:15:39.284429 2733 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2aa545f-861c-404e-9a40-8ebd336d2136-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a2aa545f-861c-404e-9a40-8ebd336d2136" (UID: "a2aa545f-861c-404e-9a40-8ebd336d2136"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:15:39.286839 kubelet[2733]: I0702 00:15:39.285677 2733 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2aa545f-861c-404e-9a40-8ebd336d2136-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a2aa545f-861c-404e-9a40-8ebd336d2136" (UID: "a2aa545f-861c-404e-9a40-8ebd336d2136"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:15:39.286839 kubelet[2733]: I0702 00:15:39.285713 2733 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2aa545f-861c-404e-9a40-8ebd336d2136-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a2aa545f-861c-404e-9a40-8ebd336d2136" (UID: "a2aa545f-861c-404e-9a40-8ebd336d2136"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:15:39.287016 kubelet[2733]: I0702 00:15:39.285737 2733 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2aa545f-861c-404e-9a40-8ebd336d2136-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a2aa545f-861c-404e-9a40-8ebd336d2136" (UID: "a2aa545f-861c-404e-9a40-8ebd336d2136"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:15:39.287016 kubelet[2733]: I0702 00:15:39.285756 2733 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2aa545f-861c-404e-9a40-8ebd336d2136-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a2aa545f-861c-404e-9a40-8ebd336d2136" (UID: "a2aa545f-861c-404e-9a40-8ebd336d2136"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:15:39.287016 kubelet[2733]: I0702 00:15:39.285776 2733 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2aa545f-861c-404e-9a40-8ebd336d2136-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a2aa545f-861c-404e-9a40-8ebd336d2136" (UID: "a2aa545f-861c-404e-9a40-8ebd336d2136"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:15:39.287016 kubelet[2733]: I0702 00:15:39.285798 2733 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2aa545f-861c-404e-9a40-8ebd336d2136-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a2aa545f-861c-404e-9a40-8ebd336d2136" (UID: "a2aa545f-861c-404e-9a40-8ebd336d2136"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:15:39.287016 kubelet[2733]: I0702 00:15:39.285838 2733 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2aa545f-861c-404e-9a40-8ebd336d2136-hostproc" (OuterVolumeSpecName: "hostproc") pod "a2aa545f-861c-404e-9a40-8ebd336d2136" (UID: "a2aa545f-861c-404e-9a40-8ebd336d2136"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:15:39.288134 kubelet[2733]: I0702 00:15:39.288102 2733 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8aef32bc-e690-43c4-b76e-9e03c5399342-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8aef32bc-e690-43c4-b76e-9e03c5399342" (UID: "8aef32bc-e690-43c4-b76e-9e03c5399342"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 2 00:15:39.290101 kubelet[2733]: I0702 00:15:39.290056 2733 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2aa545f-861c-404e-9a40-8ebd336d2136-kube-api-access-czcjp" (OuterVolumeSpecName: "kube-api-access-czcjp") pod "a2aa545f-861c-404e-9a40-8ebd336d2136" (UID: "a2aa545f-861c-404e-9a40-8ebd336d2136"). InnerVolumeSpecName "kube-api-access-czcjp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 00:15:39.291501 kubelet[2733]: I0702 00:15:39.291469 2733 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8aef32bc-e690-43c4-b76e-9e03c5399342-kube-api-access-sh5z9" (OuterVolumeSpecName: "kube-api-access-sh5z9") pod "8aef32bc-e690-43c4-b76e-9e03c5399342" (UID: "8aef32bc-e690-43c4-b76e-9e03c5399342"). InnerVolumeSpecName "kube-api-access-sh5z9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 00:15:39.291593 kubelet[2733]: I0702 00:15:39.291483 2733 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2aa545f-861c-404e-9a40-8ebd336d2136-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a2aa545f-861c-404e-9a40-8ebd336d2136" (UID: "a2aa545f-861c-404e-9a40-8ebd336d2136"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 2 00:15:39.291632 kubelet[2733]: I0702 00:15:39.291615 2733 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2aa545f-861c-404e-9a40-8ebd336d2136-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a2aa545f-861c-404e-9a40-8ebd336d2136" (UID: "a2aa545f-861c-404e-9a40-8ebd336d2136"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 2 00:15:39.292201 kubelet[2733]: I0702 00:15:39.292170 2733 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2aa545f-861c-404e-9a40-8ebd336d2136-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a2aa545f-861c-404e-9a40-8ebd336d2136" (UID: "a2aa545f-861c-404e-9a40-8ebd336d2136"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 00:15:39.384683 kubelet[2733]: I0702 00:15:39.384612 2733 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a2aa545f-861c-404e-9a40-8ebd336d2136-bpf-maps\") on node \"localhost\" DevicePath \"\""
Jul 2 00:15:39.384683 kubelet[2733]: I0702 00:15:39.384654 2733 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2aa545f-861c-404e-9a40-8ebd336d2136-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jul 2 00:15:39.384683 kubelet[2733]: I0702 00:15:39.384670 2733 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-sh5z9\" (UniqueName: \"kubernetes.io/projected/8aef32bc-e690-43c4-b76e-9e03c5399342-kube-api-access-sh5z9\") on node \"localhost\" DevicePath \"\""
Jul 2 00:15:39.384683 kubelet[2733]: I0702 00:15:39.384693 2733 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2aa545f-861c-404e-9a40-8ebd336d2136-lib-modules\") on node \"localhost\" DevicePath \"\""
Jul 2 00:15:39.384683 kubelet[2733]: I0702 00:15:39.384710 2733 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a2aa545f-861c-404e-9a40-8ebd336d2136-cni-path\") on node \"localhost\" DevicePath \"\""
Jul 2 00:15:39.384997 kubelet[2733]: I0702 00:15:39.384724 2733 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8aef32bc-e690-43c4-b76e-9e03c5399342-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 2 00:15:39.384997 kubelet[2733]: I0702 00:15:39.384738 2733 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a2aa545f-861c-404e-9a40-8ebd336d2136-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jul 2 00:15:39.384997 kubelet[2733]: I0702 00:15:39.384751 2733 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a2aa545f-861c-404e-9a40-8ebd336d2136-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jul 2 00:15:39.384997 kubelet[2733]: I0702 00:15:39.384763 2733 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a2aa545f-861c-404e-9a40-8ebd336d2136-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Jul 2 00:15:39.384997 kubelet[2733]: I0702 00:15:39.384779 2733 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a2aa545f-861c-404e-9a40-8ebd336d2136-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 2 00:15:39.384997 kubelet[2733]: I0702 00:15:39.384798 2733 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a2aa545f-861c-404e-9a40-8ebd336d2136-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jul 2 00:15:39.384997 kubelet[2733]: I0702 00:15:39.384838 2733 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a2aa545f-861c-404e-9a40-8ebd336d2136-hostproc\") on node \"localhost\" DevicePath \"\""
Jul 2 00:15:39.384997 kubelet[2733]: I0702 00:15:39.384849 2733 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-czcjp\" (UniqueName: \"kubernetes.io/projected/a2aa545f-861c-404e-9a40-8ebd336d2136-kube-api-access-czcjp\") on node \"localhost\" DevicePath \"\""
Jul 2 00:15:39.385189 kubelet[2733]: I0702 00:15:39.384859 2733 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a2aa545f-861c-404e-9a40-8ebd336d2136-hubble-tls\") on node \"localhost\" DevicePath \"\""
Jul 2 00:15:39.385189 kubelet[2733]: I0702 00:15:39.384868 2733 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a2aa545f-861c-404e-9a40-8ebd336d2136-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Jul 2 00:15:39.385189 kubelet[2733]: I0702 00:15:39.384877 2733 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a2aa545f-861c-404e-9a40-8ebd336d2136-cilium-run\") on node \"localhost\" DevicePath \"\""
Jul 2 00:15:39.871055 kubelet[2733]: I0702 00:15:39.870071 2733 scope.go:117] "RemoveContainer" containerID="6b1150098c178240d14f80a2216cfc0c1ffad09d84e0255f4692c7dd5376c308"
Jul 2 00:15:39.872981 containerd[1581]: time="2024-07-02T00:15:39.872935528Z" level=info msg="RemoveContainer for \"6b1150098c178240d14f80a2216cfc0c1ffad09d84e0255f4692c7dd5376c308\""
Jul 2 00:15:39.889519 containerd[1581]: time="2024-07-02T00:15:39.889295978Z" level=info msg="RemoveContainer for \"6b1150098c178240d14f80a2216cfc0c1ffad09d84e0255f4692c7dd5376c308\" returns successfully"
Jul 2 00:15:39.889947 kubelet[2733]: I0702 00:15:39.889660 2733 scope.go:117] "RemoveContainer" containerID="6b1150098c178240d14f80a2216cfc0c1ffad09d84e0255f4692c7dd5376c308"
Jul 2 00:15:39.890300 containerd[1581]: time="2024-07-02T00:15:39.890237427Z" level=error msg="ContainerStatus for \"6b1150098c178240d14f80a2216cfc0c1ffad09d84e0255f4692c7dd5376c308\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6b1150098c178240d14f80a2216cfc0c1ffad09d84e0255f4692c7dd5376c308\": not found"
Jul 2 00:15:39.908987 kubelet[2733]: E0702 00:15:39.908937 2733 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6b1150098c178240d14f80a2216cfc0c1ffad09d84e0255f4692c7dd5376c308\": not found" containerID="6b1150098c178240d14f80a2216cfc0c1ffad09d84e0255f4692c7dd5376c308"
Jul 2 00:15:39.909322 kubelet[2733]: I0702 00:15:39.909290 2733 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6b1150098c178240d14f80a2216cfc0c1ffad09d84e0255f4692c7dd5376c308"} err="failed to get container status \"6b1150098c178240d14f80a2216cfc0c1ffad09d84e0255f4692c7dd5376c308\": rpc error: code = NotFound desc = an error occurred when try to find container \"6b1150098c178240d14f80a2216cfc0c1ffad09d84e0255f4692c7dd5376c308\": not found"
Jul 2 00:15:39.909322 kubelet[2733]: I0702 00:15:39.909320 2733 scope.go:117] "RemoveContainer" containerID="136201785d47a1fc31dda28cc3961aadc3a9f7965869a859db677aeadbf46905"
Jul 2 00:15:39.916444 containerd[1581]: time="2024-07-02T00:15:39.914727452Z" level=info msg="RemoveContainer for \"136201785d47a1fc31dda28cc3961aadc3a9f7965869a859db677aeadbf46905\""
Jul 2 00:15:39.928645 containerd[1581]: time="2024-07-02T00:15:39.928566667Z" level=info msg="RemoveContainer for \"136201785d47a1fc31dda28cc3961aadc3a9f7965869a859db677aeadbf46905\" returns successfully"
Jul 2 00:15:39.930555 kubelet[2733]: I0702 00:15:39.929701 2733 scope.go:117] "RemoveContainer" containerID="2145d46b7b1c96e92bbcccf7e021a00b17c86ed7992a22fbeec3bab99232f07e"
Jul 2 00:15:39.931216 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-378f478afb548e7ebb1e587b1010ce3cfa4a5b0442a7572575ef11a820a0adee-rootfs.mount: Deactivated successfully.
Jul 2 00:15:39.931490 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ce4a9849cd398174fd20f8cff6ebeb69acf2db4111f6b595c0639fbb2cd754d-rootfs.mount: Deactivated successfully.
Jul 2 00:15:39.931657 systemd[1]: var-lib-kubelet-pods-8aef32bc\x2de690\x2d43c4\x2db76e\x2d9e03c5399342-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsh5z9.mount: Deactivated successfully.
Jul 2 00:15:39.931862 systemd[1]: var-lib-kubelet-pods-a2aa545f\x2d861c\x2d404e\x2d9a40\x2d8ebd336d2136-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dczcjp.mount: Deactivated successfully.
Jul 2 00:15:39.934109 systemd[1]: var-lib-kubelet-pods-a2aa545f\x2d861c\x2d404e\x2d9a40\x2d8ebd336d2136-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jul 2 00:15:39.934302 systemd[1]: var-lib-kubelet-pods-a2aa545f\x2d861c\x2d404e\x2d9a40\x2d8ebd336d2136-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jul 2 00:15:39.936090 containerd[1581]: time="2024-07-02T00:15:39.935308197Z" level=info msg="RemoveContainer for \"2145d46b7b1c96e92bbcccf7e021a00b17c86ed7992a22fbeec3bab99232f07e\""
Jul 2 00:15:39.946379 containerd[1581]: time="2024-07-02T00:15:39.946169434Z" level=info msg="RemoveContainer for \"2145d46b7b1c96e92bbcccf7e021a00b17c86ed7992a22fbeec3bab99232f07e\" returns successfully"
Jul 2 00:15:39.946767 kubelet[2733]: I0702 00:15:39.946728 2733 scope.go:117] "RemoveContainer" containerID="403b25a27bf97bf2e86471c7cb502f4dfb080cd9b31a1930e9ce8b4af06d170b"
Jul 2 00:15:39.955213 containerd[1581]: time="2024-07-02T00:15:39.954722258Z" level=info msg="RemoveContainer for \"403b25a27bf97bf2e86471c7cb502f4dfb080cd9b31a1930e9ce8b4af06d170b\""
Jul 2 00:15:39.963595 containerd[1581]: time="2024-07-02T00:15:39.963454852Z" level=info msg="RemoveContainer for \"403b25a27bf97bf2e86471c7cb502f4dfb080cd9b31a1930e9ce8b4af06d170b\" returns successfully"
Jul 2 00:15:39.964849 kubelet[2733]: I0702 00:15:39.964055 2733 scope.go:117] "RemoveContainer" containerID="f6a516e3574670552d2e4f2c0e98a7d85e031bb853a14363149f1e29f8d03f2f"
Jul 2 00:15:39.974866 containerd[1581]: time="2024-07-02T00:15:39.974245295Z" level=info msg="RemoveContainer for \"f6a516e3574670552d2e4f2c0e98a7d85e031bb853a14363149f1e29f8d03f2f\""
Jul 2 00:15:39.981159 containerd[1581]: time="2024-07-02T00:15:39.981098787Z" level=info msg="RemoveContainer for \"f6a516e3574670552d2e4f2c0e98a7d85e031bb853a14363149f1e29f8d03f2f\" returns successfully"
Jul 2 00:15:39.981482 kubelet[2733]: I0702 00:15:39.981437 2733 scope.go:117] "RemoveContainer" containerID="b378a4fa9322f12b629e1160aea6a1ad2001baa5bb118dfeeb791a7d8f9f3c36"
Jul 2 00:15:39.982883 containerd[1581]: time="2024-07-02T00:15:39.982840228Z" level=info msg="RemoveContainer for \"b378a4fa9322f12b629e1160aea6a1ad2001baa5bb118dfeeb791a7d8f9f3c36\""
Jul 2 00:15:39.990419 containerd[1581]: time="2024-07-02T00:15:39.990356183Z" level=info msg="RemoveContainer for \"b378a4fa9322f12b629e1160aea6a1ad2001baa5bb118dfeeb791a7d8f9f3c36\" returns successfully"
Jul 2 00:15:39.991107 kubelet[2733]: I0702 00:15:39.990677 2733 scope.go:117] "RemoveContainer" containerID="136201785d47a1fc31dda28cc3961aadc3a9f7965869a859db677aeadbf46905"
Jul 2 00:15:39.991176 containerd[1581]: time="2024-07-02T00:15:39.990989951Z" level=error msg="ContainerStatus for \"136201785d47a1fc31dda28cc3961aadc3a9f7965869a859db677aeadbf46905\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"136201785d47a1fc31dda28cc3961aadc3a9f7965869a859db677aeadbf46905\": not found"
Jul 2 00:15:39.991239 kubelet[2733]: E0702 00:15:39.991161 2733 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"136201785d47a1fc31dda28cc3961aadc3a9f7965869a859db677aeadbf46905\": not found" containerID="136201785d47a1fc31dda28cc3961aadc3a9f7965869a859db677aeadbf46905"
Jul 2 00:15:39.991239 kubelet[2733]: I0702 00:15:39.991219 2733 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"136201785d47a1fc31dda28cc3961aadc3a9f7965869a859db677aeadbf46905"} err="failed to get container status \"136201785d47a1fc31dda28cc3961aadc3a9f7965869a859db677aeadbf46905\": rpc error: code = NotFound desc = an error occurred when try to find container \"136201785d47a1fc31dda28cc3961aadc3a9f7965869a859db677aeadbf46905\": not found"
Jul 2 00:15:39.991239 kubelet[2733]: I0702 00:15:39.991231 2733 scope.go:117] "RemoveContainer" containerID="2145d46b7b1c96e92bbcccf7e021a00b17c86ed7992a22fbeec3bab99232f07e"
Jul 2 00:15:39.991701 containerd[1581]: time="2024-07-02T00:15:39.991624861Z" level=error msg="ContainerStatus for \"2145d46b7b1c96e92bbcccf7e021a00b17c86ed7992a22fbeec3bab99232f07e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2145d46b7b1c96e92bbcccf7e021a00b17c86ed7992a22fbeec3bab99232f07e\": not found"
Jul 2 00:15:39.991958 kubelet[2733]: E0702 00:15:39.991892 2733 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2145d46b7b1c96e92bbcccf7e021a00b17c86ed7992a22fbeec3bab99232f07e\": not found" containerID="2145d46b7b1c96e92bbcccf7e021a00b17c86ed7992a22fbeec3bab99232f07e"
Jul 2 00:15:39.991958 kubelet[2733]: I0702 00:15:39.991922 2733 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2145d46b7b1c96e92bbcccf7e021a00b17c86ed7992a22fbeec3bab99232f07e"} err="failed to get container status \"2145d46b7b1c96e92bbcccf7e021a00b17c86ed7992a22fbeec3bab99232f07e\": rpc error: code = NotFound desc = an error occurred when try to find container \"2145d46b7b1c96e92bbcccf7e021a00b17c86ed7992a22fbeec3bab99232f07e\": not found"
Jul 2 00:15:39.991958 kubelet[2733]: I0702 00:15:39.991933 2733 scope.go:117] "RemoveContainer" containerID="403b25a27bf97bf2e86471c7cb502f4dfb080cd9b31a1930e9ce8b4af06d170b"
Jul 2 00:15:39.992101 containerd[1581]: time="2024-07-02T00:15:39.992067858Z" level=error msg="ContainerStatus for \"403b25a27bf97bf2e86471c7cb502f4dfb080cd9b31a1930e9ce8b4af06d170b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"403b25a27bf97bf2e86471c7cb502f4dfb080cd9b31a1930e9ce8b4af06d170b\": not found"
Jul 2 00:15:39.992247 kubelet[2733]: E0702 00:15:39.992163 2733 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"403b25a27bf97bf2e86471c7cb502f4dfb080cd9b31a1930e9ce8b4af06d170b\": not found" containerID="403b25a27bf97bf2e86471c7cb502f4dfb080cd9b31a1930e9ce8b4af06d170b"
Jul 2 00:15:39.992247 kubelet[2733]: I0702 00:15:39.992193 2733 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"403b25a27bf97bf2e86471c7cb502f4dfb080cd9b31a1930e9ce8b4af06d170b"} err="failed to get container status \"403b25a27bf97bf2e86471c7cb502f4dfb080cd9b31a1930e9ce8b4af06d170b\": rpc error: code = NotFound desc = an error occurred when try to find container \"403b25a27bf97bf2e86471c7cb502f4dfb080cd9b31a1930e9ce8b4af06d170b\": not found"
Jul 2 00:15:39.992247 kubelet[2733]: I0702 00:15:39.992203 2733 scope.go:117] "RemoveContainer" containerID="f6a516e3574670552d2e4f2c0e98a7d85e031bb853a14363149f1e29f8d03f2f"
Jul 2 00:15:39.992548 containerd[1581]: time="2024-07-02T00:15:39.992329291Z" level=error msg="ContainerStatus for \"f6a516e3574670552d2e4f2c0e98a7d85e031bb853a14363149f1e29f8d03f2f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f6a516e3574670552d2e4f2c0e98a7d85e031bb853a14363149f1e29f8d03f2f\": not found"
Jul 2 00:15:39.994743 kubelet[2733]: E0702 00:15:39.994694 2733 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f6a516e3574670552d2e4f2c0e98a7d85e031bb853a14363149f1e29f8d03f2f\": not found" containerID="f6a516e3574670552d2e4f2c0e98a7d85e031bb853a14363149f1e29f8d03f2f"
Jul 2 00:15:39.994743 kubelet[2733]: I0702 00:15:39.994725 2733 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f6a516e3574670552d2e4f2c0e98a7d85e031bb853a14363149f1e29f8d03f2f"} err="failed to get container status \"f6a516e3574670552d2e4f2c0e98a7d85e031bb853a14363149f1e29f8d03f2f\": rpc error: code =
NotFound desc = an error occurred when try to find container \"f6a516e3574670552d2e4f2c0e98a7d85e031bb853a14363149f1e29f8d03f2f\": not found" Jul 2 00:15:39.994743 kubelet[2733]: I0702 00:15:39.994734 2733 scope.go:117] "RemoveContainer" containerID="b378a4fa9322f12b629e1160aea6a1ad2001baa5bb118dfeeb791a7d8f9f3c36" Jul 2 00:15:40.005835 containerd[1581]: time="2024-07-02T00:15:40.005689680Z" level=error msg="ContainerStatus for \"b378a4fa9322f12b629e1160aea6a1ad2001baa5bb118dfeeb791a7d8f9f3c36\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b378a4fa9322f12b629e1160aea6a1ad2001baa5bb118dfeeb791a7d8f9f3c36\": not found" Jul 2 00:15:40.007695 kubelet[2733]: E0702 00:15:40.007609 2733 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b378a4fa9322f12b629e1160aea6a1ad2001baa5bb118dfeeb791a7d8f9f3c36\": not found" containerID="b378a4fa9322f12b629e1160aea6a1ad2001baa5bb118dfeeb791a7d8f9f3c36" Jul 2 00:15:40.007772 kubelet[2733]: I0702 00:15:40.007759 2733 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b378a4fa9322f12b629e1160aea6a1ad2001baa5bb118dfeeb791a7d8f9f3c36"} err="failed to get container status \"b378a4fa9322f12b629e1160aea6a1ad2001baa5bb118dfeeb791a7d8f9f3c36\": rpc error: code = NotFound desc = an error occurred when try to find container \"b378a4fa9322f12b629e1160aea6a1ad2001baa5bb118dfeeb791a7d8f9f3c36\": not found" Jul 2 00:15:40.325216 kubelet[2733]: I0702 00:15:40.325166 2733 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="8aef32bc-e690-43c4-b76e-9e03c5399342" path="/var/lib/kubelet/pods/8aef32bc-e690-43c4-b76e-9e03c5399342/volumes" Jul 2 00:15:40.325847 kubelet[2733]: I0702 00:15:40.325787 2733 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a2aa545f-861c-404e-9a40-8ebd336d2136" 
path="/var/lib/kubelet/pods/a2aa545f-861c-404e-9a40-8ebd336d2136/volumes" Jul 2 00:15:40.448131 sshd[4392]: pam_unix(sshd:session): session closed for user core Jul 2 00:15:40.456138 systemd[1]: Started sshd@26-10.0.0.45:22-10.0.0.1:48518.service - OpenSSH per-connection server daemon (10.0.0.1:48518). Jul 2 00:15:40.456713 systemd[1]: sshd@25-10.0.0.45:22-10.0.0.1:35654.service: Deactivated successfully. Jul 2 00:15:40.459068 systemd[1]: session-26.scope: Deactivated successfully. Jul 2 00:15:40.461192 systemd-logind[1559]: Session 26 logged out. Waiting for processes to exit. Jul 2 00:15:40.462239 systemd-logind[1559]: Removed session 26. Jul 2 00:15:40.497219 sshd[4558]: Accepted publickey for core from 10.0.0.1 port 48518 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:15:40.499313 sshd[4558]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:15:40.505104 systemd-logind[1559]: New session 27 of user core. Jul 2 00:15:40.517381 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jul 2 00:15:41.140118 sshd[4558]: pam_unix(sshd:session): session closed for user core
Jul 2 00:15:41.155594 kubelet[2733]: I0702 00:15:41.154087 2733 topology_manager.go:215] "Topology Admit Handler" podUID="fef72ac0-c3a8-4c47-a976-eebb35f5ceff" podNamespace="kube-system" podName="cilium-qljrn"
Jul 2 00:15:41.156891 kubelet[2733]: E0702 00:15:41.156276 2733 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a2aa545f-861c-404e-9a40-8ebd336d2136" containerName="mount-cgroup"
Jul 2 00:15:41.156891 kubelet[2733]: E0702 00:15:41.156311 2733 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8aef32bc-e690-43c4-b76e-9e03c5399342" containerName="cilium-operator"
Jul 2 00:15:41.156891 kubelet[2733]: E0702 00:15:41.156324 2733 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a2aa545f-861c-404e-9a40-8ebd336d2136" containerName="cilium-agent"
Jul 2 00:15:41.156891 kubelet[2733]: E0702 00:15:41.156346 2733 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a2aa545f-861c-404e-9a40-8ebd336d2136" containerName="apply-sysctl-overwrites"
Jul 2 00:15:41.156891 kubelet[2733]: E0702 00:15:41.156355 2733 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a2aa545f-861c-404e-9a40-8ebd336d2136" containerName="mount-bpf-fs"
Jul 2 00:15:41.156891 kubelet[2733]: E0702 00:15:41.156364 2733 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a2aa545f-861c-404e-9a40-8ebd336d2136" containerName="clean-cilium-state"
Jul 2 00:15:41.156891 kubelet[2733]: I0702 00:15:41.156398 2733 memory_manager.go:346] "RemoveStaleState removing state" podUID="8aef32bc-e690-43c4-b76e-9e03c5399342" containerName="cilium-operator"
Jul 2 00:15:41.156891 kubelet[2733]: I0702 00:15:41.156408 2733 memory_manager.go:346] "RemoveStaleState removing state" podUID="a2aa545f-861c-404e-9a40-8ebd336d2136" containerName="cilium-agent"
Jul 2 00:15:41.161251 systemd[1]: Started sshd@27-10.0.0.45:22-10.0.0.1:48520.service - OpenSSH per-connection server daemon (10.0.0.1:48520).
Jul 2 00:15:41.162020 systemd[1]: sshd@26-10.0.0.45:22-10.0.0.1:48518.service: Deactivated successfully.
Jul 2 00:15:41.166217 systemd[1]: session-27.scope: Deactivated successfully.
Jul 2 00:15:41.172403 systemd-logind[1559]: Session 27 logged out. Waiting for processes to exit.
Jul 2 00:15:41.179506 systemd-logind[1559]: Removed session 27.
Jul 2 00:15:41.196381 kubelet[2733]: I0702 00:15:41.196340 2733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fef72ac0-c3a8-4c47-a976-eebb35f5ceff-bpf-maps\") pod \"cilium-qljrn\" (UID: \"fef72ac0-c3a8-4c47-a976-eebb35f5ceff\") " pod="kube-system/cilium-qljrn"
Jul 2 00:15:41.196732 kubelet[2733]: I0702 00:15:41.196612 2733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fef72ac0-c3a8-4c47-a976-eebb35f5ceff-host-proc-sys-net\") pod \"cilium-qljrn\" (UID: \"fef72ac0-c3a8-4c47-a976-eebb35f5ceff\") " pod="kube-system/cilium-qljrn"
Jul 2 00:15:41.196732 kubelet[2733]: I0702 00:15:41.196636 2733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fef72ac0-c3a8-4c47-a976-eebb35f5ceff-host-proc-sys-kernel\") pod \"cilium-qljrn\" (UID: \"fef72ac0-c3a8-4c47-a976-eebb35f5ceff\") " pod="kube-system/cilium-qljrn"
Jul 2 00:15:41.196732 kubelet[2733]: I0702 00:15:41.196689 2733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fef72ac0-c3a8-4c47-a976-eebb35f5ceff-cilium-ipsec-secrets\") pod \"cilium-qljrn\" (UID: \"fef72ac0-c3a8-4c47-a976-eebb35f5ceff\") " pod="kube-system/cilium-qljrn"
Jul 2 00:15:41.196732 kubelet[2733]: I0702 00:15:41.196707 2733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fef72ac0-c3a8-4c47-a976-eebb35f5ceff-cilium-run\") pod \"cilium-qljrn\" (UID: \"fef72ac0-c3a8-4c47-a976-eebb35f5ceff\") " pod="kube-system/cilium-qljrn"
Jul 2 00:15:41.197017 kubelet[2733]: I0702 00:15:41.196888 2733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fef72ac0-c3a8-4c47-a976-eebb35f5ceff-clustermesh-secrets\") pod \"cilium-qljrn\" (UID: \"fef72ac0-c3a8-4c47-a976-eebb35f5ceff\") " pod="kube-system/cilium-qljrn"
Jul 2 00:15:41.197017 kubelet[2733]: I0702 00:15:41.196910 2733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fef72ac0-c3a8-4c47-a976-eebb35f5ceff-lib-modules\") pod \"cilium-qljrn\" (UID: \"fef72ac0-c3a8-4c47-a976-eebb35f5ceff\") " pod="kube-system/cilium-qljrn"
Jul 2 00:15:41.197239 kubelet[2733]: I0702 00:15:41.197118 2733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fef72ac0-c3a8-4c47-a976-eebb35f5ceff-cilium-cgroup\") pod \"cilium-qljrn\" (UID: \"fef72ac0-c3a8-4c47-a976-eebb35f5ceff\") " pod="kube-system/cilium-qljrn"
Jul 2 00:15:41.197239 kubelet[2733]: I0702 00:15:41.197148 2733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fef72ac0-c3a8-4c47-a976-eebb35f5ceff-etc-cni-netd\") pod \"cilium-qljrn\" (UID: \"fef72ac0-c3a8-4c47-a976-eebb35f5ceff\") " pod="kube-system/cilium-qljrn"
Jul 2 00:15:41.197239 kubelet[2733]: I0702 00:15:41.197193 2733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fef72ac0-c3a8-4c47-a976-eebb35f5ceff-cilium-config-path\") pod \"cilium-qljrn\" (UID: \"fef72ac0-c3a8-4c47-a976-eebb35f5ceff\") " pod="kube-system/cilium-qljrn"
Jul 2 00:15:41.197239 kubelet[2733]: I0702 00:15:41.197210 2733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fef72ac0-c3a8-4c47-a976-eebb35f5ceff-hubble-tls\") pod \"cilium-qljrn\" (UID: \"fef72ac0-c3a8-4c47-a976-eebb35f5ceff\") " pod="kube-system/cilium-qljrn"
Jul 2 00:15:41.197512 kubelet[2733]: I0702 00:15:41.197226 2733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbtp7\" (UniqueName: \"kubernetes.io/projected/fef72ac0-c3a8-4c47-a976-eebb35f5ceff-kube-api-access-mbtp7\") pod \"cilium-qljrn\" (UID: \"fef72ac0-c3a8-4c47-a976-eebb35f5ceff\") " pod="kube-system/cilium-qljrn"
Jul 2 00:15:41.197512 kubelet[2733]: I0702 00:15:41.197460 2733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fef72ac0-c3a8-4c47-a976-eebb35f5ceff-hostproc\") pod \"cilium-qljrn\" (UID: \"fef72ac0-c3a8-4c47-a976-eebb35f5ceff\") " pod="kube-system/cilium-qljrn"
Jul 2 00:15:41.197512 kubelet[2733]: I0702 00:15:41.197479 2733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fef72ac0-c3a8-4c47-a976-eebb35f5ceff-cni-path\") pod \"cilium-qljrn\" (UID: \"fef72ac0-c3a8-4c47-a976-eebb35f5ceff\") " pod="kube-system/cilium-qljrn"
Jul 2 00:15:41.197512 kubelet[2733]: I0702 00:15:41.197496 2733 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fef72ac0-c3a8-4c47-a976-eebb35f5ceff-xtables-lock\") pod \"cilium-qljrn\" (UID: \"fef72ac0-c3a8-4c47-a976-eebb35f5ceff\") " pod="kube-system/cilium-qljrn"
Jul 2 00:15:41.214684 sshd[4571]: Accepted publickey for core from 10.0.0.1 port 48520 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:15:41.215921 sshd[4571]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:15:41.220992 systemd-logind[1559]: New session 28 of user core.
Jul 2 00:15:41.232225 systemd[1]: Started session-28.scope - Session 28 of User core.
Jul 2 00:15:41.285737 sshd[4571]: pam_unix(sshd:session): session closed for user core
Jul 2 00:15:41.296090 systemd[1]: Started sshd@28-10.0.0.45:22-10.0.0.1:48536.service - OpenSSH per-connection server daemon (10.0.0.1:48536).
Jul 2 00:15:41.296563 systemd[1]: sshd@27-10.0.0.45:22-10.0.0.1:48520.service: Deactivated successfully.
Jul 2 00:15:41.301243 systemd[1]: session-28.scope: Deactivated successfully.
Jul 2 00:15:41.301528 systemd-logind[1559]: Session 28 logged out. Waiting for processes to exit.
Jul 2 00:15:41.320779 systemd-logind[1559]: Removed session 28.
Jul 2 00:15:41.333117 sshd[4582]: Accepted publickey for core from 10.0.0.1 port 48536 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:15:41.335075 sshd[4582]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:15:41.339914 systemd-logind[1559]: New session 29 of user core.
Jul 2 00:15:41.348161 systemd[1]: Started session-29.scope - Session 29 of User core.
Jul 2 00:15:41.478164 kubelet[2733]: E0702 00:15:41.478097 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:15:41.478723 containerd[1581]: time="2024-07-02T00:15:41.478650079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qljrn,Uid:fef72ac0-c3a8-4c47-a976-eebb35f5ceff,Namespace:kube-system,Attempt:0,}"
Jul 2 00:15:41.683735 containerd[1581]: time="2024-07-02T00:15:41.683651863Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:15:41.683735 containerd[1581]: time="2024-07-02T00:15:41.683712206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:15:41.683906 containerd[1581]: time="2024-07-02T00:15:41.683744288Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:15:41.683906 containerd[1581]: time="2024-07-02T00:15:41.683766139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:15:41.719761 containerd[1581]: time="2024-07-02T00:15:41.719707246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qljrn,Uid:fef72ac0-c3a8-4c47-a976-eebb35f5ceff,Namespace:kube-system,Attempt:0,} returns sandbox id \"de6b269119aedd393adc1983acefb22714ac417caeef435f98c84269f2547fee\""
Jul 2 00:15:41.720361 kubelet[2733]: E0702 00:15:41.720342 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:15:41.722182 containerd[1581]: time="2024-07-02T00:15:41.722143890Z" level=info msg="CreateContainer within sandbox \"de6b269119aedd393adc1983acefb22714ac417caeef435f98c84269f2547fee\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 2 00:15:41.793965 containerd[1581]: time="2024-07-02T00:15:41.793800048Z" level=info msg="CreateContainer within sandbox \"de6b269119aedd393adc1983acefb22714ac417caeef435f98c84269f2547fee\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c79b81c1a4990bb0902e04f4a4d1c16671f45bde035a04907705d8989f945776\""
Jul 2 00:15:41.794411 containerd[1581]: time="2024-07-02T00:15:41.794385324Z" level=info msg="StartContainer for \"c79b81c1a4990bb0902e04f4a4d1c16671f45bde035a04907705d8989f945776\""
Jul 2 00:15:41.847712 containerd[1581]: time="2024-07-02T00:15:41.847649993Z" level=info msg="StartContainer for \"c79b81c1a4990bb0902e04f4a4d1c16671f45bde035a04907705d8989f945776\" returns successfully"
Jul 2 00:15:41.895931 kubelet[2733]: E0702 00:15:41.894379 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:15:41.902680 containerd[1581]: time="2024-07-02T00:15:41.902614203Z" level=info msg="shim disconnected" id=c79b81c1a4990bb0902e04f4a4d1c16671f45bde035a04907705d8989f945776 namespace=k8s.io
Jul 2 00:15:41.902680 containerd[1581]: time="2024-07-02T00:15:41.902677172Z" level=warning msg="cleaning up after shim disconnected" id=c79b81c1a4990bb0902e04f4a4d1c16671f45bde035a04907705d8989f945776 namespace=k8s.io
Jul 2 00:15:41.902680 containerd[1581]: time="2024-07-02T00:15:41.902686880Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:15:42.889800 kubelet[2733]: E0702 00:15:42.889763 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:15:42.891773 containerd[1581]: time="2024-07-02T00:15:42.891738098Z" level=info msg="CreateContainer within sandbox \"de6b269119aedd393adc1983acefb22714ac417caeef435f98c84269f2547fee\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 2 00:15:42.915214 containerd[1581]: time="2024-07-02T00:15:42.915144558Z" level=info msg="CreateContainer within sandbox \"de6b269119aedd393adc1983acefb22714ac417caeef435f98c84269f2547fee\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"392d930fa65c9b81fc09ace24363fd2d6da7ac542bc59af2f0f9b32c70660ef9\""
Jul 2 00:15:42.915936 containerd[1581]: time="2024-07-02T00:15:42.915890506Z" level=info msg="StartContainer for \"392d930fa65c9b81fc09ace24363fd2d6da7ac542bc59af2f0f9b32c70660ef9\""
Jul 2 00:15:42.975591 containerd[1581]: time="2024-07-02T00:15:42.975421742Z" level=info msg="StartContainer for \"392d930fa65c9b81fc09ace24363fd2d6da7ac542bc59af2f0f9b32c70660ef9\" returns successfully"
Jul 2 00:15:43.010344 containerd[1581]: time="2024-07-02T00:15:43.010260457Z" level=info msg="shim disconnected" id=392d930fa65c9b81fc09ace24363fd2d6da7ac542bc59af2f0f9b32c70660ef9 namespace=k8s.io
Jul 2 00:15:43.010344 containerd[1581]: time="2024-07-02T00:15:43.010320651Z" level=warning msg="cleaning up after shim disconnected" id=392d930fa65c9b81fc09ace24363fd2d6da7ac542bc59af2f0f9b32c70660ef9 namespace=k8s.io
Jul 2 00:15:43.010344 containerd[1581]: time="2024-07-02T00:15:43.010329668Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:15:43.308728 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-392d930fa65c9b81fc09ace24363fd2d6da7ac542bc59af2f0f9b32c70660ef9-rootfs.mount: Deactivated successfully.
Jul 2 00:15:43.322133 kubelet[2733]: E0702 00:15:43.322061 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:15:43.519483 kubelet[2733]: E0702 00:15:43.519392 2733 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 2 00:15:43.893906 kubelet[2733]: E0702 00:15:43.893876 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:15:43.896150 containerd[1581]: time="2024-07-02T00:15:43.896049352Z" level=info msg="CreateContainer within sandbox \"de6b269119aedd393adc1983acefb22714ac417caeef435f98c84269f2547fee\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 2 00:15:43.981179 containerd[1581]: time="2024-07-02T00:15:43.981099256Z" level=info msg="CreateContainer within sandbox \"de6b269119aedd393adc1983acefb22714ac417caeef435f98c84269f2547fee\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e00e2b07f97b7c03f1050c006018bfce93174a4e190bc6abe3e885273220e293\""
Jul 2 00:15:43.981773 containerd[1581]: time="2024-07-02T00:15:43.981743804Z" level=info msg="StartContainer for \"e00e2b07f97b7c03f1050c006018bfce93174a4e190bc6abe3e885273220e293\""
Jul 2 00:15:44.055626 containerd[1581]: time="2024-07-02T00:15:44.055563845Z" level=info msg="StartContainer for \"e00e2b07f97b7c03f1050c006018bfce93174a4e190bc6abe3e885273220e293\" returns successfully"
Jul 2 00:15:44.202880 containerd[1581]: time="2024-07-02T00:15:44.202758687Z" level=info msg="shim disconnected" id=e00e2b07f97b7c03f1050c006018bfce93174a4e190bc6abe3e885273220e293 namespace=k8s.io
Jul 2 00:15:44.202880 containerd[1581]: time="2024-07-02T00:15:44.202865399Z" level=warning msg="cleaning up after shim disconnected" id=e00e2b07f97b7c03f1050c006018bfce93174a4e190bc6abe3e885273220e293 namespace=k8s.io
Jul 2 00:15:44.202880 containerd[1581]: time="2024-07-02T00:15:44.202878213Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:15:44.309138 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e00e2b07f97b7c03f1050c006018bfce93174a4e190bc6abe3e885273220e293-rootfs.mount: Deactivated successfully.
Jul 2 00:15:44.897151 kubelet[2733]: E0702 00:15:44.897108 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:15:44.899596 containerd[1581]: time="2024-07-02T00:15:44.899563095Z" level=info msg="CreateContainer within sandbox \"de6b269119aedd393adc1983acefb22714ac417caeef435f98c84269f2547fee\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 2 00:15:44.956565 containerd[1581]: time="2024-07-02T00:15:44.956494884Z" level=info msg="CreateContainer within sandbox \"de6b269119aedd393adc1983acefb22714ac417caeef435f98c84269f2547fee\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6df1d62681a700ba6fb49459aeb9968471fe273e98a1a72ea7ab9649728e7bd3\""
Jul 2 00:15:44.958459 containerd[1581]: time="2024-07-02T00:15:44.958412737Z" level=info msg="StartContainer for \"6df1d62681a700ba6fb49459aeb9968471fe273e98a1a72ea7ab9649728e7bd3\""
Jul 2 00:15:45.050885 containerd[1581]: time="2024-07-02T00:15:45.050793128Z" level=info msg="StartContainer for \"6df1d62681a700ba6fb49459aeb9968471fe273e98a1a72ea7ab9649728e7bd3\" returns successfully"
Jul 2 00:15:45.075300 containerd[1581]: time="2024-07-02T00:15:45.075230949Z" level=info msg="shim disconnected" id=6df1d62681a700ba6fb49459aeb9968471fe273e98a1a72ea7ab9649728e7bd3 namespace=k8s.io
Jul 2 00:15:45.075300 containerd[1581]: time="2024-07-02T00:15:45.075296193Z" level=warning msg="cleaning up after shim disconnected" id=6df1d62681a700ba6fb49459aeb9968471fe273e98a1a72ea7ab9649728e7bd3 namespace=k8s.io
Jul 2 00:15:45.075300 containerd[1581]: time="2024-07-02T00:15:45.075307534Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:15:45.308664 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6df1d62681a700ba6fb49459aeb9968471fe273e98a1a72ea7ab9649728e7bd3-rootfs.mount: Deactivated successfully.
Jul 2 00:15:45.899897 kubelet[2733]: E0702 00:15:45.899871 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:15:45.902878 containerd[1581]: time="2024-07-02T00:15:45.901897297Z" level=info msg="CreateContainer within sandbox \"de6b269119aedd393adc1983acefb22714ac417caeef435f98c84269f2547fee\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 2 00:15:45.961510 containerd[1581]: time="2024-07-02T00:15:45.961450385Z" level=info msg="CreateContainer within sandbox \"de6b269119aedd393adc1983acefb22714ac417caeef435f98c84269f2547fee\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"57a711e07a63eb07849006690815d73904d0f7f8df56b1e2114089c4276ac8ee\""
Jul 2 00:15:45.963268 containerd[1581]: time="2024-07-02T00:15:45.962105722Z" level=info msg="StartContainer for \"57a711e07a63eb07849006690815d73904d0f7f8df56b1e2114089c4276ac8ee\""
Jul 2 00:15:46.051843 containerd[1581]: time="2024-07-02T00:15:46.051760657Z" level=info msg="StartContainer for \"57a711e07a63eb07849006690815d73904d0f7f8df56b1e2114089c4276ac8ee\" returns successfully"
Jul 2 00:15:46.524844 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jul 2 00:15:46.904492 kubelet[2733]: E0702 00:15:46.904387 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:15:46.928945 kubelet[2733]: I0702 00:15:46.928875 2733 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-qljrn" podStartSLOduration=5.92883952 podCreationTimestamp="2024-07-02 00:15:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:15:46.928362791 +0000 UTC m=+98.691721178" watchObservedRunningTime="2024-07-02 00:15:46.92883952 +0000 UTC m=+98.692197897"
Jul 2 00:15:47.906304 kubelet[2733]: E0702 00:15:47.906222 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:15:49.765129 systemd-networkd[1245]: lxc_health: Link UP
Jul 2 00:15:49.772941 systemd-networkd[1245]: lxc_health: Gained carrier
Jul 2 00:15:51.481266 kubelet[2733]: E0702 00:15:51.481223 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:15:51.565113 systemd-networkd[1245]: lxc_health: Gained IPv6LL
Jul 2 00:15:51.914193 kubelet[2733]: E0702 00:15:51.914171 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:15:52.217698 systemd[1]: run-containerd-runc-k8s.io-57a711e07a63eb07849006690815d73904d0f7f8df56b1e2114089c4276ac8ee-runc.FvXGsM.mount: Deactivated successfully.
Jul 2 00:15:53.322380 kubelet[2733]: E0702 00:15:53.322319 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:15:55.322048 kubelet[2733]: E0702 00:15:55.322010 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:15:56.522312 sshd[4582]: pam_unix(sshd:session): session closed for user core
Jul 2 00:15:56.526336 systemd[1]: sshd@28-10.0.0.45:22-10.0.0.1:48536.service: Deactivated successfully.
Jul 2 00:15:56.528735 systemd-logind[1559]: Session 29 logged out. Waiting for processes to exit.
Jul 2 00:15:56.528887 systemd[1]: session-29.scope: Deactivated successfully.
Jul 2 00:15:56.529802 systemd-logind[1559]: Removed session 29.