Jul 2 00:16:28.193552 kernel: Linux version 6.6.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Mon Jul 1 22:47:51 -00 2024
Jul 2 00:16:28.193581 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b
Jul 2 00:16:28.193596 kernel: BIOS-provided physical RAM map:
Jul 2 00:16:28.193604 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 2 00:16:28.193612 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 2 00:16:28.193620 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 2 00:16:28.193630 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdcfff] usable
Jul 2 00:16:28.193639 kernel: BIOS-e820: [mem 0x000000009cfdd000-0x000000009cffffff] reserved
Jul 2 00:16:28.193647 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 2 00:16:28.193658 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 2 00:16:28.193667 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 2 00:16:28.193675 kernel: NX (Execute Disable) protection: active
Jul 2 00:16:28.193683 kernel: APIC: Static calls initialized
Jul 2 00:16:28.193692 kernel: SMBIOS 2.8 present.
Jul 2 00:16:28.193702 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jul 2 00:16:28.193714 kernel: Hypervisor detected: KVM
Jul 2 00:16:28.193724 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 2 00:16:28.193733 kernel: kvm-clock: using sched offset of 2245309801 cycles
Jul 2 00:16:28.193742 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 2 00:16:28.193752 kernel: tsc: Detected 2794.746 MHz processor
Jul 2 00:16:28.193762 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 2 00:16:28.193772 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 2 00:16:28.193781 kernel: last_pfn = 0x9cfdd max_arch_pfn = 0x400000000
Jul 2 00:16:28.193791 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 2 00:16:28.193804 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 2 00:16:28.193814 kernel: Using GB pages for direct mapping
Jul 2 00:16:28.193824 kernel: ACPI: Early table checksum verification disabled
Jul 2 00:16:28.193833 kernel: ACPI: RSDP 0x00000000000F59C0 000014 (v00 BOCHS )
Jul 2 00:16:28.193843 kernel: ACPI: RSDT 0x000000009CFE1BDD 000034 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:16:28.193853 kernel: ACPI: FACP 0x000000009CFE1A79 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:16:28.193862 kernel: ACPI: DSDT 0x000000009CFE0040 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:16:28.193871 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jul 2 00:16:28.193880 kernel: ACPI: APIC 0x000000009CFE1AED 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:16:28.193893 kernel: ACPI: HPET 0x000000009CFE1B7D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:16:28.193903 kernel: ACPI: WAET 0x000000009CFE1BB5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:16:28.193912 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe1a79-0x9cfe1aec]
Jul 2 00:16:28.193921 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe1a78]
Jul 2 00:16:28.193931 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jul 2 00:16:28.193940 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe1aed-0x9cfe1b7c]
Jul 2 00:16:28.193951 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe1b7d-0x9cfe1bb4]
Jul 2 00:16:28.193965 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe1bb5-0x9cfe1bdc]
Jul 2 00:16:28.193977 kernel: No NUMA configuration found
Jul 2 00:16:28.193987 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdcfff]
Jul 2 00:16:28.193998 kernel: NODE_DATA(0) allocated [mem 0x9cfd7000-0x9cfdcfff]
Jul 2 00:16:28.194008 kernel: Zone ranges:
Jul 2 00:16:28.194018 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 2 00:16:28.194028 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdcfff]
Jul 2 00:16:28.194041 kernel: Normal empty
Jul 2 00:16:28.194054 kernel: Movable zone start for each node
Jul 2 00:16:28.194065 kernel: Early memory node ranges
Jul 2 00:16:28.194076 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 2 00:16:28.194100 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdcfff]
Jul 2 00:16:28.194111 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdcfff]
Jul 2 00:16:28.194122 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 2 00:16:28.194132 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 2 00:16:28.194142 kernel: On node 0, zone DMA32: 12323 pages in unavailable ranges
Jul 2 00:16:28.194152 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 2 00:16:28.194167 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 2 00:16:28.194177 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 2 00:16:28.194187 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 2 00:16:28.194198 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 2 00:16:28.194208 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 2 00:16:28.194218 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 2 00:16:28.194228 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 2 00:16:28.194238 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 2 00:16:28.194248 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 2 00:16:28.194261 kernel: TSC deadline timer available
Jul 2 00:16:28.194272 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jul 2 00:16:28.194282 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 2 00:16:28.194305 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 2 00:16:28.194316 kernel: kvm-guest: setup PV sched yield
Jul 2 00:16:28.194326 kernel: [mem 0x9d000000-0xfeffbfff] available for PCI devices
Jul 2 00:16:28.194336 kernel: Booting paravirtualized kernel on KVM
Jul 2 00:16:28.194347 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 2 00:16:28.194357 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 2 00:16:28.194371 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u524288
Jul 2 00:16:28.194381 kernel: pcpu-alloc: s196904 r8192 d32472 u524288 alloc=1*2097152
Jul 2 00:16:28.194392 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 2 00:16:28.194401 kernel: kvm-guest: PV spinlocks enabled
Jul 2 00:16:28.194412 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 2 00:16:28.194423 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b
Jul 2 00:16:28.194434 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 00:16:28.194444 kernel: random: crng init done
Jul 2 00:16:28.194458 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 2 00:16:28.194468 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 2 00:16:28.194478 kernel: Fallback order for Node 0: 0
Jul 2 00:16:28.194488 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632733
Jul 2 00:16:28.194498 kernel: Policy zone: DMA32
Jul 2 00:16:28.194508 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 00:16:28.194518 kernel: Memory: 2428452K/2571756K available (12288K kernel code, 2303K rwdata, 22640K rodata, 49328K init, 2016K bss, 143044K reserved, 0K cma-reserved)
Jul 2 00:16:28.194529 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 2 00:16:28.194540 kernel: ftrace: allocating 37658 entries in 148 pages
Jul 2 00:16:28.194554 kernel: ftrace: allocated 148 pages with 3 groups
Jul 2 00:16:28.194564 kernel: Dynamic Preempt: voluntary
Jul 2 00:16:28.194575 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 2 00:16:28.194587 kernel: rcu: RCU event tracing is enabled.
Jul 2 00:16:28.194598 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 2 00:16:28.194609 kernel: Trampoline variant of Tasks RCU enabled.
Jul 2 00:16:28.194620 kernel: Rude variant of Tasks RCU enabled.
Jul 2 00:16:28.194631 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 00:16:28.194642 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 00:16:28.194656 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 2 00:16:28.194666 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 2 00:16:28.194677 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 2 00:16:28.194688 kernel: Console: colour VGA+ 80x25
Jul 2 00:16:28.194698 kernel: printk: console [ttyS0] enabled
Jul 2 00:16:28.194709 kernel: ACPI: Core revision 20230628
Jul 2 00:16:28.194720 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 2 00:16:28.194731 kernel: APIC: Switch to symmetric I/O mode setup
Jul 2 00:16:28.194742 kernel: x2apic enabled
Jul 2 00:16:28.194753 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 2 00:16:28.194766 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 2 00:16:28.194777 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 2 00:16:28.194788 kernel: kvm-guest: setup PV IPIs
Jul 2 00:16:28.194799 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 2 00:16:28.194810 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 2 00:16:28.194821 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794746)
Jul 2 00:16:28.194832 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 2 00:16:28.194856 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 2 00:16:28.194867 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 2 00:16:28.194879 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 2 00:16:28.194892 kernel: Spectre V2 : Mitigation: Retpolines
Jul 2 00:16:28.194910 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jul 2 00:16:28.194924 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jul 2 00:16:28.194938 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 2 00:16:28.194952 kernel: RETBleed: Mitigation: untrained return thunk
Jul 2 00:16:28.194967 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 2 00:16:28.194984 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 2 00:16:28.194998 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 2 00:16:28.195010 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 2 00:16:28.195022 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 2 00:16:28.195033 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 2 00:16:28.195044 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 2 00:16:28.195055 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 2 00:16:28.195067 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 2 00:16:28.195090 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 2 00:16:28.195102 kernel: Freeing SMP alternatives memory: 32K
Jul 2 00:16:28.195113 kernel: pid_max: default: 32768 minimum: 301
Jul 2 00:16:28.195125 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Jul 2 00:16:28.195136 kernel: SELinux: Initializing.
Jul 2 00:16:28.195147 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 00:16:28.195158 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 00:16:28.195170 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 2 00:16:28.195182 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:16:28.195196 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:16:28.195207 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:16:28.195219 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 2 00:16:28.195230 kernel: ... version: 0
Jul 2 00:16:28.195241 kernel: ... bit width: 48
Jul 2 00:16:28.195252 kernel: ... generic registers: 6
Jul 2 00:16:28.195263 kernel: ... value mask: 0000ffffffffffff
Jul 2 00:16:28.195274 kernel: ... max period: 00007fffffffffff
Jul 2 00:16:28.195286 kernel: ... fixed-purpose events: 0
Jul 2 00:16:28.195313 kernel: ... event mask: 000000000000003f
Jul 2 00:16:28.195324 kernel: signal: max sigframe size: 1776
Jul 2 00:16:28.195336 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 00:16:28.195348 kernel: rcu: Max phase no-delay instances is 400.
Jul 2 00:16:28.195359 kernel: smp: Bringing up secondary CPUs ...
Jul 2 00:16:28.195370 kernel: smpboot: x86: Booting SMP configuration:
Jul 2 00:16:28.195382 kernel: .... node #0, CPUs: #1 #2 #3
Jul 2 00:16:28.195393 kernel: smp: Brought up 1 node, 4 CPUs
Jul 2 00:16:28.195404 kernel: smpboot: Max logical packages: 1
Jul 2 00:16:28.195418 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS)
Jul 2 00:16:28.195430 kernel: devtmpfs: initialized
Jul 2 00:16:28.195441 kernel: x86/mm: Memory block size: 128MB
Jul 2 00:16:28.195453 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 00:16:28.195464 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 2 00:16:28.195476 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 00:16:28.195487 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 00:16:28.195498 kernel: audit: initializing netlink subsys (disabled)
Jul 2 00:16:28.195510 kernel: audit: type=2000 audit(1719879386.510:1): state=initialized audit_enabled=0 res=1
Jul 2 00:16:28.195523 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 00:16:28.195535 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 2 00:16:28.195546 kernel: cpuidle: using governor menu
Jul 2 00:16:28.195558 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 00:16:28.195569 kernel: dca service started, version 1.12.1
Jul 2 00:16:28.195580 kernel: PCI: Using configuration type 1 for base access
Jul 2 00:16:28.195592 kernel: PCI: Using configuration type 1 for extended access
Jul 2 00:16:28.195604 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 2 00:16:28.195615 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 2 00:16:28.195629 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 2 00:16:28.195641 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 00:16:28.195652 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 2 00:16:28.195663 kernel: ACPI: Added _OSI(Module Device)
Jul 2 00:16:28.195674 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 00:16:28.195686 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 00:16:28.195697 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 00:16:28.195708 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 2 00:16:28.195720 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 2 00:16:28.195734 kernel: ACPI: Interpreter enabled
Jul 2 00:16:28.195745 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 2 00:16:28.195757 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 2 00:16:28.195768 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 2 00:16:28.195779 kernel: PCI: Using E820 reservations for host bridge windows
Jul 2 00:16:28.195790 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jul 2 00:16:28.195801 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 2 00:16:28.196044 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 2 00:16:28.196070 kernel: acpiphp: Slot [3] registered
Jul 2 00:16:28.196096 kernel: acpiphp: Slot [4] registered
Jul 2 00:16:28.196110 kernel: acpiphp: Slot [5] registered
Jul 2 00:16:28.196125 kernel: acpiphp: Slot [6] registered
Jul 2 00:16:28.196139 kernel: acpiphp: Slot [7] registered
Jul 2 00:16:28.196153 kernel: acpiphp: Slot [8] registered
Jul 2 00:16:28.196165 kernel: acpiphp: Slot [9] registered
Jul 2 00:16:28.196177 kernel: acpiphp: Slot [10] registered
Jul 2 00:16:28.196189 kernel: acpiphp: Slot [11] registered
Jul 2 00:16:28.196200 kernel: acpiphp: Slot [12] registered
Jul 2 00:16:28.196214 kernel: acpiphp: Slot [13] registered
Jul 2 00:16:28.196226 kernel: acpiphp: Slot [14] registered
Jul 2 00:16:28.196236 kernel: acpiphp: Slot [15] registered
Jul 2 00:16:28.196248 kernel: acpiphp: Slot [16] registered
Jul 2 00:16:28.196259 kernel: acpiphp: Slot [17] registered
Jul 2 00:16:28.196270 kernel: acpiphp: Slot [18] registered
Jul 2 00:16:28.196281 kernel: acpiphp: Slot [19] registered
Jul 2 00:16:28.196306 kernel: acpiphp: Slot [20] registered
Jul 2 00:16:28.196318 kernel: acpiphp: Slot [21] registered
Jul 2 00:16:28.196333 kernel: acpiphp: Slot [22] registered
Jul 2 00:16:28.196344 kernel: acpiphp: Slot [23] registered
Jul 2 00:16:28.196355 kernel: acpiphp: Slot [24] registered
Jul 2 00:16:28.196366 kernel: acpiphp: Slot [25] registered
Jul 2 00:16:28.196377 kernel: acpiphp: Slot [26] registered
Jul 2 00:16:28.196388 kernel: acpiphp: Slot [27] registered
Jul 2 00:16:28.196399 kernel: acpiphp: Slot [28] registered
Jul 2 00:16:28.196411 kernel: acpiphp: Slot [29] registered
Jul 2 00:16:28.196422 kernel: acpiphp: Slot [30] registered
Jul 2 00:16:28.196433 kernel: acpiphp: Slot [31] registered
Jul 2 00:16:28.196447 kernel: PCI host bridge to bus 0000:00
Jul 2 00:16:28.196618 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 2 00:16:28.196764 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 2 00:16:28.196908 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 2 00:16:28.197051 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window]
Jul 2 00:16:28.197207 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jul 2 00:16:28.197369 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 2 00:16:28.197552 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jul 2 00:16:28.197728 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jul 2 00:16:28.197897 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jul 2 00:16:28.198058 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf]
Jul 2 00:16:28.198226 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jul 2 00:16:28.198401 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jul 2 00:16:28.198565 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jul 2 00:16:28.198722 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jul 2 00:16:28.198888 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jul 2 00:16:28.199070 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jul 2 00:16:28.199261 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jul 2 00:16:28.199478 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000
Jul 2 00:16:28.199640 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jul 2 00:16:28.199805 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jul 2 00:16:28.199965 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jul 2 00:16:28.200132 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 2 00:16:28.200308 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00
Jul 2 00:16:28.200457 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc09f]
Jul 2 00:16:28.200600 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jul 2 00:16:28.200749 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jul 2 00:16:28.200910 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jul 2 00:16:28.201072 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jul 2 00:16:28.201224 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jul 2 00:16:28.201383 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jul 2 00:16:28.201535 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Jul 2 00:16:28.201676 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0a0-0xc0bf]
Jul 2 00:16:28.201821 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jul 2 00:16:28.201968 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jul 2 00:16:28.202163 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jul 2 00:16:28.202180 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 2 00:16:28.202192 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 2 00:16:28.202204 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 2 00:16:28.202216 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 2 00:16:28.202227 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jul 2 00:16:28.202239 kernel: iommu: Default domain type: Translated
Jul 2 00:16:28.202255 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 2 00:16:28.202267 kernel: PCI: Using ACPI for IRQ routing
Jul 2 00:16:28.202279 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 2 00:16:28.202290 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 2 00:16:28.202349 kernel: e820: reserve RAM buffer [mem 0x9cfdd000-0x9fffffff]
Jul 2 00:16:28.202511 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jul 2 00:16:28.202673 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jul 2 00:16:28.202838 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 2 00:16:28.202859 kernel: vgaarb: loaded
Jul 2 00:16:28.202871 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 2 00:16:28.202883 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 2 00:16:28.202895 kernel: clocksource: Switched to clocksource kvm-clock
Jul 2 00:16:28.202907 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 00:16:28.202919 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 00:16:28.202930 kernel: pnp: PnP ACPI init
Jul 2 00:16:28.203139 kernel: pnp 00:02: [dma 2]
Jul 2 00:16:28.203157 kernel: pnp: PnP ACPI: found 6 devices
Jul 2 00:16:28.203174 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 2 00:16:28.203185 kernel: NET: Registered PF_INET protocol family
Jul 2 00:16:28.203197 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 2 00:16:28.203209 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 2 00:16:28.203220 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 00:16:28.203232 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 2 00:16:28.203243 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 2 00:16:28.203255 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 2 00:16:28.203269 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 00:16:28.203281 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 00:16:28.203306 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 00:16:28.203318 kernel: NET: Registered PF_XDP protocol family
Jul 2 00:16:28.203468 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 2 00:16:28.203611 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 2 00:16:28.203753 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 2 00:16:28.203898 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window]
Jul 2 00:16:28.204062 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jul 2 00:16:28.204239 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jul 2 00:16:28.204422 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jul 2 00:16:28.204439 kernel: PCI: CLS 0 bytes, default 64
Jul 2 00:16:28.204451 kernel: Initialise system trusted keyrings
Jul 2 00:16:28.204463 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 2 00:16:28.204474 kernel: Key type asymmetric registered
Jul 2 00:16:28.204485 kernel: Asymmetric key parser 'x509' registered
Jul 2 00:16:28.204497 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 2 00:16:28.204513 kernel: io scheduler mq-deadline registered
Jul 2 00:16:28.204525 kernel: io scheduler kyber registered
Jul 2 00:16:28.204536 kernel: io scheduler bfq registered
Jul 2 00:16:28.204546 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 2 00:16:28.204559 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jul 2 00:16:28.204570 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Jul 2 00:16:28.204582 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jul 2 00:16:28.204593 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 00:16:28.204605 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 2 00:16:28.204620 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 2 00:16:28.204631 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 2 00:16:28.204643 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 2 00:16:28.204804 kernel: rtc_cmos 00:05: RTC can wake from S4
Jul 2 00:16:28.204952 kernel: rtc_cmos 00:05: registered as rtc0
Jul 2 00:16:28.205148 kernel: rtc_cmos 00:05: setting system clock to 2024-07-02T00:16:27 UTC (1719879387)
Jul 2 00:16:28.205316 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 2 00:16:28.205333 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 2 00:16:28.205350 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 2 00:16:28.205361 kernel: NET: Registered PF_INET6 protocol family
Jul 2 00:16:28.205372 kernel: Segment Routing with IPv6
Jul 2 00:16:28.205382 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 00:16:28.205393 kernel: NET: Registered PF_PACKET protocol family
Jul 2 00:16:28.205403 kernel: Key type dns_resolver registered
Jul 2 00:16:28.205414 kernel: IPI shorthand broadcast: enabled
Jul 2 00:16:28.205424 kernel: sched_clock: Marking stable (921002503, 107605562)->(1120775003, -92166938)
Jul 2 00:16:28.205435 kernel: registered taskstats version 1
Jul 2 00:16:28.205449 kernel: Loading compiled-in X.509 certificates
Jul 2 00:16:28.205460 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.36-flatcar: be1ede902d88b56c26cc000ff22391c78349d771'
Jul 2 00:16:28.205470 kernel: Key type .fscrypt registered
Jul 2 00:16:28.205480 kernel: Key type fscrypt-provisioning registered
Jul 2 00:16:28.205490 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 2 00:16:28.205501 kernel: ima: Allocated hash algorithm: sha1
Jul 2 00:16:28.205512 kernel: ima: No architecture policies found
Jul 2 00:16:28.205523 kernel: clk: Disabling unused clocks
Jul 2 00:16:28.205537 kernel: Freeing unused kernel image (initmem) memory: 49328K
Jul 2 00:16:28.205553 kernel: Write protecting the kernel read-only data: 36864k
Jul 2 00:16:28.205564 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K
Jul 2 00:16:28.205575 kernel: Run /init as init process
Jul 2 00:16:28.205586 kernel: with arguments:
Jul 2 00:16:28.205597 kernel: /init
Jul 2 00:16:28.205608 kernel: with environment:
Jul 2 00:16:28.205620 kernel: HOME=/
Jul 2 00:16:28.205651 kernel: TERM=linux
Jul 2 00:16:28.205666 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 00:16:28.205684 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 00:16:28.205699 systemd[1]: Detected virtualization kvm.
Jul 2 00:16:28.205711 systemd[1]: Detected architecture x86-64.
Jul 2 00:16:28.205723 systemd[1]: Running in initrd.
Jul 2 00:16:28.205735 systemd[1]: No hostname configured, using default hostname.
Jul 2 00:16:28.205748 systemd[1]: Hostname set to .
Jul 2 00:16:28.205763 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 00:16:28.205775 systemd[1]: Queued start job for default target initrd.target.
Jul 2 00:16:28.205788 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:16:28.205800 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:16:28.205813 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 2 00:16:28.205827 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 00:16:28.205839 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 2 00:16:28.205852 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 2 00:16:28.205870 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 2 00:16:28.205883 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 2 00:16:28.205895 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:16:28.205908 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:16:28.205920 systemd[1]: Reached target paths.target - Path Units.
Jul 2 00:16:28.205933 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 00:16:28.205945 systemd[1]: Reached target swap.target - Swaps.
Jul 2 00:16:28.205957 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 00:16:28.205973 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 00:16:28.205985 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 00:16:28.205998 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 2 00:16:28.206010 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 2 00:16:28.206023 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:16:28.206036 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:16:28.206048 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:16:28.206061 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 00:16:28.206077 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 2 00:16:28.206099 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 00:16:28.206111 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 2 00:16:28.206124 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 00:16:28.206136 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 00:16:28.206149 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 00:16:28.206192 systemd-journald[193]: Collecting audit messages is disabled.
Jul 2 00:16:28.206221 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:16:28.206234 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 2 00:16:28.206250 systemd-journald[193]: Journal started
Jul 2 00:16:28.206275 systemd-journald[193]: Runtime Journal (/run/log/journal/b9d90d56e547495fb127b674b1cc858c) is 6.0M, max 48.4M, 42.3M free.
Jul 2 00:16:28.208730 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 00:16:28.209316 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:16:28.215536 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 00:16:28.250914 systemd-modules-load[194]: Inserted module 'overlay'
Jul 2 00:16:28.271558 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 2 00:16:28.310393 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 00:16:28.313857 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:16:28.322113 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 00:16:28.329774 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:16:28.340674 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 00:16:28.357064 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:16:28.380552 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 2 00:16:28.383730 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:16:28.406634 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 2 00:16:28.408067 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:16:28.416408 dracut-cmdline[219]: dracut-dracut-053
Jul 2 00:16:28.416408 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b
Jul 2 00:16:28.431121 kernel: Bridge firewalling registered
Jul 2 00:16:28.433264 systemd-modules-load[194]: Inserted module 'br_netfilter'
Jul 2 00:16:28.434984 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:16:28.447498 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 00:16:28.464062 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:16:28.473509 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 00:16:28.532109 systemd-resolved[258]: Positive Trust Anchors:
Jul 2 00:16:28.538135 systemd-resolved[258]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 00:16:28.539908 systemd-resolved[258]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 00:16:28.545417 systemd-resolved[258]: Defaulting to hostname 'linux'.
Jul 2 00:16:28.546807 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 00:16:28.566754 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:16:28.655350 kernel: SCSI subsystem initialized
Jul 2 00:16:28.673683 kernel: Loading iSCSI transport class v2.0-870.
Jul 2 00:16:28.725876 kernel: iscsi: registered transport (tcp)
Jul 2 00:16:28.776113 kernel: iscsi: registered transport (qla4xxx)
Jul 2 00:16:28.776186 kernel: QLogic iSCSI HBA Driver
Jul 2 00:16:28.845100 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 2 00:16:28.853468 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 2 00:16:28.883769 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 2 00:16:28.883852 kernel: device-mapper: uevent: version 1.0.3
Jul 2 00:16:28.884917 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 2 00:16:28.932355 kernel: raid6: avx2x4 gen() 29091 MB/s
Jul 2 00:16:28.949337 kernel: raid6: avx2x2 gen() 29110 MB/s
Jul 2 00:16:28.966661 kernel: raid6: avx2x1 gen() 23333 MB/s
Jul 2 00:16:28.966701 kernel: raid6: using algorithm avx2x2 gen() 29110 MB/s
Jul 2 00:16:28.984525 kernel: raid6: .... xor() 18177 MB/s, rmw enabled
Jul 2 00:16:28.984617 kernel: raid6: using avx2x2 recovery algorithm
Jul 2 00:16:29.011344 kernel: xor: automatically using best checksumming function avx
Jul 2 00:16:29.192343 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 2 00:16:29.206585 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 00:16:29.220533 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:16:29.233074 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Jul 2 00:16:29.237200 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:16:29.238848 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 2 00:16:29.258541 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation
Jul 2 00:16:29.293076 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 00:16:29.322579 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 00:16:29.399612 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:16:29.409513 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 2 00:16:29.431155 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 2 00:16:29.456222 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jul 2 00:16:29.504351 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 2 00:16:29.504513 kernel: cryptd: max_cpu_qlen set to 1000
Jul 2 00:16:29.504524 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 2 00:16:29.504536 kernel: GPT:9289727 != 19775487
Jul 2 00:16:29.504546 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 2 00:16:29.504559 kernel: GPT:9289727 != 19775487
Jul 2 00:16:29.504570 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 2 00:16:29.504585 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:16:29.450428 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 00:16:29.457063 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:16:29.481777 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 00:16:29.509364 kernel: libata version 3.00 loaded.
Jul 2 00:16:29.490606 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 2 00:16:29.508271 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 00:16:29.508396 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:16:29.513939 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:16:29.518921 kernel: ata_piix 0000:00:01.1: version 2.13
Jul 2 00:16:29.536949 kernel: scsi host0: ata_piix
Jul 2 00:16:29.537145 kernel: scsi host1: ata_piix
Jul 2 00:16:29.537310 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14
Jul 2 00:16:29.537323 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15
Jul 2 00:16:29.515117 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:16:29.516634 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:16:29.519222 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:16:29.538808 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:16:29.565543 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 2 00:16:29.545605 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 00:16:29.567324 kernel: AES CTR mode by8 optimization enabled
Jul 2 00:16:29.572570 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (478)
Jul 2 00:16:29.578319 kernel: BTRFS: device fsid 2fd636b8-f582-46f8-bde2-15e56e3958c1 devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (467)
Jul 2 00:16:29.589597 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 2 00:16:29.629042 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:16:29.635875 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 2 00:16:29.645884 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 2 00:16:29.651962 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 2 00:16:29.653500 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 2 00:16:29.670452 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 2 00:16:29.690925 kernel: ata2: found unknown device (class 0)
Jul 2 00:16:29.690391 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:16:29.704168 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jul 2 00:16:29.708413 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 2 00:16:29.724667 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:16:29.795462 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jul 2 00:16:29.808294 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 2 00:16:29.808330 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Jul 2 00:16:29.923849 disk-uuid[544]: Primary Header is updated.
Jul 2 00:16:29.923849 disk-uuid[544]: Secondary Entries is updated.
Jul 2 00:16:29.923849 disk-uuid[544]: Secondary Header is updated.
Jul 2 00:16:29.946343 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:16:29.950318 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:16:30.958323 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:16:30.958579 disk-uuid[568]: The operation has completed successfully.
Jul 2 00:16:30.987624 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 2 00:16:30.987805 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 2 00:16:31.018505 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 2 00:16:31.024142 sh[581]: Success
Jul 2 00:16:31.040335 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jul 2 00:16:31.077692 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 2 00:16:31.109167 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 2 00:16:31.112998 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 2 00:16:31.125396 kernel: BTRFS info (device dm-0): first mount of filesystem 2fd636b8-f582-46f8-bde2-15e56e3958c1
Jul 2 00:16:31.125446 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:16:31.125457 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 2 00:16:31.127858 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 2 00:16:31.127873 kernel: BTRFS info (device dm-0): using free space tree
Jul 2 00:16:31.132519 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 2 00:16:31.134388 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 2 00:16:31.146625 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 2 00:16:31.148930 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 2 00:16:31.159470 kernel: BTRFS info (device vda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:16:31.159528 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:16:31.159539 kernel: BTRFS info (device vda6): using free space tree
Jul 2 00:16:31.162321 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 00:16:31.172334 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 2 00:16:31.174100 kernel: BTRFS info (device vda6): last unmount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:16:31.264423 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 00:16:31.274415 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 00:16:31.292580 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 2 00:16:31.295918 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 2 00:16:31.313549 systemd-networkd[759]: lo: Link UP
Jul 2 00:16:31.313560 systemd-networkd[759]: lo: Gained carrier
Jul 2 00:16:31.315203 systemd-networkd[759]: Enumeration completed
Jul 2 00:16:31.315609 systemd-networkd[759]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:16:31.315613 systemd-networkd[759]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 00:16:31.319508 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 00:16:31.320537 systemd-networkd[759]: eth0: Link UP
Jul 2 00:16:31.320541 systemd-networkd[759]: eth0: Gained carrier
Jul 2 00:16:31.320549 systemd-networkd[759]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:16:31.325401 systemd[1]: Reached target network.target - Network.
Jul 2 00:16:31.342208 systemd-networkd[759]: eth0: DHCPv4 address 10.0.0.74/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 2 00:16:31.400237 ignition[762]: Ignition 2.18.0
Jul 2 00:16:31.400249 ignition[762]: Stage: fetch-offline
Jul 2 00:16:31.400312 ignition[762]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:16:31.400325 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:16:31.400557 ignition[762]: parsed url from cmdline: ""
Jul 2 00:16:31.400561 ignition[762]: no config URL provided
Jul 2 00:16:31.400567 ignition[762]: reading system config file "/usr/lib/ignition/user.ign"
Jul 2 00:16:31.400579 ignition[762]: no config at "/usr/lib/ignition/user.ign"
Jul 2 00:16:31.400607 ignition[762]: op(1): [started] loading QEMU firmware config module
Jul 2 00:16:31.400613 ignition[762]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 2 00:16:31.415852 ignition[762]: op(1): [finished] loading QEMU firmware config module
Jul 2 00:16:31.460053 ignition[762]: parsing config with SHA512: 1cecf4ad998d3774a185f5694c8f051ba6af43bc3de7e15dfce43e8176f916327c2685c236d0ac82246bc448db7f5d0c903277d68db8da22f2d688d02a702f94
Jul 2 00:16:31.465609 unknown[762]: fetched base config from "system"
Jul 2 00:16:31.465626 unknown[762]: fetched user config from "qemu"
Jul 2 00:16:31.471887 ignition[762]: fetch-offline: fetch-offline passed
Jul 2 00:16:31.473863 ignition[762]: Ignition finished successfully
Jul 2 00:16:31.477630 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 00:16:31.477946 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 2 00:16:31.484685 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 2 00:16:31.500815 ignition[778]: Ignition 2.18.0
Jul 2 00:16:31.500827 ignition[778]: Stage: kargs
Jul 2 00:16:31.500991 ignition[778]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:16:31.501013 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:16:31.501862 ignition[778]: kargs: kargs passed
Jul 2 00:16:31.501909 ignition[778]: Ignition finished successfully
Jul 2 00:16:31.508557 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 2 00:16:31.527486 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 2 00:16:31.539910 ignition[787]: Ignition 2.18.0
Jul 2 00:16:31.539921 ignition[787]: Stage: disks
Jul 2 00:16:31.540080 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:16:31.540091 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:16:31.540885 ignition[787]: disks: disks passed
Jul 2 00:16:31.540929 ignition[787]: Ignition finished successfully
Jul 2 00:16:31.546678 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 2 00:16:31.550348 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 2 00:16:31.553521 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 2 00:16:31.557035 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 00:16:31.559843 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 00:16:31.562739 systemd[1]: Reached target basic.target - Basic System.
Jul 2 00:16:31.578517 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 2 00:16:31.595691 systemd-fsck[798]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 2 00:16:31.603261 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 2 00:16:31.610392 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 2 00:16:31.724330 kernel: EXT4-fs (vda9): mounted filesystem c5a17c06-b440-4aab-a0fa-5b60bb1d8586 r/w with ordered data mode. Quota mode: none.
Jul 2 00:16:31.724787 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 2 00:16:31.726980 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 2 00:16:31.745391 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 00:16:31.748064 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 2 00:16:31.750751 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 2 00:16:31.750797 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 2 00:16:31.760134 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (806)
Jul 2 00:16:31.760159 kernel: BTRFS info (device vda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:16:31.760170 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:16:31.760187 kernel: BTRFS info (device vda6): using free space tree
Jul 2 00:16:31.750821 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 00:16:31.762369 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 00:16:31.763560 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 2 00:16:31.765530 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 00:16:31.769317 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 2 00:16:31.818110 initrd-setup-root[830]: cut: /sysroot/etc/passwd: No such file or directory
Jul 2 00:16:31.823435 initrd-setup-root[837]: cut: /sysroot/etc/group: No such file or directory
Jul 2 00:16:31.827545 initrd-setup-root[844]: cut: /sysroot/etc/shadow: No such file or directory
Jul 2 00:16:31.832692 initrd-setup-root[851]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 2 00:16:31.922498 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 2 00:16:31.937486 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 2 00:16:31.941398 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 2 00:16:31.946334 kernel: BTRFS info (device vda6): last unmount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:16:31.969201 ignition[919]: INFO : Ignition 2.18.0
Jul 2 00:16:31.969201 ignition[919]: INFO : Stage: mount
Jul 2 00:16:31.969188 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 2 00:16:31.973180 ignition[919]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:16:31.973180 ignition[919]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:16:31.973180 ignition[919]: INFO : mount: mount passed
Jul 2 00:16:31.973180 ignition[919]: INFO : Ignition finished successfully
Jul 2 00:16:31.977466 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 2 00:16:31.992543 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 2 00:16:32.124615 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 2 00:16:32.141508 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 00:16:32.149315 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (934)
Jul 2 00:16:32.149341 kernel: BTRFS info (device vda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:16:32.150664 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:16:32.150677 kernel: BTRFS info (device vda6): using free space tree
Jul 2 00:16:32.154331 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 00:16:32.156155 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 00:16:32.180602 ignition[951]: INFO : Ignition 2.18.0
Jul 2 00:16:32.180602 ignition[951]: INFO : Stage: files
Jul 2 00:16:32.182648 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:16:32.182648 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:16:32.185804 ignition[951]: DEBUG : files: compiled without relabeling support, skipping
Jul 2 00:16:32.187790 ignition[951]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 2 00:16:32.187790 ignition[951]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 2 00:16:32.193392 ignition[951]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 2 00:16:32.194893 ignition[951]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 2 00:16:32.196782 unknown[951]: wrote ssh authorized keys file for user: core
Jul 2 00:16:32.197942 ignition[951]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 2 00:16:32.200512 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 00:16:32.202755 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 2 00:16:32.259419 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 2 00:16:32.317513 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 00:16:32.317513 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 00:16:32.322094 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 2 00:16:32.756371 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 2 00:16:32.844539 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 00:16:32.846808 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 2 00:16:32.846808 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 2 00:16:32.846808 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 00:16:32.846808 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 00:16:32.846808 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 00:16:32.846808 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 00:16:32.846808 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 00:16:32.846808 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 00:16:32.846808 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 00:16:32.846808 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 00:16:32.846808 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jul 2 00:16:32.846808 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jul 2 00:16:32.846808 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jul 2 00:16:32.846808 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Jul 2 00:16:33.149335 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 2 00:16:33.263936 systemd-networkd[759]: eth0: Gained IPv6LL
Jul 2 00:16:33.533580 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jul 2 00:16:33.533580 ignition[951]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 2 00:16:33.538187 ignition[951]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 00:16:33.538187 ignition[951]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 00:16:33.538187 ignition[951]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 2 00:16:33.538187 ignition[951]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 2 00:16:33.538187 ignition[951]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 2 00:16:33.538187 ignition[951]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 2 00:16:33.538187 ignition[951]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 2 00:16:33.538187 ignition[951]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jul 2 00:16:33.565372 ignition[951]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 2 00:16:33.570085 ignition[951]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 2 00:16:33.571718 ignition[951]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 2 00:16:33.571718 ignition[951]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jul 2 00:16:33.571718 ignition[951]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jul 2 00:16:33.571718 ignition[951]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 00:16:33.571718 ignition[951]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 00:16:33.571718 ignition[951]: INFO : files: files passed
Jul 2 00:16:33.571718 ignition[951]: INFO : Ignition finished successfully
Jul 2 00:16:33.574040 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 2 00:16:33.587543 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 2 00:16:33.590402 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 2 00:16:33.592412 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 2 00:16:33.592549 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 2 00:16:33.602014 initrd-setup-root-after-ignition[980]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 2 00:16:33.604796 initrd-setup-root-after-ignition[982]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:16:33.604796 initrd-setup-root-after-ignition[982]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:16:33.608072 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:16:33.607457 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 00:16:33.610199 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 2 00:16:33.620519 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 2 00:16:33.644895 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 2 00:16:33.645076 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 2 00:16:33.648059 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 2 00:16:33.649637 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 2 00:16:33.651736 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 2 00:16:33.652541 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 2 00:16:33.668451 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 00:16:33.682753 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 2 00:16:33.695783 systemd[1]: Stopped target network.target - Network.
Jul 2 00:16:33.696851 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:16:33.698851 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:16:33.701195 systemd[1]: Stopped target timers.target - Timer Units.
Jul 2 00:16:33.703250 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 2 00:16:33.703389 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 00:16:33.705890 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 2 00:16:33.707512 systemd[1]: Stopped target basic.target - Basic System.
Jul 2 00:16:33.709631 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 2 00:16:33.711714 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 00:16:33.713790 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 2 00:16:33.716007 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 2 00:16:33.718260 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 00:16:33.739601 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 2 00:16:33.741652 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 2 00:16:33.743941 systemd[1]: Stopped target swap.target - Swaps.
Jul 2 00:16:33.745773 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 2 00:16:33.745922 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 00:16:33.748281 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:16:33.749839 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:16:33.752019 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 2 00:16:33.752134 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:16:33.754340 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 2 00:16:33.754454 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 2 00:16:33.756890 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 2 00:16:33.757010 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 00:16:33.758917 systemd[1]: Stopped target paths.target - Path Units.
Jul 2 00:16:33.760729 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 2 00:16:33.764404 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:16:33.771171 systemd[1]: Stopped target slices.target - Slice Units.
Jul 2 00:16:33.781273 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 2 00:16:33.783446 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 2 00:16:33.783572 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 00:16:33.785777 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 2 00:16:33.785897 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 00:16:33.787963 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 2 00:16:33.788122 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 00:16:33.790564 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 2 00:16:33.790705 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 2 00:16:33.804461 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 2 00:16:33.805494 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 2 00:16:33.827456 ignition[1006]: INFO : Ignition 2.18.0
Jul 2 00:16:33.827456 ignition[1006]: INFO : Stage: umount
Jul 2 00:16:33.827456 ignition[1006]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:16:33.827456 ignition[1006]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:16:33.827456 ignition[1006]: INFO : umount: umount passed
Jul 2 00:16:33.827456 ignition[1006]: INFO : Ignition finished successfully
Jul 2 00:16:33.805651 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:16:33.826228 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 2 00:16:33.827678 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 2 00:16:33.829871 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 2 00:16:33.831727 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 2 00:16:33.831850 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:16:33.833392 systemd-networkd[759]: eth0: DHCPv6 lease lost
Jul 2 00:16:33.834267 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 2 00:16:33.834400 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 00:16:33.839184 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 2 00:16:33.840258 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 2 00:16:33.843719 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 2 00:16:33.843849 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 2 00:16:33.846376 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 2 00:16:33.846500 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 2 00:16:33.849998 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 2 00:16:33.850105 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 2 00:16:33.855748 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 2 00:16:33.855795 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:16:33.858006 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 2 00:16:33.858060 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 2 00:16:33.860381 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 2 00:16:33.860439 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 2 00:16:33.862880 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 2 00:16:33.862930 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 2 00:16:33.865171 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 2 00:16:33.865224 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 2 00:16:33.878438 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 2 00:16:33.879824 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 2 00:16:33.879889 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 00:16:33.882394 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 00:16:33.882443 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:16:33.884665 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 2 00:16:33.884711 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:16:33.885841 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 2 00:16:33.885889 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:16:33.889977 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:16:33.893509 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 2 00:16:33.902012 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 2 00:16:33.902166 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 2 00:16:33.915042 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 2 00:16:33.915266 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:16:33.919079 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 2 00:16:33.919138 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:16:33.921589 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 2 00:16:33.921637 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:16:33.923893 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 2 00:16:33.923964 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 00:16:33.926264 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 2 00:16:33.926339 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 2 00:16:33.928491 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 00:16:33.928552 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:16:33.942427 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 2 00:16:33.954233 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 2 00:16:33.954335 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:16:33.955731 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:16:33.955790 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:16:33.958451 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 2 00:16:33.958578 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 2 00:16:34.382472 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 2 00:16:34.382618 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 2 00:16:34.385618 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 2 00:16:34.387484 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 2 00:16:34.387548 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 2 00:16:34.399446 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 2 00:16:34.407822 systemd[1]: Switching root.
Jul 2 00:16:34.433486 systemd-journald[193]: Journal stopped
Jul 2 00:16:36.274582 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Jul 2 00:16:36.274652 kernel: SELinux: policy capability network_peer_controls=1
Jul 2 00:16:36.274667 kernel: SELinux: policy capability open_perms=1
Jul 2 00:16:36.274679 kernel: SELinux: policy capability extended_socket_class=1
Jul 2 00:16:36.275544 kernel: SELinux: policy capability always_check_network=0
Jul 2 00:16:36.275569 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 2 00:16:36.275586 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 2 00:16:36.275598 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 2 00:16:36.275612 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 2 00:16:36.275623 kernel: audit: type=1403 audit(1719879395.407:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 2 00:16:36.275638 systemd[1]: Successfully loaded SELinux policy in 40.213ms.
Jul 2 00:16:36.275665 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.594ms.
Jul 2 00:16:36.275679 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 00:16:36.275695 systemd[1]: Detected virtualization kvm.
Jul 2 00:16:36.275707 systemd[1]: Detected architecture x86-64.
Jul 2 00:16:36.275726 systemd[1]: Detected first boot.
Jul 2 00:16:36.275738 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 00:16:36.275750 zram_generator::config[1049]: No configuration found.
Jul 2 00:16:36.275769 systemd[1]: Populated /etc with preset unit settings.
Jul 2 00:16:36.275781 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 2 00:16:36.275793 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 2 00:16:36.275807 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 2 00:16:36.275820 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 2 00:16:36.275832 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 2 00:16:36.275845 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 2 00:16:36.275857 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 2 00:16:36.275869 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 2 00:16:36.275881 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 2 00:16:36.275908 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 2 00:16:36.275922 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 2 00:16:36.275937 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:16:36.275949 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:16:36.275961 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 2 00:16:36.275972 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 2 00:16:36.275985 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 2 00:16:36.276003 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 00:16:36.276014 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 2 00:16:36.276027 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:16:36.276038 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 2 00:16:36.276053 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 2 00:16:36.276065 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 2 00:16:36.276077 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 2 00:16:36.276089 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:16:36.276101 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 00:16:36.276113 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 00:16:36.276126 systemd[1]: Reached target swap.target - Swaps.
Jul 2 00:16:36.276140 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 2 00:16:36.276152 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 2 00:16:36.276164 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:16:36.276176 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:16:36.276189 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:16:36.276201 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 2 00:16:36.276213 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 2 00:16:36.276225 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 2 00:16:36.276237 systemd[1]: Mounting media.mount - External Media Directory...
Jul 2 00:16:36.276251 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:16:36.276263 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 2 00:16:36.276275 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 2 00:16:36.276287 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 2 00:16:36.276313 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 2 00:16:36.276326 systemd[1]: Reached target machines.target - Containers.
Jul 2 00:16:36.276337 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 2 00:16:36.276349 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:16:36.276362 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 00:16:36.276377 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 2 00:16:36.276389 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:16:36.276402 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 00:16:36.276413 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:16:36.276425 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 2 00:16:36.276437 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:16:36.276450 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 2 00:16:36.276463 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 2 00:16:36.276478 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 2 00:16:36.276490 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 2 00:16:36.276502 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 2 00:16:36.276514 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 00:16:36.276526 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 00:16:36.276577 systemd-journald[1111]: Collecting audit messages is disabled.
Jul 2 00:16:36.276605 kernel: fuse: init (API version 7.39)
Jul 2 00:16:36.276616 kernel: loop: module loaded
Jul 2 00:16:36.276630 systemd-journald[1111]: Journal started
Jul 2 00:16:36.276653 systemd-journald[1111]: Runtime Journal (/run/log/journal/b9d90d56e547495fb127b674b1cc858c) is 6.0M, max 48.4M, 42.3M free.
Jul 2 00:16:36.029115 systemd[1]: Queued start job for default target multi-user.target.
Jul 2 00:16:36.052054 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 2 00:16:36.052511 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 2 00:16:36.289606 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 2 00:16:36.295821 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 2 00:16:36.306096 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 00:16:36.306212 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 2 00:16:36.306246 systemd[1]: Stopped verity-setup.service.
Jul 2 00:16:36.306267 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:16:36.308512 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 00:16:36.309601 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 2 00:16:36.311369 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 2 00:16:36.313048 systemd[1]: Mounted media.mount - External Media Directory.
Jul 2 00:16:36.314411 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 2 00:16:36.315671 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 2 00:16:36.328162 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 2 00:16:36.330039 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:16:36.330316 kernel: ACPI: bus type drm_connector registered
Jul 2 00:16:36.331786 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 2 00:16:36.331979 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 2 00:16:36.333757 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:16:36.333963 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:16:36.335448 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 00:16:36.335640 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 00:16:36.337077 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:16:36.337261 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:16:36.338845 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 2 00:16:36.339047 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 2 00:16:36.340674 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:16:36.340861 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:16:36.342316 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:16:36.343967 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 2 00:16:36.345554 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 2 00:16:36.359164 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 2 00:16:36.397469 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 2 00:16:36.400729 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 2 00:16:36.402182 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 2 00:16:36.402231 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 00:16:36.405017 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 2 00:16:36.407968 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 2 00:16:36.415418 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 2 00:16:36.417283 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:16:36.419286 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 2 00:16:36.421722 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 2 00:16:36.423396 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 00:16:36.428981 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 2 00:16:36.430844 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 00:16:36.433808 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 00:16:36.451158 systemd-journald[1111]: Time spent on flushing to /var/log/journal/b9d90d56e547495fb127b674b1cc858c is 26.880ms for 946 entries.
Jul 2 00:16:36.451158 systemd-journald[1111]: System Journal (/var/log/journal/b9d90d56e547495fb127b674b1cc858c) is 8.0M, max 195.6M, 187.6M free.
Jul 2 00:16:36.738383 systemd-journald[1111]: Received client request to flush runtime journal.
Jul 2 00:16:36.738426 kernel: loop0: detected capacity change from 0 to 139904
Jul 2 00:16:36.738444 kernel: block loop0: the capability attribute has been deprecated.
Jul 2 00:16:36.738539 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 2 00:16:36.738559 kernel: loop1: detected capacity change from 0 to 211296
Jul 2 00:16:36.738575 kernel: loop2: detected capacity change from 0 to 80568
Jul 2 00:16:36.738602 kernel: loop3: detected capacity change from 0 to 139904
Jul 2 00:16:36.439655 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 2 00:16:36.455248 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 2 00:16:36.459687 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 2 00:16:36.461593 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 2 00:16:36.484661 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:16:36.529565 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:16:36.541500 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 2 00:16:36.580520 udevadm[1163]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jul 2 00:16:36.717927 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 2 00:16:36.720373 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 2 00:16:36.734614 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 2 00:16:36.741140 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 2 00:16:36.747838 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 2 00:16:36.760490 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 2 00:16:36.778340 kernel: loop4: detected capacity change from 0 to 211296
Jul 2 00:16:36.794380 kernel: loop5: detected capacity change from 0 to 80568
Jul 2 00:16:36.798704 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 2 00:16:36.804532 (sd-merge)[1175]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 2 00:16:36.805148 (sd-merge)[1175]: Merged extensions into '/usr'.
Jul 2 00:16:36.805958 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 2 00:16:36.807387 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 2 00:16:36.819626 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 00:16:36.821661 systemd[1]: Reloading requested from client PID 1147 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 2 00:16:36.821749 systemd[1]: Reloading...
Jul 2 00:16:36.852982 systemd-tmpfiles[1184]: ACLs are not supported, ignoring.
Jul 2 00:16:36.853342 systemd-tmpfiles[1184]: ACLs are not supported, ignoring.
Jul 2 00:16:36.881325 zram_generator::config[1210]: No configuration found.
Jul 2 00:16:36.996371 ldconfig[1142]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 2 00:16:37.033392 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:16:37.095311 systemd[1]: Reloading finished in 273 ms.
Jul 2 00:16:37.135407 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 2 00:16:37.137193 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 2 00:16:37.139053 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:16:37.154690 systemd[1]: Starting ensure-sysext.service...
Jul 2 00:16:37.157066 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 00:16:37.163528 systemd[1]: Reloading requested from client PID 1248 ('systemctl') (unit ensure-sysext.service)...
Jul 2 00:16:37.163549 systemd[1]: Reloading...
Jul 2 00:16:37.181613 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 2 00:16:37.181927 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 2 00:16:37.182837 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 2 00:16:37.183223 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
Jul 2 00:16:37.183347 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
Jul 2 00:16:37.186870 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot.
Jul 2 00:16:37.186893 systemd-tmpfiles[1249]: Skipping /boot
Jul 2 00:16:37.197411 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot.
Jul 2 00:16:37.197426 systemd-tmpfiles[1249]: Skipping /boot
Jul 2 00:16:37.227364 zram_generator::config[1277]: No configuration found.
Jul 2 00:16:37.472721 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:16:37.538582 systemd[1]: Reloading finished in 374 ms.
Jul 2 00:16:37.559640 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 2 00:16:37.572104 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:16:37.584432 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 2 00:16:37.588319 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 2 00:16:37.592468 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 2 00:16:37.601093 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 00:16:37.605677 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:16:37.609611 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 2 00:16:37.617730 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:16:37.617969 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:16:37.628345 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:16:37.634154 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:16:37.640096 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:16:37.641652 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:16:37.647156 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 2 00:16:37.649249 augenrules[1337]: No rules
Jul 2 00:16:37.648692 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:16:37.650596 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 2 00:16:37.653602 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 2 00:16:37.660009 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:16:37.660622 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:16:37.662721 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:16:37.663077 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:16:37.665074 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:16:37.665541 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:16:37.676114 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 2 00:16:37.682572 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:16:37.683096 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:16:37.683163 systemd-udevd[1324]: Using default interface naming scheme 'v255'.
Jul 2 00:16:37.694661 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:16:37.698006 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:16:37.703032 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:16:37.704612 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:16:37.710555 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 2 00:16:37.711951 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 2 00:16:37.712049 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:16:37.713086 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 2 00:16:37.715228 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:16:37.725286 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 2 00:16:37.727850 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:16:37.728135 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:16:37.730151 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:16:37.730511 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:16:37.732765 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:16:37.733260 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:16:37.748608 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 2 00:16:37.753977 systemd[1]: Finished ensure-sysext.service.
Jul 2 00:16:37.761108 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:16:37.761253 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:16:37.771699 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:16:37.776462 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 00:16:37.782336 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:16:37.789456 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:16:37.793472 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:16:37.795653 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 00:16:37.802679 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 2 00:16:37.803353 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1370)
Jul 2 00:16:37.803921 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 2 00:16:37.803952 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:16:37.804568 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:16:37.804756 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:16:37.806317 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:16:37.808571 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:16:37.810415 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:16:37.810593 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:16:37.825566 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 2 00:16:37.826014 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 00:16:37.826075 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 00:16:37.831184 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 00:16:37.831390 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 00:16:37.852338 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jul 2 00:16:37.861350 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jul 2 00:16:37.866343 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1379)
Jul 2 00:16:37.867874 systemd-resolved[1318]: Positive Trust Anchors:
Jul 2 00:16:37.867903 systemd-resolved[1318]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 00:16:37.867939 systemd-resolved[1318]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 00:16:37.877332 kernel: ACPI: button: Power Button [PWRF]
Jul 2 00:16:37.878260 systemd-resolved[1318]: Defaulting to hostname 'linux'.
Jul 2 00:16:37.886462 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 00:16:37.887844 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:16:37.900334 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jul 2 00:16:37.929288 systemd-networkd[1388]: lo: Link UP
Jul 2 00:16:37.929780 systemd-networkd[1388]: lo: Gained carrier
Jul 2 00:16:37.934511 systemd-networkd[1388]: Enumeration completed
Jul 2 00:16:37.934679 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 00:16:37.935100 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:16:37.935106 systemd-networkd[1388]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 00:16:37.936487 systemd-networkd[1388]: eth0: Link UP
Jul 2 00:16:37.936561 systemd-networkd[1388]: eth0: Gained carrier
Jul 2 00:16:37.936634 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:16:37.937502 systemd[1]: Reached target network.target - Network.
Jul 2 00:16:37.945478 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 2 00:16:37.950386 systemd-networkd[1388]: eth0: DHCPv4 address 10.0.0.74/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 2 00:16:37.965979 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 2 00:16:37.966397 systemd-timesyncd[1389]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 2 00:16:37.966451 systemd-timesyncd[1389]: Initial clock synchronization to Tue 2024-07-02 00:16:37.961748 UTC.
Jul 2 00:16:37.970472 systemd[1]: Reached target time-set.target - System Time Set.
Jul 2 00:16:37.975031 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 2 00:16:38.036666 kernel: mousedev: PS/2 mouse device common for all mice
Jul 2 00:16:38.038753 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 2 00:16:38.047738 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:16:38.056103 kernel: kvm_amd: TSC scaling supported
Jul 2 00:16:38.056245 kernel: kvm_amd: Nested Virtualization enabled
Jul 2 00:16:38.056318 kernel: kvm_amd: Nested Paging enabled
Jul 2 00:16:38.056375 kernel: kvm_amd: LBR virtualization supported
Jul 2 00:16:38.056424 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jul 2 00:16:38.056475 kernel: kvm_amd: Virtual GIF supported
Jul 2 00:16:38.063727 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 2 00:16:38.087325 kernel: EDAC MC: Ver: 3.0.0
Jul 2 00:16:38.126462 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 2 00:16:38.159411 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:16:38.176529 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 2 00:16:38.185198 lvm[1417]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 00:16:38.213902 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 2 00:16:38.215516 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:16:38.216700 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 00:16:38.218137 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 2 00:16:38.219640 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 2 00:16:38.221536 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 2 00:16:38.222990 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 2 00:16:38.224499 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 2 00:16:38.226001 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 2 00:16:38.226032 systemd[1]: Reached target paths.target - Path Units.
Jul 2 00:16:38.227120 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 00:16:38.229040 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 2 00:16:38.231980 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 2 00:16:38.241014 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 2 00:16:38.243616 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 2 00:16:38.245489 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 2 00:16:38.246821 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 00:16:38.247862 systemd[1]: Reached target basic.target - Basic System.
Jul 2 00:16:38.249012 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 2 00:16:38.249056 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 2 00:16:38.250250 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 2 00:16:38.253045 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 2 00:16:38.253741 lvm[1421]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 00:16:38.258418 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 2 00:16:38.263598 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 2 00:16:38.265221 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 2 00:16:38.267646 jq[1424]: false
Jul 2 00:16:38.269500 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 2 00:16:38.274134 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 2 00:16:38.278897 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 2 00:16:38.280532 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 2 00:16:38.286351 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 2 00:16:38.288057 extend-filesystems[1425]: Found loop3
Jul 2 00:16:38.290277 extend-filesystems[1425]: Found loop4
Jul 2 00:16:38.290277 extend-filesystems[1425]: Found loop5
Jul 2 00:16:38.290277 extend-filesystems[1425]: Found sr0
Jul 2 00:16:38.290277 extend-filesystems[1425]: Found vda
Jul 2 00:16:38.290277 extend-filesystems[1425]: Found vda1
Jul 2 00:16:38.290277 extend-filesystems[1425]: Found vda2
Jul 2 00:16:38.290277 extend-filesystems[1425]: Found vda3
Jul 2 00:16:38.290277 extend-filesystems[1425]: Found usr
Jul 2 00:16:38.290277 extend-filesystems[1425]: Found vda4
Jul 2 00:16:38.290277 extend-filesystems[1425]: Found vda6
Jul 2 00:16:38.290277 extend-filesystems[1425]: Found vda7
Jul 2 00:16:38.290277 extend-filesystems[1425]: Found vda9
Jul 2 00:16:38.290277 extend-filesystems[1425]: Checking size of /dev/vda9
Jul 2 00:16:38.288369 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 2 00:16:38.318504 extend-filesystems[1425]: Resized partition /dev/vda9
Jul 2 00:16:38.301956 dbus-daemon[1423]: [system] SELinux support is enabled
Jul 2 00:16:38.288964 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 2 00:16:38.297418 systemd[1]: Starting update-engine.service - Update Engine...
Jul 2 00:16:38.315783 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 2 00:16:38.331871 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1366)
Jul 2 00:16:38.320548 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 2 00:16:38.331984 jq[1443]: true
Jul 2 00:16:38.330272 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 2 00:16:38.335873 extend-filesystems[1442]: resize2fs 1.47.0 (5-Feb-2023)
Jul 2 00:16:38.347514 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 2 00:16:38.340891 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 2 00:16:38.341162 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 2 00:16:38.341784 systemd[1]: motdgen.service: Deactivated successfully.
Jul 2 00:16:38.342084 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 2 00:16:38.348364 update_engine[1438]: I0702 00:16:38.348024 1438 main.cc:92] Flatcar Update Engine starting
Jul 2 00:16:38.353804 update_engine[1438]: I0702 00:16:38.350040 1438 update_check_scheduler.cc:74] Next update check in 3m30s
Jul 2 00:16:38.354258 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 2 00:16:38.354604 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 2 00:16:38.371581 (ntainerd)[1452]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 2 00:16:38.378712 systemd-logind[1433]: Watching system buttons on /dev/input/event1 (Power Button)
Jul 2 00:16:38.379084 systemd-logind[1433]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 2 00:16:38.381452 systemd-logind[1433]: New seat seat0.
Jul 2 00:16:38.383562 jq[1450]: true
Jul 2 00:16:38.386337 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 2 00:16:38.387945 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 2 00:16:38.405180 dbus-daemon[1423]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jul 2 00:16:38.413339 tar[1448]: linux-amd64/helm
Jul 2 00:16:38.417908 extend-filesystems[1442]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 2 00:16:38.417908 extend-filesystems[1442]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 2 00:16:38.417908 extend-filesystems[1442]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 2 00:16:38.424191 extend-filesystems[1425]: Resized filesystem in /dev/vda9
Jul 2 00:16:38.419323 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 2 00:16:38.420670 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 2 00:16:38.422877 systemd[1]: Started update-engine.service - Update Engine.
Jul 2 00:16:38.428926 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 2 00:16:38.430918 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 2 00:16:38.431063 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 2 00:16:38.432677 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 2 00:16:38.432811 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 2 00:16:38.442588 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 2 00:16:38.468573 locksmithd[1479]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 2 00:16:38.487080 bash[1480]: Updated "/home/core/.ssh/authorized_keys"
Jul 2 00:16:38.489013 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 2 00:16:38.492371 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 2 00:16:38.600516 containerd[1452]: time="2024-07-02T00:16:38.600391470Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17
Jul 2 00:16:38.627192 containerd[1452]: time="2024-07-02T00:16:38.627020797Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 2 00:16:38.627192 containerd[1452]: time="2024-07-02T00:16:38.627084339Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:16:38.629455 containerd[1452]: time="2024-07-02T00:16:38.629181545Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.36-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:16:38.629455 containerd[1452]: time="2024-07-02T00:16:38.629227128Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:16:38.629583 containerd[1452]: time="2024-07-02T00:16:38.629525883Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:16:38.629583 containerd[1452]: time="2024-07-02T00:16:38.629542929Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 2 00:16:38.629716 containerd[1452]: time="2024-07-02T00:16:38.629645283Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 2 00:16:38.629750 containerd[1452]: time="2024-07-02T00:16:38.629738461Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:16:38.629771 containerd[1452]: time="2024-07-02T00:16:38.629752444Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 2 00:16:38.629869 containerd[1452]: time="2024-07-02T00:16:38.629843098Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:16:38.630129 containerd[1452]: time="2024-07-02T00:16:38.630101889Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 2 00:16:38.630157 containerd[1452]: time="2024-07-02T00:16:38.630127360Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 2 00:16:38.630157 containerd[1452]: time="2024-07-02T00:16:38.630138858Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:16:38.630356 containerd[1452]: time="2024-07-02T00:16:38.630262966Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:16:38.630356 containerd[1452]: time="2024-07-02T00:16:38.630280944Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 2 00:16:38.630406 containerd[1452]: time="2024-07-02T00:16:38.630371058Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 2 00:16:38.630406 containerd[1452]: time="2024-07-02T00:16:38.630384379Z" level=info msg="metadata content store policy set" policy=shared
Jul 2 00:16:38.639133 sshd_keygen[1447]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 2 00:16:38.663630 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 2 00:16:38.666690 containerd[1452]: time="2024-07-02T00:16:38.665612707Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 2 00:16:38.666690 containerd[1452]: time="2024-07-02T00:16:38.665675297Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 2 00:16:38.666690 containerd[1452]: time="2024-07-02T00:16:38.665690762Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 2 00:16:38.666690 containerd[1452]: time="2024-07-02T00:16:38.665731296Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 2 00:16:38.666690 containerd[1452]: time="2024-07-02T00:16:38.665745719Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 2 00:16:38.666690 containerd[1452]: time="2024-07-02T00:16:38.665757508Z" level=info msg="NRI interface is disabled by configuration."
Jul 2 00:16:38.666690 containerd[1452]: time="2024-07-02T00:16:38.665769196Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 2 00:16:38.666690 containerd[1452]: time="2024-07-02T00:16:38.665940508Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 2 00:16:38.666690 containerd[1452]: time="2024-07-02T00:16:38.665956905Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 2 00:16:38.666690 containerd[1452]: time="2024-07-02T00:16:38.665974883Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 2 00:16:38.666690 containerd[1452]: time="2024-07-02T00:16:38.665990679Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 2 00:16:38.666690 containerd[1452]: time="2024-07-02T00:16:38.666008307Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 2 00:16:38.666690 containerd[1452]: time="2024-07-02T00:16:38.666026215Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 2 00:16:38.666690 containerd[1452]: time="2024-07-02T00:16:38.666038915Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 2 00:16:38.667034 containerd[1452]: time="2024-07-02T00:16:38.666050524Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 2 00:16:38.667034 containerd[1452]: time="2024-07-02T00:16:38.666065968Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 2 00:16:38.667034 containerd[1452]: time="2024-07-02T00:16:38.666079190Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 2 00:16:38.667034 containerd[1452]: time="2024-07-02T00:16:38.666091409Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 2 00:16:38.667034 containerd[1452]: time="2024-07-02T00:16:38.666102346Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 2 00:16:38.667034 containerd[1452]: time="2024-07-02T00:16:38.666214585Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 2 00:16:38.667034 containerd[1452]: time="2024-07-02T00:16:38.666495833Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 2 00:16:38.667034 containerd[1452]: time="2024-07-02T00:16:38.666530999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 2 00:16:38.667034 containerd[1452]: time="2024-07-02T00:16:38.666544690Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 2 00:16:38.667034 containerd[1452]: time="2024-07-02T00:16:38.666567115Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 2 00:16:38.667034 containerd[1452]: time="2024-07-02T00:16:38.666871530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 2 00:16:38.667034 containerd[1452]: time="2024-07-02T00:16:38.666900987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 2 00:16:38.667249 containerd[1452]: time="2024-07-02T00:16:38.667059760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 2 00:16:38.667249 containerd[1452]: time="2024-07-02T00:16:38.667103789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 2 00:16:38.667249 containerd[1452]: time="2024-07-02T00:16:38.667131003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 2 00:16:38.667249 containerd[1452]: time="2024-07-02T00:16:38.667152888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 2 00:16:38.667249 containerd[1452]: time="2024-07-02T00:16:38.667176665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 2 00:16:38.667249 containerd[1452]: time="2024-07-02T00:16:38.667201314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 2 00:16:38.667249 containerd[1452]: time="2024-07-02T00:16:38.667230672Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 2 00:16:38.667558 containerd[1452]: time="2024-07-02T00:16:38.667521875Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 2 00:16:38.667733 containerd[1452]: time="2024-07-02T00:16:38.667714451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 2 00:16:38.667809 containerd[1452]: time="2024-07-02T00:16:38.667792635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 2 00:16:38.667888 containerd[1452]: time="2024-07-02T00:16:38.667869417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 2 00:16:38.668350 containerd[1452]: time="2024-07-02T00:16:38.667945178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 2 00:16:38.668350 containerd[1452]: time="2024-07-02T00:16:38.667969617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 2 00:16:38.668350 containerd[1452]: time="2024-07-02T00:16:38.668001798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 2 00:16:38.668350 containerd[1452]: time="2024-07-02T00:16:38.668019907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 2 00:16:38.668603 containerd[1452]: time="2024-07-02T00:16:38.668333186Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 2 00:16:38.668603 containerd[1452]: time="2024-07-02T00:16:38.668592919Z" level=info msg="Connect containerd service"
Jul 2 00:16:38.668923 containerd[1452]: time="2024-07-02T00:16:38.668641957Z" level=info msg="using legacy CRI server"
Jul 2 00:16:38.668923 containerd[1452]: time="2024-07-02T00:16:38.668660386Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 2 00:16:38.668923 containerd[1452]: time="2024-07-02T00:16:38.668770822Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 2 00:16:38.669613 containerd[1452]: time="2024-07-02T00:16:38.669494513Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 00:16:38.669613 containerd[1452]: time="2024-07-02T00:16:38.669546536Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 2 00:16:38.669613 containerd[1452]: time="2024-07-02T00:16:38.669574811Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 2 00:16:38.669613 containerd[1452]: time="2024-07-02T00:16:38.669587732Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 2 00:16:38.669730 containerd[1452]: time="2024-07-02T00:16:38.669615025Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 2 00:16:38.669894 containerd[1452]: time="2024-07-02T00:16:38.669801422Z" level=info msg="Start subscribing containerd event"
Jul 2 00:16:38.670191 containerd[1452]: time="2024-07-02T00:16:38.670168866Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 2 00:16:38.670244 containerd[1452]: time="2024-07-02T00:16:38.670224986Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 2 00:16:38.671382 containerd[1452]: time="2024-07-02T00:16:38.671361574Z" level=info msg="Start recovering state"
Jul 2 00:16:38.671460 containerd[1452]: time="2024-07-02T00:16:38.671440940Z" level=info msg="Start event monitor"
Jul 2 00:16:38.671483 containerd[1452]: time="2024-07-02T00:16:38.671461913Z" level=info msg="Start snapshots syncer"
Jul 2 00:16:38.671483 containerd[1452]: time="2024-07-02T00:16:38.671472039Z" level=info msg="Start cni network conf syncer for default"
Jul 2 00:16:38.671529 containerd[1452]: time="2024-07-02T00:16:38.671483027Z" level=info msg="Start streaming server"
Jul 2 00:16:38.671586 containerd[1452]: time="2024-07-02T00:16:38.671553499Z" level=info msg="containerd successfully booted in 0.073211s"
Jul 2 00:16:38.679760 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 2 00:16:38.682353 systemd[1]: Started sshd@0-10.0.0.74:22-10.0.0.1:33736.service - OpenSSH per-connection server daemon (10.0.0.1:33736).
Jul 2 00:16:38.684090 systemd[1]: Started containerd.service - containerd container runtime.
Jul 2 00:16:38.687692 systemd[1]: issuegen.service: Deactivated successfully.
Jul 2 00:16:38.688053 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 2 00:16:38.695005 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 2 00:16:38.719536 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 2 00:16:38.732795 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 2 00:16:38.735791 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 2 00:16:38.737497 systemd[1]: Reached target getty.target - Login Prompts.
Jul 2 00:16:38.757534 sshd[1505]: Accepted publickey for core from 10.0.0.1 port 33736 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:16:38.759864 sshd[1505]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:16:38.770165 systemd-logind[1433]: New session 1 of user core. Jul 2 00:16:38.772125 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 2 00:16:38.795767 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 2 00:16:38.813003 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 2 00:16:38.824743 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 2 00:16:38.829153 (systemd)[1516]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:16:38.852195 tar[1448]: linux-amd64/LICENSE Jul 2 00:16:38.852318 tar[1448]: linux-amd64/README.md Jul 2 00:16:38.867972 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 2 00:16:38.947371 systemd[1516]: Queued start job for default target default.target. Jul 2 00:16:38.959716 systemd[1516]: Created slice app.slice - User Application Slice. Jul 2 00:16:38.959747 systemd[1516]: Reached target paths.target - Paths. Jul 2 00:16:38.959764 systemd[1516]: Reached target timers.target - Timers. Jul 2 00:16:38.961398 systemd[1516]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 2 00:16:38.976159 systemd[1516]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 2 00:16:38.976327 systemd[1516]: Reached target sockets.target - Sockets. Jul 2 00:16:38.976351 systemd[1516]: Reached target basic.target - Basic System. Jul 2 00:16:38.976397 systemd[1516]: Reached target default.target - Main User Target. Jul 2 00:16:38.976439 systemd[1516]: Startup finished in 138ms. Jul 2 00:16:38.977055 systemd[1]: Started user@500.service - User Manager for UID 500. 
Jul 2 00:16:38.979977 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 2 00:16:39.046618 systemd[1]: Started sshd@1-10.0.0.74:22-10.0.0.1:49642.service - OpenSSH per-connection server daemon (10.0.0.1:49642). Jul 2 00:16:39.089549 sshd[1530]: Accepted publickey for core from 10.0.0.1 port 49642 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:16:39.091475 sshd[1530]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:16:39.095723 systemd-logind[1433]: New session 2 of user core. Jul 2 00:16:39.113610 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 2 00:16:39.169981 sshd[1530]: pam_unix(sshd:session): session closed for user core Jul 2 00:16:39.181200 systemd[1]: sshd@1-10.0.0.74:22-10.0.0.1:49642.service: Deactivated successfully. Jul 2 00:16:39.183018 systemd[1]: session-2.scope: Deactivated successfully. Jul 2 00:16:39.184581 systemd-logind[1433]: Session 2 logged out. Waiting for processes to exit. Jul 2 00:16:39.194655 systemd[1]: Started sshd@2-10.0.0.74:22-10.0.0.1:49658.service - OpenSSH per-connection server daemon (10.0.0.1:49658). Jul 2 00:16:39.197125 systemd-logind[1433]: Removed session 2. Jul 2 00:16:39.229986 sshd[1537]: Accepted publickey for core from 10.0.0.1 port 49658 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:16:39.231856 sshd[1537]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:16:39.236227 systemd-logind[1433]: New session 3 of user core. Jul 2 00:16:39.245524 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 2 00:16:39.303187 sshd[1537]: pam_unix(sshd:session): session closed for user core Jul 2 00:16:39.307707 systemd[1]: sshd@2-10.0.0.74:22-10.0.0.1:49658.service: Deactivated successfully. Jul 2 00:16:39.309729 systemd[1]: session-3.scope: Deactivated successfully. Jul 2 00:16:39.310542 systemd-logind[1433]: Session 3 logged out. Waiting for processes to exit. 
Jul 2 00:16:39.311568 systemd-logind[1433]: Removed session 3. Jul 2 00:16:39.344507 systemd-networkd[1388]: eth0: Gained IPv6LL Jul 2 00:16:39.348056 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 2 00:16:39.350254 systemd[1]: Reached target network-online.target - Network is Online. Jul 2 00:16:39.369633 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 2 00:16:39.372601 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:16:39.375211 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 2 00:16:39.398630 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 2 00:16:39.400919 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 2 00:16:39.401213 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 2 00:16:39.405469 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 2 00:16:40.006466 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:16:40.008286 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 2 00:16:40.011114 systemd[1]: Startup finished in 1.133s (kernel) + 7.620s (initrd) + 4.641s (userspace) = 13.395s. 
Jul 2 00:16:40.033779 (kubelet)[1567]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:16:40.555872 kubelet[1567]: E0702 00:16:40.555757 1567 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:16:40.560833 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:16:40.561060 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:16:40.561440 systemd[1]: kubelet.service: Consumed 1.032s CPU time. Jul 2 00:16:49.312409 systemd[1]: Started sshd@3-10.0.0.74:22-10.0.0.1:52098.service - OpenSSH per-connection server daemon (10.0.0.1:52098). Jul 2 00:16:49.350767 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 52098 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:16:49.352466 sshd[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:16:49.356381 systemd-logind[1433]: New session 4 of user core. Jul 2 00:16:49.371431 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 2 00:16:49.426623 sshd[1582]: pam_unix(sshd:session): session closed for user core Jul 2 00:16:49.447754 systemd[1]: sshd@3-10.0.0.74:22-10.0.0.1:52098.service: Deactivated successfully. Jul 2 00:16:49.450130 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 00:16:49.452326 systemd-logind[1433]: Session 4 logged out. Waiting for processes to exit. Jul 2 00:16:49.464585 systemd[1]: Started sshd@4-10.0.0.74:22-10.0.0.1:52102.service - OpenSSH per-connection server daemon (10.0.0.1:52102). Jul 2 00:16:49.465681 systemd-logind[1433]: Removed session 4. 
Jul 2 00:16:49.498590 sshd[1589]: Accepted publickey for core from 10.0.0.1 port 52102 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:16:49.500032 sshd[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:16:49.504639 systemd-logind[1433]: New session 5 of user core. Jul 2 00:16:49.517437 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 2 00:16:49.567700 sshd[1589]: pam_unix(sshd:session): session closed for user core Jul 2 00:16:49.578422 systemd[1]: sshd@4-10.0.0.74:22-10.0.0.1:52102.service: Deactivated successfully. Jul 2 00:16:49.580253 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 00:16:49.582099 systemd-logind[1433]: Session 5 logged out. Waiting for processes to exit. Jul 2 00:16:49.592831 systemd[1]: Started sshd@5-10.0.0.74:22-10.0.0.1:52114.service - OpenSSH per-connection server daemon (10.0.0.1:52114). Jul 2 00:16:49.593931 systemd-logind[1433]: Removed session 5. Jul 2 00:16:49.627777 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 52114 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:16:49.629590 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:16:49.633816 systemd-logind[1433]: New session 6 of user core. Jul 2 00:16:49.648546 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 2 00:16:49.704823 sshd[1596]: pam_unix(sshd:session): session closed for user core Jul 2 00:16:49.714529 systemd[1]: sshd@5-10.0.0.74:22-10.0.0.1:52114.service: Deactivated successfully. Jul 2 00:16:49.716470 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 00:16:49.718115 systemd-logind[1433]: Session 6 logged out. Waiting for processes to exit. Jul 2 00:16:49.719520 systemd[1]: Started sshd@6-10.0.0.74:22-10.0.0.1:52120.service - OpenSSH per-connection server daemon (10.0.0.1:52120). Jul 2 00:16:49.720370 systemd-logind[1433]: Removed session 6. 
Jul 2 00:16:49.757270 sshd[1603]: Accepted publickey for core from 10.0.0.1 port 52120 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:16:49.758769 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:16:49.763104 systemd-logind[1433]: New session 7 of user core. Jul 2 00:16:49.772460 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 2 00:16:49.832179 sudo[1606]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 2 00:16:49.832481 sudo[1606]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:16:49.852529 sudo[1606]: pam_unix(sudo:session): session closed for user root Jul 2 00:16:49.855176 sshd[1603]: pam_unix(sshd:session): session closed for user core Jul 2 00:16:49.869329 systemd[1]: sshd@6-10.0.0.74:22-10.0.0.1:52120.service: Deactivated successfully. Jul 2 00:16:49.871230 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 00:16:49.872940 systemd-logind[1433]: Session 7 logged out. Waiting for processes to exit. Jul 2 00:16:49.874370 systemd[1]: Started sshd@7-10.0.0.74:22-10.0.0.1:52124.service - OpenSSH per-connection server daemon (10.0.0.1:52124). Jul 2 00:16:49.875195 systemd-logind[1433]: Removed session 7. Jul 2 00:16:49.913637 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 52124 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:16:49.915728 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:16:49.920035 systemd-logind[1433]: New session 8 of user core. Jul 2 00:16:49.929500 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 2 00:16:49.983560 sudo[1615]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 2 00:16:49.983838 sudo[1615]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:16:49.987397 sudo[1615]: pam_unix(sudo:session): session closed for user root Jul 2 00:16:49.993323 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 2 00:16:49.993600 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:16:50.011646 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 2 00:16:50.013413 auditctl[1618]: No rules Jul 2 00:16:50.013833 systemd[1]: audit-rules.service: Deactivated successfully. Jul 2 00:16:50.014094 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 2 00:16:50.016794 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 2 00:16:50.050063 augenrules[1636]: No rules Jul 2 00:16:50.052082 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 2 00:16:50.053612 sudo[1614]: pam_unix(sudo:session): session closed for user root Jul 2 00:16:50.055932 sshd[1611]: pam_unix(sshd:session): session closed for user core Jul 2 00:16:50.067784 systemd[1]: sshd@7-10.0.0.74:22-10.0.0.1:52124.service: Deactivated successfully. Jul 2 00:16:50.070194 systemd[1]: session-8.scope: Deactivated successfully. Jul 2 00:16:50.071848 systemd-logind[1433]: Session 8 logged out. Waiting for processes to exit. Jul 2 00:16:50.079588 systemd[1]: Started sshd@8-10.0.0.74:22-10.0.0.1:52136.service - OpenSSH per-connection server daemon (10.0.0.1:52136). Jul 2 00:16:50.080631 systemd-logind[1433]: Removed session 8. 
Jul 2 00:16:50.115544 sshd[1644]: Accepted publickey for core from 10.0.0.1 port 52136 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:16:50.117514 sshd[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:16:50.122694 systemd-logind[1433]: New session 9 of user core. Jul 2 00:16:50.144605 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 2 00:16:50.200018 sudo[1647]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 00:16:50.200336 sudo[1647]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:16:50.310605 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 2 00:16:50.310676 (dockerd)[1657]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 2 00:16:50.570829 dockerd[1657]: time="2024-07-02T00:16:50.570665384Z" level=info msg="Starting up" Jul 2 00:16:50.572118 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 00:16:50.582378 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:16:50.808542 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 2 00:16:50.813338 (kubelet)[1677]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:16:50.961824 kubelet[1677]: E0702 00:16:50.961644 1677 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:16:50.969728 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:16:50.970012 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:16:51.097249 dockerd[1657]: time="2024-07-02T00:16:51.097183553Z" level=info msg="Loading containers: start." Jul 2 00:16:51.226326 kernel: Initializing XFRM netlink socket Jul 2 00:16:51.305635 systemd-networkd[1388]: docker0: Link UP Jul 2 00:16:51.324806 dockerd[1657]: time="2024-07-02T00:16:51.324769428Z" level=info msg="Loading containers: done." Jul 2 00:16:51.371729 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2424096499-merged.mount: Deactivated successfully. 
Jul 2 00:16:51.375358 dockerd[1657]: time="2024-07-02T00:16:51.375262938Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 00:16:51.375542 dockerd[1657]: time="2024-07-02T00:16:51.375513507Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jul 2 00:16:51.375651 dockerd[1657]: time="2024-07-02T00:16:51.375629851Z" level=info msg="Daemon has completed initialization" Jul 2 00:16:51.409028 dockerd[1657]: time="2024-07-02T00:16:51.408954436Z" level=info msg="API listen on /run/docker.sock" Jul 2 00:16:51.409226 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 2 00:16:52.123165 containerd[1452]: time="2024-07-02T00:16:52.123122419Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\"" Jul 2 00:16:52.827241 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2595083936.mount: Deactivated successfully. 
Jul 2 00:16:54.529827 containerd[1452]: time="2024-07-02T00:16:54.529752942Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:16:54.530567 containerd[1452]: time="2024-07-02T00:16:54.530542564Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.6: active requests=0, bytes read=35235837" Jul 2 00:16:54.531907 containerd[1452]: time="2024-07-02T00:16:54.531868316Z" level=info msg="ImageCreate event name:\"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:16:54.535120 containerd[1452]: time="2024-07-02T00:16:54.535065991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:16:54.536313 containerd[1452]: time="2024-07-02T00:16:54.536231579Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.6\" with image id \"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70\", size \"35232637\" in 2.413065342s" Jul 2 00:16:54.536313 containerd[1452]: time="2024-07-02T00:16:54.536281277Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\" returns image reference \"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\"" Jul 2 00:16:54.562177 containerd[1452]: time="2024-07-02T00:16:54.562134877Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\"" Jul 2 00:16:57.829231 containerd[1452]: time="2024-07-02T00:16:57.829147275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.6\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:16:57.830703 containerd[1452]: time="2024-07-02T00:16:57.830623041Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.6: active requests=0, bytes read=32069747" Jul 2 00:16:57.834938 containerd[1452]: time="2024-07-02T00:16:57.834805958Z" level=info msg="ImageCreate event name:\"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:16:57.839107 containerd[1452]: time="2024-07-02T00:16:57.838974530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:16:57.840280 containerd[1452]: time="2024-07-02T00:16:57.840218821Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.6\" with image id \"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e\", size \"33590639\" in 3.278037291s" Jul 2 00:16:57.840280 containerd[1452]: time="2024-07-02T00:16:57.840273779Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\" returns image reference \"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\"" Jul 2 00:16:57.864173 containerd[1452]: time="2024-07-02T00:16:57.864116165Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\"" Jul 2 00:16:59.559893 containerd[1452]: time="2024-07-02T00:16:59.559800117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:16:59.602267 containerd[1452]: time="2024-07-02T00:16:59.602200104Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.6: active requests=0, bytes read=17153803" Jul 2 00:16:59.647016 containerd[1452]: time="2024-07-02T00:16:59.646947085Z" level=info msg="ImageCreate event name:\"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:16:59.690735 containerd[1452]: time="2024-07-02T00:16:59.690665853Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:16:59.691871 containerd[1452]: time="2024-07-02T00:16:59.691816887Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.6\" with image id \"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f\", size \"18674713\" in 1.827661793s" Jul 2 00:16:59.691937 containerd[1452]: time="2024-07-02T00:16:59.691875233Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\" returns image reference \"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\"" Jul 2 00:16:59.714703 containerd[1452]: time="2024-07-02T00:16:59.714657438Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\"" Jul 2 00:17:01.012188 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 2 00:17:01.022453 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:17:01.179252 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 2 00:17:01.184316 (kubelet)[1904]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:17:01.244214 kubelet[1904]: E0702 00:17:01.244091 1904 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:17:01.249481 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:17:01.249700 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:17:02.635050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1710347387.mount: Deactivated successfully. Jul 2 00:17:04.827809 containerd[1452]: time="2024-07-02T00:17:04.827704860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:04.830840 containerd[1452]: time="2024-07-02T00:17:04.830781548Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.6: active requests=0, bytes read=28409334" Jul 2 00:17:04.834149 containerd[1452]: time="2024-07-02T00:17:04.834068250Z" level=info msg="ImageCreate event name:\"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:04.839586 containerd[1452]: time="2024-07-02T00:17:04.839531227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:04.840388 containerd[1452]: time="2024-07-02T00:17:04.840337687Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.6\" with image id 
\"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\", repo tag \"registry.k8s.io/kube-proxy:v1.29.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90\", size \"28408353\" in 5.125630428s" Jul 2 00:17:04.840388 containerd[1452]: time="2024-07-02T00:17:04.840370618Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\" returns image reference \"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\"" Jul 2 00:17:04.994802 containerd[1452]: time="2024-07-02T00:17:04.994285601Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jul 2 00:17:05.723590 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3500656941.mount: Deactivated successfully. Jul 2 00:17:08.033869 containerd[1452]: time="2024-07-02T00:17:08.033789964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:08.041403 containerd[1452]: time="2024-07-02T00:17:08.041328007Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jul 2 00:17:08.052207 containerd[1452]: time="2024-07-02T00:17:08.052130519Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:08.071183 containerd[1452]: time="2024-07-02T00:17:08.070990496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:08.073465 containerd[1452]: time="2024-07-02T00:17:08.073378548Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag 
\"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 3.079018151s" Jul 2 00:17:08.073465 containerd[1452]: time="2024-07-02T00:17:08.073443918Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jul 2 00:17:08.098457 containerd[1452]: time="2024-07-02T00:17:08.098162626Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jul 2 00:17:08.606085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1470884816.mount: Deactivated successfully. Jul 2 00:17:08.613598 containerd[1452]: time="2024-07-02T00:17:08.613546323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:08.614893 containerd[1452]: time="2024-07-02T00:17:08.614848593Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jul 2 00:17:08.619425 containerd[1452]: time="2024-07-02T00:17:08.619356535Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:08.622125 containerd[1452]: time="2024-07-02T00:17:08.622074252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:08.622922 containerd[1452]: time="2024-07-02T00:17:08.622869281Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest 
\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 524.662804ms" Jul 2 00:17:08.622922 containerd[1452]: time="2024-07-02T00:17:08.622914734Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jul 2 00:17:08.647388 containerd[1452]: time="2024-07-02T00:17:08.647347618Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jul 2 00:17:09.495949 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2056269718.mount: Deactivated successfully. Jul 2 00:17:11.262194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 2 00:17:11.271705 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:17:11.431047 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:17:11.436902 (kubelet)[2037]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:17:11.833786 kubelet[2037]: E0702 00:17:11.833690 2037 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:17:11.839097 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:17:11.839324 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 2 00:17:13.324689 containerd[1452]: time="2024-07-02T00:17:13.324625384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:13.325551 containerd[1452]: time="2024-07-02T00:17:13.325496011Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Jul 2 00:17:13.326975 containerd[1452]: time="2024-07-02T00:17:13.326928587Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:13.332046 containerd[1452]: time="2024-07-02T00:17:13.331954807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:17:13.333437 containerd[1452]: time="2024-07-02T00:17:13.333388744Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 4.685999078s" Jul 2 00:17:13.333500 containerd[1452]: time="2024-07-02T00:17:13.333436783Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jul 2 00:17:16.100767 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:17:16.113641 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:17:16.134355 systemd[1]: Reloading requested from client PID 2130 ('systemctl') (unit session-9.scope)... Jul 2 00:17:16.134373 systemd[1]: Reloading... 
Jul 2 00:17:16.226364 zram_generator::config[2170]: No configuration found. Jul 2 00:17:16.709588 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:17:16.797819 systemd[1]: Reloading finished in 662 ms. Jul 2 00:17:16.854287 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 2 00:17:16.854559 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 2 00:17:16.854838 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:17:16.857657 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:17:17.016054 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:17:17.022908 (kubelet)[2216]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 00:17:17.068223 kubelet[2216]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:17:17.068223 kubelet[2216]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 00:17:17.068223 kubelet[2216]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 2 00:17:17.068667 kubelet[2216]: I0702 00:17:17.068254 2216 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 00:17:17.522605 kubelet[2216]: I0702 00:17:17.522551 2216 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jul 2 00:17:17.522605 kubelet[2216]: I0702 00:17:17.522592 2216 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 00:17:17.522900 kubelet[2216]: I0702 00:17:17.522872 2216 server.go:919] "Client rotation is on, will bootstrap in background" Jul 2 00:17:17.541974 kubelet[2216]: E0702 00:17:17.541919 2216 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.74:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.74:6443: connect: connection refused Jul 2 00:17:17.544254 kubelet[2216]: I0702 00:17:17.544115 2216 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:17:17.558383 kubelet[2216]: I0702 00:17:17.558336 2216 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 00:17:17.559635 kubelet[2216]: I0702 00:17:17.559597 2216 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 00:17:17.559857 kubelet[2216]: I0702 00:17:17.559817 2216 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 00:17:17.560258 kubelet[2216]: I0702 00:17:17.560225 2216 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 00:17:17.560258 kubelet[2216]: I0702 00:17:17.560242 2216 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 00:17:17.560433 kubelet[2216]: I0702 
00:17:17.560405 2216 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:17:17.560529 kubelet[2216]: I0702 00:17:17.560499 2216 kubelet.go:396] "Attempting to sync node with API server" Jul 2 00:17:17.560529 kubelet[2216]: I0702 00:17:17.560517 2216 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 00:17:17.560594 kubelet[2216]: I0702 00:17:17.560544 2216 kubelet.go:312] "Adding apiserver pod source" Jul 2 00:17:17.560594 kubelet[2216]: I0702 00:17:17.560560 2216 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 00:17:17.561408 kubelet[2216]: W0702 00:17:17.561338 2216 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Jul 2 00:17:17.561408 kubelet[2216]: W0702 00:17:17.561374 2216 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.74:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Jul 2 00:17:17.561408 kubelet[2216]: E0702 00:17:17.561410 2216 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Jul 2 00:17:17.561408 kubelet[2216]: E0702 00:17:17.561435 2216 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.74:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Jul 2 00:17:17.562668 kubelet[2216]: I0702 00:17:17.562627 2216 kuberuntime_manager.go:258] "Container runtime initialized" 
containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 00:17:17.565272 kubelet[2216]: I0702 00:17:17.565232 2216 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 00:17:17.566393 kubelet[2216]: W0702 00:17:17.566358 2216 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 2 00:17:17.567698 kubelet[2216]: I0702 00:17:17.567670 2216 server.go:1256] "Started kubelet" Jul 2 00:17:17.568322 kubelet[2216]: I0702 00:17:17.567999 2216 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 00:17:17.568322 kubelet[2216]: I0702 00:17:17.568197 2216 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 00:17:17.568322 kubelet[2216]: I0702 00:17:17.568273 2216 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 00:17:17.569043 kubelet[2216]: I0702 00:17:17.569016 2216 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 00:17:17.569767 kubelet[2216]: I0702 00:17:17.569178 2216 server.go:461] "Adding debug handlers to kubelet server" Jul 2 00:17:17.571599 kubelet[2216]: I0702 00:17:17.571560 2216 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 00:17:17.572419 kubelet[2216]: I0702 00:17:17.572393 2216 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 00:17:17.572486 kubelet[2216]: I0702 00:17:17.572472 2216 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 00:17:17.574676 kubelet[2216]: W0702 00:17:17.574527 2216 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Jul 2 00:17:17.574676 kubelet[2216]: E0702 00:17:17.574618 2216 
reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Jul 2 00:17:17.575081 kubelet[2216]: I0702 00:17:17.575055 2216 factory.go:221] Registration of the systemd container factory successfully Jul 2 00:17:17.575190 kubelet[2216]: I0702 00:17:17.575160 2216 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 00:17:17.577226 kubelet[2216]: I0702 00:17:17.577201 2216 factory.go:221] Registration of the containerd container factory successfully Jul 2 00:17:17.577657 kubelet[2216]: E0702 00:17:17.577632 2216 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 00:17:17.582746 kubelet[2216]: E0702 00:17:17.582538 2216 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="200ms" Jul 2 00:17:17.583752 kubelet[2216]: E0702 00:17:17.583699 2216 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.74:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.74:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17de3d3c74b7b4c1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-07-02 00:17:17.567636673 +0000 UTC 
m=+0.539741671,LastTimestamp:2024-07-02 00:17:17.567636673 +0000 UTC m=+0.539741671,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 2 00:17:17.594320 kubelet[2216]: I0702 00:17:17.593159 2216 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 00:17:17.594320 kubelet[2216]: I0702 00:17:17.593179 2216 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 00:17:17.594320 kubelet[2216]: I0702 00:17:17.593194 2216 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:17:17.594657 kubelet[2216]: I0702 00:17:17.594624 2216 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 00:17:17.596001 kubelet[2216]: I0702 00:17:17.595964 2216 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 00:17:17.596001 kubelet[2216]: I0702 00:17:17.595995 2216 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 00:17:17.596087 kubelet[2216]: I0702 00:17:17.596012 2216 kubelet.go:2329] "Starting kubelet main sync loop" Jul 2 00:17:17.596087 kubelet[2216]: E0702 00:17:17.596060 2216 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 00:17:17.596584 kubelet[2216]: W0702 00:17:17.596538 2216 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Jul 2 00:17:17.596584 kubelet[2216]: E0702 00:17:17.596585 2216 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection 
refused Jul 2 00:17:17.673877 kubelet[2216]: I0702 00:17:17.673839 2216 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 00:17:17.674312 kubelet[2216]: E0702 00:17:17.674257 2216 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost" Jul 2 00:17:17.696566 kubelet[2216]: E0702 00:17:17.696509 2216 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 00:17:17.783627 kubelet[2216]: E0702 00:17:17.783497 2216 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="400ms" Jul 2 00:17:17.875970 kubelet[2216]: I0702 00:17:17.875938 2216 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 00:17:17.876268 kubelet[2216]: E0702 00:17:17.876238 2216 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost" Jul 2 00:17:17.897465 kubelet[2216]: E0702 00:17:17.897413 2216 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 00:17:18.184797 kubelet[2216]: E0702 00:17:18.184651 2216 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="800ms" Jul 2 00:17:18.278513 kubelet[2216]: I0702 00:17:18.278467 2216 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 00:17:18.278974 kubelet[2216]: E0702 00:17:18.278934 2216 
kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost" Jul 2 00:17:18.292966 kubelet[2216]: I0702 00:17:18.292899 2216 policy_none.go:49] "None policy: Start" Jul 2 00:17:18.293833 kubelet[2216]: I0702 00:17:18.293800 2216 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 00:17:18.293906 kubelet[2216]: I0702 00:17:18.293850 2216 state_mem.go:35] "Initializing new in-memory state store" Jul 2 00:17:18.298047 kubelet[2216]: E0702 00:17:18.298010 2216 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 00:17:18.374791 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 2 00:17:18.393452 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 2 00:17:18.397970 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 2 00:17:18.415877 kubelet[2216]: I0702 00:17:18.415705 2216 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 00:17:18.416074 kubelet[2216]: I0702 00:17:18.416013 2216 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 00:17:18.417175 kubelet[2216]: E0702 00:17:18.417147 2216 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 2 00:17:18.471118 kubelet[2216]: W0702 00:17:18.471012 2216 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Jul 2 00:17:18.471118 kubelet[2216]: E0702 00:17:18.471050 2216 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Jul 2 00:17:18.686820 kubelet[2216]: W0702 00:17:18.686749 2216 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.74:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Jul 2 00:17:18.686960 kubelet[2216]: E0702 00:17:18.686843 2216 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.74:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Jul 2 00:17:18.695058 kubelet[2216]: W0702 00:17:18.695011 2216 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get 
"https://10.0.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Jul 2 00:17:18.695058 kubelet[2216]: E0702 00:17:18.695048 2216 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Jul 2 00:17:18.986190 kubelet[2216]: E0702 00:17:18.986117 2216 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="1.6s" Jul 2 00:17:19.026777 kubelet[2216]: W0702 00:17:19.026730 2216 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Jul 2 00:17:19.026777 kubelet[2216]: E0702 00:17:19.026772 2216 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Jul 2 00:17:19.081126 kubelet[2216]: I0702 00:17:19.081079 2216 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 00:17:19.081465 kubelet[2216]: E0702 00:17:19.081447 2216 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost" Jul 2 00:17:19.098860 kubelet[2216]: I0702 00:17:19.098769 2216 topology_manager.go:215] "Topology Admit Handler" 
podUID="54ffb28a53cc3bbce191b18321a2f79f" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 00:17:19.100146 kubelet[2216]: I0702 00:17:19.100125 2216 topology_manager.go:215] "Topology Admit Handler" podUID="42b008e702ec2a5b396aebedf13804b4" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 00:17:19.100934 kubelet[2216]: I0702 00:17:19.100890 2216 topology_manager.go:215] "Topology Admit Handler" podUID="593d08bacb1d5de22dcb8f5224a99e3c" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 00:17:19.107761 systemd[1]: Created slice kubepods-burstable-pod54ffb28a53cc3bbce191b18321a2f79f.slice - libcontainer container kubepods-burstable-pod54ffb28a53cc3bbce191b18321a2f79f.slice. Jul 2 00:17:19.130240 systemd[1]: Created slice kubepods-burstable-pod42b008e702ec2a5b396aebedf13804b4.slice - libcontainer container kubepods-burstable-pod42b008e702ec2a5b396aebedf13804b4.slice. Jul 2 00:17:19.135104 systemd[1]: Created slice kubepods-burstable-pod593d08bacb1d5de22dcb8f5224a99e3c.slice - libcontainer container kubepods-burstable-pod593d08bacb1d5de22dcb8f5224a99e3c.slice. 
Jul 2 00:17:19.178178 kubelet[2216]: I0702 00:17:19.178127 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/593d08bacb1d5de22dcb8f5224a99e3c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"593d08bacb1d5de22dcb8f5224a99e3c\") " pod="kube-system/kube-scheduler-localhost" Jul 2 00:17:19.178178 kubelet[2216]: I0702 00:17:19.178181 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/54ffb28a53cc3bbce191b18321a2f79f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"54ffb28a53cc3bbce191b18321a2f79f\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:17:19.178397 kubelet[2216]: I0702 00:17:19.178205 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/54ffb28a53cc3bbce191b18321a2f79f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"54ffb28a53cc3bbce191b18321a2f79f\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:17:19.178397 kubelet[2216]: I0702 00:17:19.178230 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:17:19.178397 kubelet[2216]: I0702 00:17:19.178254 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:17:19.178397 kubelet[2216]: I0702 
00:17:19.178339 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:17:19.178536 kubelet[2216]: I0702 00:17:19.178405 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:17:19.178536 kubelet[2216]: I0702 00:17:19.178434 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:17:19.178536 kubelet[2216]: I0702 00:17:19.178461 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/54ffb28a53cc3bbce191b18321a2f79f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"54ffb28a53cc3bbce191b18321a2f79f\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:17:19.427900 kubelet[2216]: E0702 00:17:19.427842 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:17:19.428708 containerd[1452]: time="2024-07-02T00:17:19.428655889Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:54ffb28a53cc3bbce191b18321a2f79f,Namespace:kube-system,Attempt:0,}" Jul 2 00:17:19.432833 kubelet[2216]: E0702 00:17:19.432814 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:17:19.433263 containerd[1452]: time="2024-07-02T00:17:19.433219159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:42b008e702ec2a5b396aebedf13804b4,Namespace:kube-system,Attempt:0,}" Jul 2 00:17:19.437555 kubelet[2216]: E0702 00:17:19.437519 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:17:19.437995 containerd[1452]: time="2024-07-02T00:17:19.437960117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:593d08bacb1d5de22dcb8f5224a99e3c,Namespace:kube-system,Attempt:0,}" Jul 2 00:17:19.590990 kubelet[2216]: E0702 00:17:19.590905 2216 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.74:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.74:6443: connect: connection refused Jul 2 00:17:20.177350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1443075228.mount: Deactivated successfully. 
Jul 2 00:17:20.291595 containerd[1452]: time="2024-07-02T00:17:20.291491486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:17:20.301380 containerd[1452]: time="2024-07-02T00:17:20.301307935Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 2 00:17:20.302538 containerd[1452]: time="2024-07-02T00:17:20.302479622Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:17:20.303687 containerd[1452]: time="2024-07-02T00:17:20.303649685Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:17:20.304691 containerd[1452]: time="2024-07-02T00:17:20.304643921Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:17:20.305657 containerd[1452]: time="2024-07-02T00:17:20.305627217Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 00:17:20.306794 containerd[1452]: time="2024-07-02T00:17:20.306755513Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 00:17:20.308765 containerd[1452]: time="2024-07-02T00:17:20.308708379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:17:20.310891 
containerd[1452]: time="2024-07-02T00:17:20.310850678Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 872.798409ms" Jul 2 00:17:20.311335 containerd[1452]: time="2024-07-02T00:17:20.311311774Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 882.521695ms" Jul 2 00:17:20.314527 containerd[1452]: time="2024-07-02T00:17:20.314479727Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 881.144673ms" Jul 2 00:17:20.483205 containerd[1452]: time="2024-07-02T00:17:20.482872884Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:17:20.483205 containerd[1452]: time="2024-07-02T00:17:20.482922917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:17:20.483205 containerd[1452]: time="2024-07-02T00:17:20.482938346Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:17:20.483205 containerd[1452]: time="2024-07-02T00:17:20.482949156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:17:20.483659 containerd[1452]: time="2024-07-02T00:17:20.482895516Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:17:20.484150 containerd[1452]: time="2024-07-02T00:17:20.484005617Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:17:20.484150 containerd[1452]: time="2024-07-02T00:17:20.484057724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:17:20.484150 containerd[1452]: time="2024-07-02T00:17:20.484089343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:17:20.484150 containerd[1452]: time="2024-07-02T00:17:20.484103319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:17:20.484912 containerd[1452]: time="2024-07-02T00:17:20.484718652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:17:20.484912 containerd[1452]: time="2024-07-02T00:17:20.484746252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:17:20.484912 containerd[1452]: time="2024-07-02T00:17:20.484756912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:17:20.503454 systemd[1]: Started cri-containerd-70c3bcfdbe438b4faaf2a005aa8665ab0bb5ead2e4e4ca571b119aa34ef6ccb4.scope - libcontainer container 70c3bcfdbe438b4faaf2a005aa8665ab0bb5ead2e4e4ca571b119aa34ef6ccb4. 
Jul 2 00:17:20.509155 systemd[1]: Started cri-containerd-71302626372ba3e362155714f52221aa74f630c7712a31ee8bc1554fbabd9b2d.scope - libcontainer container 71302626372ba3e362155714f52221aa74f630c7712a31ee8bc1554fbabd9b2d.
Jul 2 00:17:20.512047 systemd[1]: Started cri-containerd-b69046208d6231cffc00ec63bd1456d5bc2995571d186f51cc120703180f2480.scope - libcontainer container b69046208d6231cffc00ec63bd1456d5bc2995571d186f51cc120703180f2480.
Jul 2 00:17:20.562091 containerd[1452]: time="2024-07-02T00:17:20.562038233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:54ffb28a53cc3bbce191b18321a2f79f,Namespace:kube-system,Attempt:0,} returns sandbox id \"70c3bcfdbe438b4faaf2a005aa8665ab0bb5ead2e4e4ca571b119aa34ef6ccb4\""
Jul 2 00:17:20.562262 containerd[1452]: time="2024-07-02T00:17:20.562232574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:42b008e702ec2a5b396aebedf13804b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"71302626372ba3e362155714f52221aa74f630c7712a31ee8bc1554fbabd9b2d\""
Jul 2 00:17:20.564368 containerd[1452]: time="2024-07-02T00:17:20.564275788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:593d08bacb1d5de22dcb8f5224a99e3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"b69046208d6231cffc00ec63bd1456d5bc2995571d186f51cc120703180f2480\""
Jul 2 00:17:20.564639 kubelet[2216]: E0702 00:17:20.564604 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:17:20.565071 kubelet[2216]: E0702 00:17:20.564915 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:17:20.566382 kubelet[2216]: E0702 00:17:20.566354 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:17:20.570257 containerd[1452]: time="2024-07-02T00:17:20.570107619Z" level=info msg="CreateContainer within sandbox \"71302626372ba3e362155714f52221aa74f630c7712a31ee8bc1554fbabd9b2d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 2 00:17:20.570341 containerd[1452]: time="2024-07-02T00:17:20.570319313Z" level=info msg="CreateContainer within sandbox \"b69046208d6231cffc00ec63bd1456d5bc2995571d186f51cc120703180f2480\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 2 00:17:20.571513 containerd[1452]: time="2024-07-02T00:17:20.571478265Z" level=info msg="CreateContainer within sandbox \"70c3bcfdbe438b4faaf2a005aa8665ab0bb5ead2e4e4ca571b119aa34ef6ccb4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 2 00:17:20.587403 kubelet[2216]: E0702 00:17:20.587358 2216 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="3.2s"
Jul 2 00:17:20.605445 containerd[1452]: time="2024-07-02T00:17:20.605288705Z" level=info msg="CreateContainer within sandbox \"b69046208d6231cffc00ec63bd1456d5bc2995571d186f51cc120703180f2480\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c9fc96a95ce7a8b6989c5d918a84dc25a0dce15b714a0c243131fdbadcb10c33\""
Jul 2 00:17:20.605875 containerd[1452]: time="2024-07-02T00:17:20.605845759Z" level=info msg="StartContainer for \"c9fc96a95ce7a8b6989c5d918a84dc25a0dce15b714a0c243131fdbadcb10c33\""
Jul 2 00:17:20.609737 containerd[1452]: time="2024-07-02T00:17:20.609704956Z" level=info msg="CreateContainer within sandbox \"71302626372ba3e362155714f52221aa74f630c7712a31ee8bc1554fbabd9b2d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1d86ce21784bd759cb362be4f91215c22899f712428ee6e5f754248e877d0a70\""
Jul 2 00:17:20.610176 containerd[1452]: time="2024-07-02T00:17:20.610135115Z" level=info msg="StartContainer for \"1d86ce21784bd759cb362be4f91215c22899f712428ee6e5f754248e877d0a70\""
Jul 2 00:17:20.613451 containerd[1452]: time="2024-07-02T00:17:20.613420135Z" level=info msg="CreateContainer within sandbox \"70c3bcfdbe438b4faaf2a005aa8665ab0bb5ead2e4e4ca571b119aa34ef6ccb4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"591cc816299af8c1fbc22b80aca45ff37078110f760bbfccdeafe6dc19dcbd3d\""
Jul 2 00:17:20.613850 containerd[1452]: time="2024-07-02T00:17:20.613814999Z" level=info msg="StartContainer for \"591cc816299af8c1fbc22b80aca45ff37078110f760bbfccdeafe6dc19dcbd3d\""
Jul 2 00:17:20.637428 systemd[1]: Started cri-containerd-c9fc96a95ce7a8b6989c5d918a84dc25a0dce15b714a0c243131fdbadcb10c33.scope - libcontainer container c9fc96a95ce7a8b6989c5d918a84dc25a0dce15b714a0c243131fdbadcb10c33.
Jul 2 00:17:20.641247 systemd[1]: Started cri-containerd-1d86ce21784bd759cb362be4f91215c22899f712428ee6e5f754248e877d0a70.scope - libcontainer container 1d86ce21784bd759cb362be4f91215c22899f712428ee6e5f754248e877d0a70.
Jul 2 00:17:20.643422 systemd[1]: Started cri-containerd-591cc816299af8c1fbc22b80aca45ff37078110f760bbfccdeafe6dc19dcbd3d.scope - libcontainer container 591cc816299af8c1fbc22b80aca45ff37078110f760bbfccdeafe6dc19dcbd3d.
Jul 2 00:17:20.683222 kubelet[2216]: I0702 00:17:20.683086 2216 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jul 2 00:17:20.684303 kubelet[2216]: E0702 00:17:20.684214 2216 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost"
Jul 2 00:17:20.892653 containerd[1452]: time="2024-07-02T00:17:20.892501824Z" level=info msg="StartContainer for \"1d86ce21784bd759cb362be4f91215c22899f712428ee6e5f754248e877d0a70\" returns successfully"
Jul 2 00:17:20.892653 containerd[1452]: time="2024-07-02T00:17:20.892543542Z" level=info msg="StartContainer for \"591cc816299af8c1fbc22b80aca45ff37078110f760bbfccdeafe6dc19dcbd3d\" returns successfully"
Jul 2 00:17:20.892653 containerd[1452]: time="2024-07-02T00:17:20.892499430Z" level=info msg="StartContainer for \"c9fc96a95ce7a8b6989c5d918a84dc25a0dce15b714a0c243131fdbadcb10c33\" returns successfully"
Jul 2 00:17:21.564511 kubelet[2216]: I0702 00:17:21.564181 2216 apiserver.go:52] "Watching apiserver"
Jul 2 00:17:21.572940 kubelet[2216]: I0702 00:17:21.572895 2216 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jul 2 00:17:21.613133 kubelet[2216]: E0702 00:17:21.613080 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:17:21.614418 kubelet[2216]: E0702 00:17:21.614393 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:17:21.616255 kubelet[2216]: E0702 00:17:21.616223 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:17:21.927593 kubelet[2216]: E0702 00:17:21.927457 2216 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
Jul 2 00:17:22.273157 kubelet[2216]: E0702 00:17:22.273122 2216 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
Jul 2 00:17:22.618446 kubelet[2216]: E0702 00:17:22.618164 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:17:22.618446 kubelet[2216]: E0702 00:17:22.618422 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:17:22.619112 kubelet[2216]: E0702 00:17:22.618422 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:17:22.704226 kubelet[2216]: E0702 00:17:22.704170 2216 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
Jul 2 00:17:23.615949 kubelet[2216]: E0702 00:17:23.615917 2216 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
Jul 2 00:17:23.619025 kubelet[2216]: E0702 00:17:23.619003 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:17:23.619333 update_engine[1438]: I0702 00:17:23.619122 1438 update_attempter.cc:509] Updating boot flags...
Jul 2 00:17:23.692333 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2500)
Jul 2 00:17:23.725402 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2504)
Jul 2 00:17:23.753349 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2504)
Jul 2 00:17:23.829740 kubelet[2216]: E0702 00:17:23.829693 2216 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jul 2 00:17:23.886082 kubelet[2216]: I0702 00:17:23.885940 2216 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jul 2 00:17:23.891781 kubelet[2216]: I0702 00:17:23.891728 2216 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Jul 2 00:17:24.621780 systemd[1]: Reloading requested from client PID 2510 ('systemctl') (unit session-9.scope)...
Jul 2 00:17:24.621796 systemd[1]: Reloading...
Jul 2 00:17:24.701396 zram_generator::config[2547]: No configuration found.
Jul 2 00:17:24.811723 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:17:24.904096 systemd[1]: Reloading finished in 281 ms.
Jul 2 00:17:24.953665 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:17:24.972962 systemd[1]: kubelet.service: Deactivated successfully.
Jul 2 00:17:24.973230 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:17:24.973286 systemd[1]: kubelet.service: Consumed 1.059s CPU time, 117.6M memory peak, 0B memory swap peak.
Jul 2 00:17:24.985762 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:17:25.134960 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:17:25.140219 (kubelet)[2592]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 2 00:17:25.189119 kubelet[2592]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 00:17:25.189119 kubelet[2592]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 2 00:17:25.189119 kubelet[2592]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 00:17:25.189119 kubelet[2592]: I0702 00:17:25.189063 2592 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 2 00:17:25.193870 kubelet[2592]: I0702 00:17:25.193844 2592 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Jul 2 00:17:25.193870 kubelet[2592]: I0702 00:17:25.193869 2592 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 2 00:17:25.194085 kubelet[2592]: I0702 00:17:25.194063 2592 server.go:919] "Client rotation is on, will bootstrap in background"
Jul 2 00:17:25.195931 kubelet[2592]: I0702 00:17:25.195913 2592 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 2 00:17:25.199732 kubelet[2592]: I0702 00:17:25.199700 2592 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 00:17:25.209119 kubelet[2592]: I0702 00:17:25.209079 2592 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 2 00:17:25.209461 kubelet[2592]: I0702 00:17:25.209431 2592 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 2 00:17:25.209700 kubelet[2592]: I0702 00:17:25.209674 2592 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jul 2 00:17:25.209826 kubelet[2592]: I0702 00:17:25.209712 2592 topology_manager.go:138] "Creating topology manager with none policy"
Jul 2 00:17:25.209826 kubelet[2592]: I0702 00:17:25.209726 2592 container_manager_linux.go:301] "Creating device plugin manager"
Jul 2 00:17:25.209826 kubelet[2592]: I0702 00:17:25.209768 2592 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 00:17:25.209931 kubelet[2592]: I0702 00:17:25.209886 2592 kubelet.go:396] "Attempting to sync node with API server"
Jul 2 00:17:25.209931 kubelet[2592]: I0702 00:17:25.209902 2592 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 2 00:17:25.209996 kubelet[2592]: I0702 00:17:25.209936 2592 kubelet.go:312] "Adding apiserver pod source"
Jul 2 00:17:25.209996 kubelet[2592]: I0702 00:17:25.209961 2592 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 2 00:17:25.213144 kubelet[2592]: I0702 00:17:25.210587 2592 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Jul 2 00:17:25.213144 kubelet[2592]: I0702 00:17:25.210810 2592 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 2 00:17:25.213144 kubelet[2592]: I0702 00:17:25.211266 2592 server.go:1256] "Started kubelet"
Jul 2 00:17:25.213144 kubelet[2592]: I0702 00:17:25.212385 2592 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jul 2 00:17:25.213727 kubelet[2592]: I0702 00:17:25.213507 2592 server.go:461] "Adding debug handlers to kubelet server"
Jul 2 00:17:25.221899 kubelet[2592]: I0702 00:17:25.215733 2592 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 2 00:17:25.221899 kubelet[2592]: I0702 00:17:25.212422 2592 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 2 00:17:25.221899 kubelet[2592]: I0702 00:17:25.219215 2592 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 2 00:17:25.222383 kubelet[2592]: E0702 00:17:25.222347 2592 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 2 00:17:25.222452 kubelet[2592]: I0702 00:17:25.222421 2592 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jul 2 00:17:25.224210 kubelet[2592]: I0702 00:17:25.222530 2592 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jul 2 00:17:25.224354 kubelet[2592]: I0702 00:17:25.224326 2592 reconciler_new.go:29] "Reconciler: start to sync state"
Jul 2 00:17:25.225174 kubelet[2592]: E0702 00:17:25.225138 2592 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 2 00:17:25.230513 kubelet[2592]: I0702 00:17:25.230460 2592 factory.go:221] Registration of the containerd container factory successfully
Jul 2 00:17:25.230513 kubelet[2592]: I0702 00:17:25.230483 2592 factory.go:221] Registration of the systemd container factory successfully
Jul 2 00:17:25.230684 kubelet[2592]: I0702 00:17:25.230583 2592 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 2 00:17:25.240796 kubelet[2592]: I0702 00:17:25.240749 2592 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 2 00:17:25.242652 sudo[2618]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jul 2 00:17:25.242961 sudo[2618]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Jul 2 00:17:25.245391 kubelet[2592]: I0702 00:17:25.243644 2592 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 2 00:17:25.245391 kubelet[2592]: I0702 00:17:25.243683 2592 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 2 00:17:25.245391 kubelet[2592]: I0702 00:17:25.243715 2592 kubelet.go:2329] "Starting kubelet main sync loop"
Jul 2 00:17:25.245391 kubelet[2592]: E0702 00:17:25.243779 2592 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 2 00:17:25.278147 kubelet[2592]: I0702 00:17:25.277845 2592 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 2 00:17:25.278147 kubelet[2592]: I0702 00:17:25.277872 2592 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 2 00:17:25.278147 kubelet[2592]: I0702 00:17:25.277890 2592 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 00:17:25.278147 kubelet[2592]: I0702 00:17:25.278052 2592 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 2 00:17:25.278147 kubelet[2592]: I0702 00:17:25.278073 2592 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 2 00:17:25.278147 kubelet[2592]: I0702 00:17:25.278082 2592 policy_none.go:49] "None policy: Start"
Jul 2 00:17:25.279802 kubelet[2592]: I0702 00:17:25.278921 2592 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 2 00:17:25.279802 kubelet[2592]: I0702 00:17:25.278946 2592 state_mem.go:35] "Initializing new in-memory state store"
Jul 2 00:17:25.279802 kubelet[2592]: I0702 00:17:25.279106 2592 state_mem.go:75] "Updated machine memory state"
Jul 2 00:17:25.284158 kubelet[2592]: I0702 00:17:25.284124 2592 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 2 00:17:25.284990 kubelet[2592]: I0702 00:17:25.284964 2592 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 2 00:17:25.328321 kubelet[2592]: I0702 00:17:25.328254 2592 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jul 2 00:17:25.336394 kubelet[2592]: I0702 00:17:25.336353 2592 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Jul 2 00:17:25.336540 kubelet[2592]: I0702 00:17:25.336452 2592 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Jul 2 00:17:25.344625 kubelet[2592]: I0702 00:17:25.344568 2592 topology_manager.go:215] "Topology Admit Handler" podUID="593d08bacb1d5de22dcb8f5224a99e3c" podNamespace="kube-system" podName="kube-scheduler-localhost"
Jul 2 00:17:25.345197 kubelet[2592]: I0702 00:17:25.345152 2592 topology_manager.go:215] "Topology Admit Handler" podUID="54ffb28a53cc3bbce191b18321a2f79f" podNamespace="kube-system" podName="kube-apiserver-localhost"
Jul 2 00:17:25.345522 kubelet[2592]: I0702 00:17:25.345260 2592 topology_manager.go:215] "Topology Admit Handler" podUID="42b008e702ec2a5b396aebedf13804b4" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Jul 2 00:17:25.426062 kubelet[2592]: I0702 00:17:25.426013 2592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 00:17:25.426062 kubelet[2592]: I0702 00:17:25.426064 2592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/54ffb28a53cc3bbce191b18321a2f79f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"54ffb28a53cc3bbce191b18321a2f79f\") " pod="kube-system/kube-apiserver-localhost"
Jul 2 00:17:25.426254 kubelet[2592]: I0702 00:17:25.426085 2592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/54ffb28a53cc3bbce191b18321a2f79f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"54ffb28a53cc3bbce191b18321a2f79f\") " pod="kube-system/kube-apiserver-localhost"
Jul 2 00:17:25.426254 kubelet[2592]: I0702 00:17:25.426107 2592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/54ffb28a53cc3bbce191b18321a2f79f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"54ffb28a53cc3bbce191b18321a2f79f\") " pod="kube-system/kube-apiserver-localhost"
Jul 2 00:17:25.426254 kubelet[2592]: I0702 00:17:25.426128 2592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 00:17:25.426254 kubelet[2592]: I0702 00:17:25.426208 2592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 00:17:25.426254 kubelet[2592]: I0702 00:17:25.426247 2592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 00:17:25.426411 kubelet[2592]: I0702 00:17:25.426268 2592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 00:17:25.426411 kubelet[2592]: I0702 00:17:25.426306 2592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/593d08bacb1d5de22dcb8f5224a99e3c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"593d08bacb1d5de22dcb8f5224a99e3c\") " pod="kube-system/kube-scheduler-localhost"
Jul 2 00:17:25.654513 kubelet[2592]: E0702 00:17:25.654466 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:17:25.655096 kubelet[2592]: E0702 00:17:25.654997 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:17:25.655096 kubelet[2592]: E0702 00:17:25.655055 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:17:25.744536 sudo[2618]: pam_unix(sudo:session): session closed for user root
Jul 2 00:17:26.211422 kubelet[2592]: I0702 00:17:26.211348 2592 apiserver.go:52] "Watching apiserver"
Jul 2 00:17:26.223121 kubelet[2592]: I0702 00:17:26.223061 2592 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jul 2 00:17:26.257231 kubelet[2592]: E0702 00:17:26.257193 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:17:26.258055 kubelet[2592]: E0702 00:17:26.257927 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:17:26.258055 kubelet[2592]: E0702 00:17:26.258048 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:17:26.303959 kubelet[2592]: I0702 00:17:26.303400 2592 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.303328454 podStartE2EDuration="1.303328454s" podCreationTimestamp="2024-07-02 00:17:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:17:26.30309035 +0000 UTC m=+1.158812902" watchObservedRunningTime="2024-07-02 00:17:26.303328454 +0000 UTC m=+1.159051006"
Jul 2 00:17:26.303959 kubelet[2592]: I0702 00:17:26.303518 2592 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.303499613 podStartE2EDuration="1.303499613s" podCreationTimestamp="2024-07-02 00:17:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:17:26.293870191 +0000 UTC m=+1.149592733" watchObservedRunningTime="2024-07-02 00:17:26.303499613 +0000 UTC m=+1.159222155"
Jul 2 00:17:26.316225 kubelet[2592]: I0702 00:17:26.316180 2592 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.316142179 podStartE2EDuration="1.316142179s" podCreationTimestamp="2024-07-02 00:17:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:17:26.309898916 +0000 UTC m=+1.165621458" watchObservedRunningTime="2024-07-02 00:17:26.316142179 +0000 UTC m=+1.171864721"
Jul 2 00:17:26.871920 sudo[1647]: pam_unix(sudo:session): session closed for user root
Jul 2 00:17:26.873628 sshd[1644]: pam_unix(sshd:session): session closed for user core
Jul 2 00:17:26.877584 systemd[1]: sshd@8-10.0.0.74:22-10.0.0.1:52136.service: Deactivated successfully.
Jul 2 00:17:26.879533 systemd[1]: session-9.scope: Deactivated successfully.
Jul 2 00:17:26.879740 systemd[1]: session-9.scope: Consumed 4.749s CPU time, 142.8M memory peak, 0B memory swap peak.
Jul 2 00:17:26.880189 systemd-logind[1433]: Session 9 logged out. Waiting for processes to exit.
Jul 2 00:17:26.881034 systemd-logind[1433]: Removed session 9.
Jul 2 00:17:27.259985 kubelet[2592]: E0702 00:17:27.259866 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:17:32.264981 kubelet[2592]: E0702 00:17:32.264949 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:17:32.268081 kubelet[2592]: E0702 00:17:32.268058 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:17:34.023340 kubelet[2592]: E0702 00:17:34.023270 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:17:34.269892 kubelet[2592]: E0702 00:17:34.269841 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:17:34.400131 kubelet[2592]: E0702 00:17:34.399960 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:17:35.271172 kubelet[2592]: E0702 00:17:35.271135 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:17:36.272474 kubelet[2592]: E0702 00:17:36.272439 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:17:39.064284 kubelet[2592]: I0702 00:17:39.064249 2592 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 2 00:17:39.064841 kubelet[2592]: I0702 00:17:39.064795 2592 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 2 00:17:39.064890 containerd[1452]: time="2024-07-02T00:17:39.064628289Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 2 00:17:39.615784 kubelet[2592]: I0702 00:17:39.615733 2592 topology_manager.go:215] "Topology Admit Handler" podUID="f19a5d25-9c90-487d-9854-f30b6f3abd0f" podNamespace="kube-system" podName="kube-proxy-49f5t"
Jul 2 00:17:39.622327 kubelet[2592]: I0702 00:17:39.619838 2592 topology_manager.go:215] "Topology Admit Handler" podUID="f49b30bb-29a7-49fa-a312-9f3044e3341a" podNamespace="kube-system" podName="cilium-rh8zf"
Jul 2 00:17:39.630245 systemd[1]: Created slice kubepods-besteffort-podf19a5d25_9c90_487d_9854_f30b6f3abd0f.slice - libcontainer container kubepods-besteffort-podf19a5d25_9c90_487d_9854_f30b6f3abd0f.slice.
Jul 2 00:17:39.647496 systemd[1]: Created slice kubepods-burstable-podf49b30bb_29a7_49fa_a312_9f3044e3341a.slice - libcontainer container kubepods-burstable-podf49b30bb_29a7_49fa_a312_9f3044e3341a.slice.
Jul 2 00:17:39.801226 kubelet[2592]: I0702 00:17:39.801174 2592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f19a5d25-9c90-487d-9854-f30b6f3abd0f-kube-proxy\") pod \"kube-proxy-49f5t\" (UID: \"f19a5d25-9c90-487d-9854-f30b6f3abd0f\") " pod="kube-system/kube-proxy-49f5t"
Jul 2 00:17:39.801226 kubelet[2592]: I0702 00:17:39.801216 2592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f49b30bb-29a7-49fa-a312-9f3044e3341a-etc-cni-netd\") pod \"cilium-rh8zf\" (UID: \"f49b30bb-29a7-49fa-a312-9f3044e3341a\") " pod="kube-system/cilium-rh8zf"
Jul 2 00:17:39.801226 kubelet[2592]: I0702 00:17:39.801236 2592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f49b30bb-29a7-49fa-a312-9f3044e3341a-xtables-lock\") pod \"cilium-rh8zf\" (UID: \"f49b30bb-29a7-49fa-a312-9f3044e3341a\") " pod="kube-system/cilium-rh8zf"
Jul 2 00:17:39.801438 kubelet[2592]: I0702 00:17:39.801255 2592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f49b30bb-29a7-49fa-a312-9f3044e3341a-cilium-run\") pod \"cilium-rh8zf\" (UID: \"f49b30bb-29a7-49fa-a312-9f3044e3341a\") " pod="kube-system/cilium-rh8zf"
Jul 2 00:17:39.801438 kubelet[2592]: I0702 00:17:39.801285 2592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f19a5d25-9c90-487d-9854-f30b6f3abd0f-lib-modules\") pod \"kube-proxy-49f5t\" (UID: \"f19a5d25-9c90-487d-9854-f30b6f3abd0f\") " pod="kube-system/kube-proxy-49f5t"
Jul 2 00:17:39.801438 kubelet[2592]: I0702 00:17:39.801317 2592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f49b30bb-29a7-49fa-a312-9f3044e3341a-hostproc\") pod \"cilium-rh8zf\" (UID: \"f49b30bb-29a7-49fa-a312-9f3044e3341a\") " pod="kube-system/cilium-rh8zf"
Jul 2 00:17:39.801438 kubelet[2592]: I0702 00:17:39.801335 2592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f49b30bb-29a7-49fa-a312-9f3044e3341a-host-proc-sys-net\") pod \"cilium-rh8zf\" (UID: \"f49b30bb-29a7-49fa-a312-9f3044e3341a\") " pod="kube-system/cilium-rh8zf"
Jul 2 00:17:39.801438 kubelet[2592]: I0702 00:17:39.801356 2592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f49b30bb-29a7-49fa-a312-9f3044e3341a-host-proc-sys-kernel\") pod \"cilium-rh8zf\" (UID: \"f49b30bb-29a7-49fa-a312-9f3044e3341a\") " pod="kube-system/cilium-rh8zf"
Jul 2 00:17:39.801438 kubelet[2592]: I0702 00:17:39.801374 2592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f49b30bb-29a7-49fa-a312-9f3044e3341a-cilium-cgroup\") pod \"cilium-rh8zf\" (UID: \"f49b30bb-29a7-49fa-a312-9f3044e3341a\") " pod="kube-system/cilium-rh8zf"
Jul 2 00:17:39.801578 kubelet[2592]: I0702 00:17:39.801391 2592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f49b30bb-29a7-49fa-a312-9f3044e3341a-cni-path\") pod \"cilium-rh8zf\" (UID: \"f49b30bb-29a7-49fa-a312-9f3044e3341a\") " pod="kube-system/cilium-rh8zf"
Jul 2 00:17:39.801578 kubelet[2592]: I0702 00:17:39.801408 2592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f49b30bb-29a7-49fa-a312-9f3044e3341a-cilium-config-path\") pod \"cilium-rh8zf\" (UID: \"f49b30bb-29a7-49fa-a312-9f3044e3341a\") " pod="kube-system/cilium-rh8zf"
Jul 2 00:17:39.801578 kubelet[2592]: I0702 00:17:39.801431 2592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f49b30bb-29a7-49fa-a312-9f3044e3341a-hubble-tls\") pod \"cilium-rh8zf\" (UID: \"f49b30bb-29a7-49fa-a312-9f3044e3341a\") " pod="kube-system/cilium-rh8zf"
Jul 2 00:17:39.801578 kubelet[2592]: I0702 00:17:39.801456 2592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f19a5d25-9c90-487d-9854-f30b6f3abd0f-xtables-lock\") pod \"kube-proxy-49f5t\" (UID: \"f19a5d25-9c90-487d-9854-f30b6f3abd0f\") " pod="kube-system/kube-proxy-49f5t"
Jul 2 00:17:39.801578 kubelet[2592]: I0702 00:17:39.801480 2592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f49b30bb-29a7-49fa-a312-9f3044e3341a-bpf-maps\") pod \"cilium-rh8zf\" (UID: \"f49b30bb-29a7-49fa-a312-9f3044e3341a\") " pod="kube-system/cilium-rh8zf"
Jul 2 00:17:39.801578 kubelet[2592]: I0702 00:17:39.801503 2592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h45np\" (UniqueName: \"kubernetes.io/projected/f49b30bb-29a7-49fa-a312-9f3044e3341a-kube-api-access-h45np\") pod \"cilium-rh8zf\" (UID: \"f49b30bb-29a7-49fa-a312-9f3044e3341a\") " pod="kube-system/cilium-rh8zf"
Jul 2 00:17:39.801776 kubelet[2592]: I0702 00:17:39.801528 2592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnj97\" (UniqueName: \"kubernetes.io/projected/f19a5d25-9c90-487d-9854-f30b6f3abd0f-kube-api-access-rnj97\") pod \"kube-proxy-49f5t\" (UID: \"f19a5d25-9c90-487d-9854-f30b6f3abd0f\") " pod="kube-system/kube-proxy-49f5t"
Jul 2 00:17:39.801776 kubelet[2592]: I0702 00:17:39.801550 2592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f49b30bb-29a7-49fa-a312-9f3044e3341a-lib-modules\") pod \"cilium-rh8zf\" (UID: \"f49b30bb-29a7-49fa-a312-9f3044e3341a\") " pod="kube-system/cilium-rh8zf"
Jul 2 00:17:39.801776 kubelet[2592]: I0702 00:17:39.801575 2592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f49b30bb-29a7-49fa-a312-9f3044e3341a-clustermesh-secrets\") pod \"cilium-rh8zf\" (UID: \"f49b30bb-29a7-49fa-a312-9f3044e3341a\") " pod="kube-system/cilium-rh8zf"
Jul 2 00:17:39.942797 kubelet[2592]: E0702 00:17:39.942687 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:17:39.943861 containerd[1452]: time="2024-07-02T00:17:39.943448012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-49f5t,Uid:f19a5d25-9c90-487d-9854-f30b6f3abd0f,Namespace:kube-system,Attempt:0,}"
Jul 2 00:17:39.951204 kubelet[2592]: E0702 00:17:39.951147 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:17:39.951799 containerd[1452]: time="2024-07-02T00:17:39.951676253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rh8zf,Uid:f49b30bb-29a7-49fa-a312-9f3044e3341a,Namespace:kube-system,Attempt:0,}"
Jul 2 00:17:39.979787 containerd[1452]: time="2024-07-02T00:17:39.979383620Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:17:39.979787 containerd[1452]: time="2024-07-02T00:17:39.979458370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:17:39.979787 containerd[1452]: time="2024-07-02T00:17:39.979480491Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:17:39.979787 containerd[1452]: time="2024-07-02T00:17:39.979496050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:17:39.988390 containerd[1452]: time="2024-07-02T00:17:39.987632590Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:17:39.988390 containerd[1452]: time="2024-07-02T00:17:39.987683825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:17:39.988390 containerd[1452]: time="2024-07-02T00:17:39.987716727Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:17:39.988390 containerd[1452]: time="2024-07-02T00:17:39.987731655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:17:40.008661 systemd[1]: Started cri-containerd-5009eed0e1cbaef35e8de01f15bd6e2255b5045c80bcf561914973082ff121fa.scope - libcontainer container 5009eed0e1cbaef35e8de01f15bd6e2255b5045c80bcf561914973082ff121fa.
Jul 2 00:17:40.013588 systemd[1]: Started cri-containerd-510ab6ae6367cb4457589b3838502d66b05bd3c8c8ff19192adca7dbc75f937b.scope - libcontainer container 510ab6ae6367cb4457589b3838502d66b05bd3c8c8ff19192adca7dbc75f937b.
Jul 2 00:17:40.041877 containerd[1452]: time="2024-07-02T00:17:40.041830472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-49f5t,Uid:f19a5d25-9c90-487d-9854-f30b6f3abd0f,Namespace:kube-system,Attempt:0,} returns sandbox id \"5009eed0e1cbaef35e8de01f15bd6e2255b5045c80bcf561914973082ff121fa\""
Jul 2 00:17:40.042734 kubelet[2592]: E0702 00:17:40.042642 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:17:40.047862 containerd[1452]: time="2024-07-02T00:17:40.046256272Z" level=info msg="CreateContainer within sandbox \"5009eed0e1cbaef35e8de01f15bd6e2255b5045c80bcf561914973082ff121fa\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 2 00:17:40.050496 containerd[1452]: time="2024-07-02T00:17:40.050432715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rh8zf,Uid:f49b30bb-29a7-49fa-a312-9f3044e3341a,Namespace:kube-system,Attempt:0,} returns sandbox id \"510ab6ae6367cb4457589b3838502d66b05bd3c8c8ff19192adca7dbc75f937b\""
Jul 2 00:17:40.055633 kubelet[2592]: E0702 00:17:40.055587 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:17:40.058067 containerd[1452]: time="2024-07-02T00:17:40.057916406Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jul 2 00:17:40.080978 containerd[1452]: time="2024-07-02T00:17:40.080892121Z" level=info msg="CreateContainer within sandbox \"5009eed0e1cbaef35e8de01f15bd6e2255b5045c80bcf561914973082ff121fa\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"96a6fa6390a43fe0aef9f663d1eccf6a878fb39275430df88b3f8e2d58f40b0e\""
Jul 2 00:17:40.081858 containerd[1452]: time="2024-07-02T00:17:40.081832249Z" level=info msg="StartContainer for \"96a6fa6390a43fe0aef9f663d1eccf6a878fb39275430df88b3f8e2d58f40b0e\""
Jul 2 00:17:40.118507 systemd[1]: Started cri-containerd-96a6fa6390a43fe0aef9f663d1eccf6a878fb39275430df88b3f8e2d58f40b0e.scope - libcontainer container 96a6fa6390a43fe0aef9f663d1eccf6a878fb39275430df88b3f8e2d58f40b0e.
Jul 2 00:17:40.150858 containerd[1452]: time="2024-07-02T00:17:40.150800399Z" level=info msg="StartContainer for \"96a6fa6390a43fe0aef9f663d1eccf6a878fb39275430df88b3f8e2d58f40b0e\" returns successfully"
Jul 2 00:17:40.183916 kubelet[2592]: I0702 00:17:40.183069 2592 topology_manager.go:215] "Topology Admit Handler" podUID="f09bc6a0-f822-4dc8-8a26-ece77b49ccd3" podNamespace="kube-system" podName="cilium-operator-5cc964979-dr7zk"
Jul 2 00:17:40.195588 systemd[1]: Created slice kubepods-besteffort-podf09bc6a0_f822_4dc8_8a26_ece77b49ccd3.slice - libcontainer container kubepods-besteffort-podf09bc6a0_f822_4dc8_8a26_ece77b49ccd3.slice.
Jul 2 00:17:40.280719 kubelet[2592]: E0702 00:17:40.280688 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:17:40.305271 kubelet[2592]: I0702 00:17:40.305217 2592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z66sx\" (UniqueName: \"kubernetes.io/projected/f09bc6a0-f822-4dc8-8a26-ece77b49ccd3-kube-api-access-z66sx\") pod \"cilium-operator-5cc964979-dr7zk\" (UID: \"f09bc6a0-f822-4dc8-8a26-ece77b49ccd3\") " pod="kube-system/cilium-operator-5cc964979-dr7zk"
Jul 2 00:17:40.305271 kubelet[2592]: I0702 00:17:40.305271 2592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f09bc6a0-f822-4dc8-8a26-ece77b49ccd3-cilium-config-path\") pod \"cilium-operator-5cc964979-dr7zk\" (UID: \"f09bc6a0-f822-4dc8-8a26-ece77b49ccd3\") " pod="kube-system/cilium-operator-5cc964979-dr7zk"
Jul 2 00:17:40.500931 kubelet[2592]: E0702 00:17:40.500791 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:17:40.501412 containerd[1452]: time="2024-07-02T00:17:40.501345409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-dr7zk,Uid:f09bc6a0-f822-4dc8-8a26-ece77b49ccd3,Namespace:kube-system,Attempt:0,}"
Jul 2 00:17:40.530183 containerd[1452]: time="2024-07-02T00:17:40.529691921Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:17:40.530183 containerd[1452]: time="2024-07-02T00:17:40.530143517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:17:40.530183 containerd[1452]: time="2024-07-02T00:17:40.530164256Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:17:40.530183 containerd[1452]: time="2024-07-02T00:17:40.530175988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:17:40.553574 systemd[1]: Started cri-containerd-3a03b7dab2471d6d875c5cd179d763f200568bda3357126b703302fbeedd7443.scope - libcontainer container 3a03b7dab2471d6d875c5cd179d763f200568bda3357126b703302fbeedd7443.
Jul 2 00:17:40.594382 containerd[1452]: time="2024-07-02T00:17:40.594289304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-dr7zk,Uid:f09bc6a0-f822-4dc8-8a26-ece77b49ccd3,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a03b7dab2471d6d875c5cd179d763f200568bda3357126b703302fbeedd7443\""
Jul 2 00:17:40.595256 kubelet[2592]: E0702 00:17:40.595229 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:17:48.510098 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount765397147.mount: Deactivated successfully.
Jul 2 00:17:54.926128 containerd[1452]: time="2024-07-02T00:17:54.926031122Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:17:55.014869 containerd[1452]: time="2024-07-02T00:17:55.014777029Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735351"
Jul 2 00:17:55.099890 containerd[1452]: time="2024-07-02T00:17:55.099844730Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:17:55.101529 containerd[1452]: time="2024-07-02T00:17:55.101500814Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 15.043525136s"
Jul 2 00:17:55.101529 containerd[1452]: time="2024-07-02T00:17:55.101530791Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jul 2 00:17:55.102203 containerd[1452]: time="2024-07-02T00:17:55.101953358Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jul 2 00:17:55.102886 containerd[1452]: time="2024-07-02T00:17:55.102858658Z" level=info msg="CreateContainer within sandbox \"510ab6ae6367cb4457589b3838502d66b05bd3c8c8ff19192adca7dbc75f937b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 2 00:17:55.162166 systemd[1]: Started sshd@9-10.0.0.74:22-10.0.0.1:59508.service - OpenSSH per-connection server daemon (10.0.0.1:59508).
Jul 2 00:17:55.462159 sshd[2984]: Accepted publickey for core from 10.0.0.1 port 59508 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:17:55.463757 sshd[2984]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:17:55.481525 systemd-logind[1433]: New session 10 of user core.
Jul 2 00:17:55.490535 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 2 00:17:55.606284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3858027017.mount: Deactivated successfully.
Jul 2 00:17:55.694405 sshd[2984]: pam_unix(sshd:session): session closed for user core
Jul 2 00:17:55.698971 systemd[1]: sshd@9-10.0.0.74:22-10.0.0.1:59508.service: Deactivated successfully.
Jul 2 00:17:55.700798 systemd[1]: session-10.scope: Deactivated successfully.
Jul 2 00:17:55.701394 systemd-logind[1433]: Session 10 logged out. Waiting for processes to exit.
Jul 2 00:17:55.702396 systemd-logind[1433]: Removed session 10.
Jul 2 00:17:55.780626 containerd[1452]: time="2024-07-02T00:17:55.780561368Z" level=info msg="CreateContainer within sandbox \"510ab6ae6367cb4457589b3838502d66b05bd3c8c8ff19192adca7dbc75f937b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3e88f7e73086981cbe177ca26af6da17b511a721c36c34c6d04eec4dea59346c\""
Jul 2 00:17:55.781521 containerd[1452]: time="2024-07-02T00:17:55.781465295Z" level=info msg="StartContainer for \"3e88f7e73086981cbe177ca26af6da17b511a721c36c34c6d04eec4dea59346c\""
Jul 2 00:17:55.820483 systemd[1]: Started cri-containerd-3e88f7e73086981cbe177ca26af6da17b511a721c36c34c6d04eec4dea59346c.scope - libcontainer container 3e88f7e73086981cbe177ca26af6da17b511a721c36c34c6d04eec4dea59346c.
Jul 2 00:17:55.926259 systemd[1]: cri-containerd-3e88f7e73086981cbe177ca26af6da17b511a721c36c34c6d04eec4dea59346c.scope: Deactivated successfully.
Jul 2 00:17:55.951830 containerd[1452]: time="2024-07-02T00:17:55.951739684Z" level=info msg="StartContainer for \"3e88f7e73086981cbe177ca26af6da17b511a721c36c34c6d04eec4dea59346c\" returns successfully"
Jul 2 00:17:56.310864 kubelet[2592]: E0702 00:17:56.310834 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:17:56.603286 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e88f7e73086981cbe177ca26af6da17b511a721c36c34c6d04eec4dea59346c-rootfs.mount: Deactivated successfully.
Jul 2 00:17:56.620488 kubelet[2592]: I0702 00:17:56.620445 2592 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-49f5t" podStartSLOduration=17.619049428 podStartE2EDuration="17.619049428s" podCreationTimestamp="2024-07-02 00:17:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:17:40.288729916 +0000 UTC m=+15.144452468" watchObservedRunningTime="2024-07-02 00:17:56.619049428 +0000 UTC m=+31.474771970"
Jul 2 00:17:57.312627 kubelet[2592]: E0702 00:17:57.312583 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:17:57.495176 containerd[1452]: time="2024-07-02T00:17:57.495082581Z" level=info msg="shim disconnected" id=3e88f7e73086981cbe177ca26af6da17b511a721c36c34c6d04eec4dea59346c namespace=k8s.io
Jul 2 00:17:57.495176 containerd[1452]: time="2024-07-02T00:17:57.495169987Z" level=warning msg="cleaning up after shim disconnected" id=3e88f7e73086981cbe177ca26af6da17b511a721c36c34c6d04eec4dea59346c namespace=k8s.io
Jul 2 00:17:57.495176 containerd[1452]: time="2024-07-02T00:17:57.495197199Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:17:58.303202 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1559776095.mount: Deactivated successfully.
Jul 2 00:17:58.315953 kubelet[2592]: E0702 00:17:58.315668 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:17:58.319037 containerd[1452]: time="2024-07-02T00:17:58.318996577Z" level=info msg="CreateContainer within sandbox \"510ab6ae6367cb4457589b3838502d66b05bd3c8c8ff19192adca7dbc75f937b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 2 00:17:58.336859 containerd[1452]: time="2024-07-02T00:17:58.336798653Z" level=info msg="CreateContainer within sandbox \"510ab6ae6367cb4457589b3838502d66b05bd3c8c8ff19192adca7dbc75f937b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"04c384b75cda6454783802fb6c3bc4e5366762e4e503f7f26d2c4511ba5458ad\""
Jul 2 00:17:58.337506 containerd[1452]: time="2024-07-02T00:17:58.337483990Z" level=info msg="StartContainer for \"04c384b75cda6454783802fb6c3bc4e5366762e4e503f7f26d2c4511ba5458ad\""
Jul 2 00:17:58.376524 systemd[1]: Started cri-containerd-04c384b75cda6454783802fb6c3bc4e5366762e4e503f7f26d2c4511ba5458ad.scope - libcontainer container 04c384b75cda6454783802fb6c3bc4e5366762e4e503f7f26d2c4511ba5458ad.
Jul 2 00:17:58.408702 containerd[1452]: time="2024-07-02T00:17:58.408607653Z" level=info msg="StartContainer for \"04c384b75cda6454783802fb6c3bc4e5366762e4e503f7f26d2c4511ba5458ad\" returns successfully"
Jul 2 00:17:58.420408 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 00:17:58.420642 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:17:58.420711 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jul 2 00:17:58.428070 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 00:17:58.428405 systemd[1]: cri-containerd-04c384b75cda6454783802fb6c3bc4e5366762e4e503f7f26d2c4511ba5458ad.scope: Deactivated successfully.
Jul 2 00:17:58.496398 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:17:58.763872 containerd[1452]: time="2024-07-02T00:17:58.763141127Z" level=info msg="shim disconnected" id=04c384b75cda6454783802fb6c3bc4e5366762e4e503f7f26d2c4511ba5458ad namespace=k8s.io
Jul 2 00:17:58.763872 containerd[1452]: time="2024-07-02T00:17:58.763204538Z" level=warning msg="cleaning up after shim disconnected" id=04c384b75cda6454783802fb6c3bc4e5366762e4e503f7f26d2c4511ba5458ad namespace=k8s.io
Jul 2 00:17:58.768820 containerd[1452]: time="2024-07-02T00:17:58.763215709Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:17:58.884377 containerd[1452]: time="2024-07-02T00:17:58.883769216Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:17:58.888617 containerd[1452]: time="2024-07-02T00:17:58.888518044Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907245"
Jul 2 00:17:58.890531 containerd[1452]: time="2024-07-02T00:17:58.890415714Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:17:58.891902 containerd[1452]: time="2024-07-02T00:17:58.891705704Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.78971787s"
Jul 2 00:17:58.891902 containerd[1452]: time="2024-07-02T00:17:58.891757282Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jul 2 00:17:58.894558 containerd[1452]: time="2024-07-02T00:17:58.894486878Z" level=info msg="CreateContainer within sandbox \"3a03b7dab2471d6d875c5cd179d763f200568bda3357126b703302fbeedd7443\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jul 2 00:17:58.921118 containerd[1452]: time="2024-07-02T00:17:58.921043894Z" level=info msg="CreateContainer within sandbox \"3a03b7dab2471d6d875c5cd179d763f200568bda3357126b703302fbeedd7443\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e6c0d81bd5773321cf1d4a8c62065081c2e26ec0d821b1af0d83a47971a8e400\""
Jul 2 00:17:58.922390 containerd[1452]: time="2024-07-02T00:17:58.922341479Z" level=info msg="StartContainer for \"e6c0d81bd5773321cf1d4a8c62065081c2e26ec0d821b1af0d83a47971a8e400\""
Jul 2 00:17:58.958604 systemd[1]: Started cri-containerd-e6c0d81bd5773321cf1d4a8c62065081c2e26ec0d821b1af0d83a47971a8e400.scope - libcontainer container e6c0d81bd5773321cf1d4a8c62065081c2e26ec0d821b1af0d83a47971a8e400.
Jul 2 00:17:59.057390 containerd[1452]: time="2024-07-02T00:17:59.057168345Z" level=info msg="StartContainer for \"e6c0d81bd5773321cf1d4a8c62065081c2e26ec0d821b1af0d83a47971a8e400\" returns successfully"
Jul 2 00:17:59.300897 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04c384b75cda6454783802fb6c3bc4e5366762e4e503f7f26d2c4511ba5458ad-rootfs.mount: Deactivated successfully.
Jul 2 00:17:59.319259 kubelet[2592]: E0702 00:17:59.319125 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:17:59.324273 containerd[1452]: time="2024-07-02T00:17:59.324210808Z" level=info msg="CreateContainer within sandbox \"510ab6ae6367cb4457589b3838502d66b05bd3c8c8ff19192adca7dbc75f937b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 2 00:17:59.327863 kubelet[2592]: E0702 00:17:59.327828 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:18:00.251097 containerd[1452]: time="2024-07-02T00:18:00.251028535Z" level=info msg="CreateContainer within sandbox \"510ab6ae6367cb4457589b3838502d66b05bd3c8c8ff19192adca7dbc75f937b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"deacb04925f1058298c5612fc58754d00d915727dbe23e725cfb22f008ca729c\""
Jul 2 00:18:00.251875 containerd[1452]: time="2024-07-02T00:18:00.251760821Z" level=info msg="StartContainer for \"deacb04925f1058298c5612fc58754d00d915727dbe23e725cfb22f008ca729c\""
Jul 2 00:18:00.292483 systemd[1]: Started cri-containerd-deacb04925f1058298c5612fc58754d00d915727dbe23e725cfb22f008ca729c.scope - libcontainer container deacb04925f1058298c5612fc58754d00d915727dbe23e725cfb22f008ca729c.
Jul 2 00:18:00.324630 systemd[1]: cri-containerd-deacb04925f1058298c5612fc58754d00d915727dbe23e725cfb22f008ca729c.scope: Deactivated successfully.
Jul 2 00:18:00.346465 containerd[1452]: time="2024-07-02T00:18:00.346400296Z" level=info msg="StartContainer for \"deacb04925f1058298c5612fc58754d00d915727dbe23e725cfb22f008ca729c\" returns successfully"
Jul 2 00:18:00.351210 kubelet[2592]: E0702 00:18:00.351168 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:18:00.367994 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-deacb04925f1058298c5612fc58754d00d915727dbe23e725cfb22f008ca729c-rootfs.mount: Deactivated successfully.
Jul 2 00:18:00.418579 containerd[1452]: time="2024-07-02T00:18:00.418485046Z" level=info msg="shim disconnected" id=deacb04925f1058298c5612fc58754d00d915727dbe23e725cfb22f008ca729c namespace=k8s.io
Jul 2 00:18:00.418579 containerd[1452]: time="2024-07-02T00:18:00.418569637Z" level=warning msg="cleaning up after shim disconnected" id=deacb04925f1058298c5612fc58754d00d915727dbe23e725cfb22f008ca729c namespace=k8s.io
Jul 2 00:18:00.418579 containerd[1452]: time="2024-07-02T00:18:00.418583314Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:18:00.707701 systemd[1]: Started sshd@10-10.0.0.74:22-10.0.0.1:54842.service - OpenSSH per-connection server daemon (10.0.0.1:54842).
Jul 2 00:18:00.758942 sshd[3236]: Accepted publickey for core from 10.0.0.1 port 54842 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:18:00.760847 sshd[3236]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:18:00.765859 systemd-logind[1433]: New session 11 of user core.
Jul 2 00:18:00.777550 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 2 00:18:00.976406 sshd[3236]: pam_unix(sshd:session): session closed for user core
Jul 2 00:18:00.980315 systemd[1]: sshd@10-10.0.0.74:22-10.0.0.1:54842.service: Deactivated successfully.
Jul 2 00:18:00.982256 systemd[1]: session-11.scope: Deactivated successfully.
Jul 2 00:18:00.983124 systemd-logind[1433]: Session 11 logged out. Waiting for processes to exit.
Jul 2 00:18:00.984019 systemd-logind[1433]: Removed session 11.
Jul 2 00:18:01.354792 kubelet[2592]: E0702 00:18:01.354755 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:18:01.357644 containerd[1452]: time="2024-07-02T00:18:01.357596845Z" level=info msg="CreateContainer within sandbox \"510ab6ae6367cb4457589b3838502d66b05bd3c8c8ff19192adca7dbc75f937b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 2 00:18:01.374854 kubelet[2592]: I0702 00:18:01.374809 2592 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-dr7zk" podStartSLOduration=3.079895374 podStartE2EDuration="21.374763439s" podCreationTimestamp="2024-07-02 00:17:40 +0000 UTC" firstStartedPulling="2024-07-02 00:17:40.597486155 +0000 UTC m=+15.453208697" lastFinishedPulling="2024-07-02 00:17:58.8923542 +0000 UTC m=+33.748076762" observedRunningTime="2024-07-02 00:17:59.800816206 +0000 UTC m=+34.656538749" watchObservedRunningTime="2024-07-02 00:18:01.374763439 +0000 UTC m=+36.230485991"
Jul 2 00:18:01.995938 containerd[1452]: time="2024-07-02T00:18:01.995859317Z" level=info msg="CreateContainer within sandbox \"510ab6ae6367cb4457589b3838502d66b05bd3c8c8ff19192adca7dbc75f937b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a4f2278161e21ad0bba83fb6a9746bf403d05c61488f6417f3df2d0e591c92d8\""
Jul 2 00:18:01.996461 containerd[1452]: time="2024-07-02T00:18:01.996437229Z" level=info msg="StartContainer for \"a4f2278161e21ad0bba83fb6a9746bf403d05c61488f6417f3df2d0e591c92d8\""
Jul 2 00:18:02.031191 systemd[1]: run-containerd-runc-k8s.io-a4f2278161e21ad0bba83fb6a9746bf403d05c61488f6417f3df2d0e591c92d8-runc.euDAw1.mount: Deactivated successfully.
Jul 2 00:18:02.046453 systemd[1]: Started cri-containerd-a4f2278161e21ad0bba83fb6a9746bf403d05c61488f6417f3df2d0e591c92d8.scope - libcontainer container a4f2278161e21ad0bba83fb6a9746bf403d05c61488f6417f3df2d0e591c92d8.
Jul 2 00:18:02.079176 systemd[1]: cri-containerd-a4f2278161e21ad0bba83fb6a9746bf403d05c61488f6417f3df2d0e591c92d8.scope: Deactivated successfully.
Jul 2 00:18:02.081241 containerd[1452]: time="2024-07-02T00:18:02.081192651Z" level=info msg="StartContainer for \"a4f2278161e21ad0bba83fb6a9746bf403d05c61488f6417f3df2d0e591c92d8\" returns successfully"
Jul 2 00:18:02.359347 kubelet[2592]: E0702 00:18:02.358817 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:18:02.387376 containerd[1452]: time="2024-07-02T00:18:02.387274996Z" level=info msg="shim disconnected" id=a4f2278161e21ad0bba83fb6a9746bf403d05c61488f6417f3df2d0e591c92d8 namespace=k8s.io
Jul 2 00:18:02.387376 containerd[1452]: time="2024-07-02T00:18:02.387371971Z" level=warning msg="cleaning up after shim disconnected" id=a4f2278161e21ad0bba83fb6a9746bf403d05c61488f6417f3df2d0e591c92d8 namespace=k8s.io
Jul 2 00:18:02.387898 containerd[1452]: time="2024-07-02T00:18:02.387386640Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:18:02.881054 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a4f2278161e21ad0bba83fb6a9746bf403d05c61488f6417f3df2d0e591c92d8-rootfs.mount: Deactivated successfully.
Jul 2 00:18:03.362426 kubelet[2592]: E0702 00:18:03.362398 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:18:03.364155 containerd[1452]: time="2024-07-02T00:18:03.364109179Z" level=info msg="CreateContainer within sandbox \"510ab6ae6367cb4457589b3838502d66b05bd3c8c8ff19192adca7dbc75f937b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 00:18:03.600393 containerd[1452]: time="2024-07-02T00:18:03.600340095Z" level=info msg="CreateContainer within sandbox \"510ab6ae6367cb4457589b3838502d66b05bd3c8c8ff19192adca7dbc75f937b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0d974057915de228d91f6d49fe9d2e1e0a15080e5fb25e72d0b2e10e9797a0ed\"" Jul 2 00:18:03.600997 containerd[1452]: time="2024-07-02T00:18:03.600964273Z" level=info msg="StartContainer for \"0d974057915de228d91f6d49fe9d2e1e0a15080e5fb25e72d0b2e10e9797a0ed\"" Jul 2 00:18:03.632470 systemd[1]: Started cri-containerd-0d974057915de228d91f6d49fe9d2e1e0a15080e5fb25e72d0b2e10e9797a0ed.scope - libcontainer container 0d974057915de228d91f6d49fe9d2e1e0a15080e5fb25e72d0b2e10e9797a0ed. 
Jul 2 00:18:03.672390 containerd[1452]: time="2024-07-02T00:18:03.672286270Z" level=info msg="StartContainer for \"0d974057915de228d91f6d49fe9d2e1e0a15080e5fb25e72d0b2e10e9797a0ed\" returns successfully" Jul 2 00:18:03.834834 kubelet[2592]: I0702 00:18:03.833350 2592 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jul 2 00:18:04.048021 kubelet[2592]: I0702 00:18:04.047955 2592 topology_manager.go:215] "Topology Admit Handler" podUID="c818689f-1fab-4aa0-8f98-c18c2de25d5e" podNamespace="kube-system" podName="coredns-76f75df574-t472v" Jul 2 00:18:04.048243 kubelet[2592]: I0702 00:18:04.048122 2592 topology_manager.go:215] "Topology Admit Handler" podUID="32fe4740-fbcd-4f4d-909f-fa6da59ca3e6" podNamespace="kube-system" podName="coredns-76f75df574-7h2x5" Jul 2 00:18:04.065273 systemd[1]: Created slice kubepods-burstable-pod32fe4740_fbcd_4f4d_909f_fa6da59ca3e6.slice - libcontainer container kubepods-burstable-pod32fe4740_fbcd_4f4d_909f_fa6da59ca3e6.slice. Jul 2 00:18:04.073603 systemd[1]: Created slice kubepods-burstable-podc818689f_1fab_4aa0_8f98_c18c2de25d5e.slice - libcontainer container kubepods-burstable-podc818689f_1fab_4aa0_8f98_c18c2de25d5e.slice. 
Jul 2 00:18:04.104389 kubelet[2592]: I0702 00:18:04.104337 2592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/32fe4740-fbcd-4f4d-909f-fa6da59ca3e6-config-volume\") pod \"coredns-76f75df574-7h2x5\" (UID: \"32fe4740-fbcd-4f4d-909f-fa6da59ca3e6\") " pod="kube-system/coredns-76f75df574-7h2x5" Jul 2 00:18:04.104389 kubelet[2592]: I0702 00:18:04.104403 2592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtmdq\" (UniqueName: \"kubernetes.io/projected/32fe4740-fbcd-4f4d-909f-fa6da59ca3e6-kube-api-access-jtmdq\") pod \"coredns-76f75df574-7h2x5\" (UID: \"32fe4740-fbcd-4f4d-909f-fa6da59ca3e6\") " pod="kube-system/coredns-76f75df574-7h2x5" Jul 2 00:18:04.104616 kubelet[2592]: I0702 00:18:04.104433 2592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8fgp\" (UniqueName: \"kubernetes.io/projected/c818689f-1fab-4aa0-8f98-c18c2de25d5e-kube-api-access-q8fgp\") pod \"coredns-76f75df574-t472v\" (UID: \"c818689f-1fab-4aa0-8f98-c18c2de25d5e\") " pod="kube-system/coredns-76f75df574-t472v" Jul 2 00:18:04.104727 kubelet[2592]: I0702 00:18:04.104677 2592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c818689f-1fab-4aa0-8f98-c18c2de25d5e-config-volume\") pod \"coredns-76f75df574-t472v\" (UID: \"c818689f-1fab-4aa0-8f98-c18c2de25d5e\") " pod="kube-system/coredns-76f75df574-t472v" Jul 2 00:18:04.366864 kubelet[2592]: E0702 00:18:04.366719 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:18:04.369680 kubelet[2592]: E0702 00:18:04.369482 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:18:04.373121 containerd[1452]: time="2024-07-02T00:18:04.373030611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7h2x5,Uid:32fe4740-fbcd-4f4d-909f-fa6da59ca3e6,Namespace:kube-system,Attempt:0,}" Jul 2 00:18:04.376869 kubelet[2592]: E0702 00:18:04.376826 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:18:04.377489 containerd[1452]: time="2024-07-02T00:18:04.377436765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-t472v,Uid:c818689f-1fab-4aa0-8f98-c18c2de25d5e,Namespace:kube-system,Attempt:0,}" Jul 2 00:18:04.412363 kubelet[2592]: I0702 00:18:04.412269 2592 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-rh8zf" podStartSLOduration=10.367495732 podStartE2EDuration="25.412210115s" podCreationTimestamp="2024-07-02 00:17:39 +0000 UTC" firstStartedPulling="2024-07-02 00:17:40.057071876 +0000 UTC m=+14.912794418" lastFinishedPulling="2024-07-02 00:17:55.101786259 +0000 UTC m=+29.957508801" observedRunningTime="2024-07-02 00:18:04.402813384 +0000 UTC m=+39.258535936" watchObservedRunningTime="2024-07-02 00:18:04.412210115 +0000 UTC m=+39.267932677" Jul 2 00:18:05.371622 kubelet[2592]: E0702 00:18:05.371572 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:18:05.919481 systemd-networkd[1388]: cilium_host: Link UP Jul 2 00:18:05.921518 systemd-networkd[1388]: cilium_net: Link UP Jul 2 00:18:05.921834 systemd-networkd[1388]: cilium_net: Gained carrier Jul 2 00:18:05.922023 systemd-networkd[1388]: cilium_host: Gained carrier Jul 2 00:18:05.922185 systemd-networkd[1388]: cilium_net: Gained IPv6LL Jul 2 
00:18:05.922410 systemd-networkd[1388]: cilium_host: Gained IPv6LL Jul 2 00:18:05.989508 systemd[1]: Started sshd@11-10.0.0.74:22-10.0.0.1:54854.service - OpenSSH per-connection server daemon (10.0.0.1:54854). Jul 2 00:18:06.043438 sshd[3503]: Accepted publickey for core from 10.0.0.1 port 54854 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:18:06.045801 sshd[3503]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:18:06.047657 systemd-networkd[1388]: cilium_vxlan: Link UP Jul 2 00:18:06.047674 systemd-networkd[1388]: cilium_vxlan: Gained carrier Jul 2 00:18:06.054830 systemd-logind[1433]: New session 12 of user core. Jul 2 00:18:06.064592 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 2 00:18:06.228958 sshd[3503]: pam_unix(sshd:session): session closed for user core Jul 2 00:18:06.233557 systemd[1]: sshd@11-10.0.0.74:22-10.0.0.1:54854.service: Deactivated successfully. Jul 2 00:18:06.235359 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 00:18:06.236079 systemd-logind[1433]: Session 12 logged out. Waiting for processes to exit. Jul 2 00:18:06.237110 systemd-logind[1433]: Removed session 12. 
Jul 2 00:18:06.302325 kernel: NET: Registered PF_ALG protocol family Jul 2 00:18:06.372749 kubelet[2592]: E0702 00:18:06.372721 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:18:07.046428 systemd-networkd[1388]: lxc_health: Link UP Jul 2 00:18:07.059272 systemd-networkd[1388]: lxc_health: Gained carrier Jul 2 00:18:07.496997 systemd-networkd[1388]: lxc8e6169c7f957: Link UP Jul 2 00:18:07.504332 kernel: eth0: renamed from tmp40d43 Jul 2 00:18:07.511398 systemd-networkd[1388]: lxc843921a48265: Link UP Jul 2 00:18:07.523347 kernel: eth0: renamed from tmp66848 Jul 2 00:18:07.527523 systemd-networkd[1388]: lxc8e6169c7f957: Gained carrier Jul 2 00:18:07.536566 systemd-networkd[1388]: lxc843921a48265: Gained carrier Jul 2 00:18:07.856315 systemd-networkd[1388]: cilium_vxlan: Gained IPv6LL Jul 2 00:18:07.953361 kubelet[2592]: E0702 00:18:07.953327 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:18:08.377134 kubelet[2592]: E0702 00:18:08.377096 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:18:08.559530 systemd-networkd[1388]: lxc_health: Gained IPv6LL Jul 2 00:18:09.071421 systemd-networkd[1388]: lxc8e6169c7f957: Gained IPv6LL Jul 2 00:18:09.327479 systemd-networkd[1388]: lxc843921a48265: Gained IPv6LL Jul 2 00:18:09.378625 kubelet[2592]: E0702 00:18:09.378597 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:18:11.243264 systemd[1]: Started sshd@12-10.0.0.74:22-10.0.0.1:47002.service - OpenSSH per-connection server daemon 
(10.0.0.1:47002). Jul 2 00:18:11.281594 sshd[3858]: Accepted publickey for core from 10.0.0.1 port 47002 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:18:11.282975 sshd[3858]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:18:11.286898 systemd-logind[1433]: New session 13 of user core. Jul 2 00:18:11.296647 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 2 00:18:11.361972 containerd[1452]: time="2024-07-02T00:18:11.361828539Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:18:11.363118 containerd[1452]: time="2024-07-02T00:18:11.361979757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:18:11.363118 containerd[1452]: time="2024-07-02T00:18:11.362060229Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:18:11.363118 containerd[1452]: time="2024-07-02T00:18:11.362774896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:18:11.391627 systemd[1]: Started cri-containerd-66848c58f6bdf9d829cecc62a7d31e3e3f124b7b46f7b56f74b40e2ee8adfa0c.scope - libcontainer container 66848c58f6bdf9d829cecc62a7d31e3e3f124b7b46f7b56f74b40e2ee8adfa0c. Jul 2 00:18:11.412157 systemd-resolved[1318]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 00:18:11.412767 containerd[1452]: time="2024-07-02T00:18:11.412496167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:18:11.412767 containerd[1452]: time="2024-07-02T00:18:11.412618168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:18:11.414337 containerd[1452]: time="2024-07-02T00:18:11.412646903Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:18:11.414337 containerd[1452]: time="2024-07-02T00:18:11.412686739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:18:11.453557 systemd[1]: Started cri-containerd-40d435659c88cd0a8dcc9244a71b7580edec5f765728d9402a1ceee199f72497.scope - libcontainer container 40d435659c88cd0a8dcc9244a71b7580edec5f765728d9402a1ceee199f72497. Jul 2 00:18:11.455361 containerd[1452]: time="2024-07-02T00:18:11.455223800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7h2x5,Uid:32fe4740-fbcd-4f4d-909f-fa6da59ca3e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"66848c58f6bdf9d829cecc62a7d31e3e3f124b7b46f7b56f74b40e2ee8adfa0c\"" Jul 2 00:18:11.459438 kubelet[2592]: E0702 00:18:11.456523 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:18:11.465713 sshd[3858]: pam_unix(sshd:session): session closed for user core Jul 2 00:18:11.467907 containerd[1452]: time="2024-07-02T00:18:11.467518742Z" level=info msg="CreateContainer within sandbox \"66848c58f6bdf9d829cecc62a7d31e3e3f124b7b46f7b56f74b40e2ee8adfa0c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 00:18:11.470800 systemd-logind[1433]: Session 13 logged out. Waiting for processes to exit. Jul 2 00:18:11.471205 systemd[1]: sshd@12-10.0.0.74:22-10.0.0.1:47002.service: Deactivated successfully. Jul 2 00:18:11.474117 systemd[1]: session-13.scope: Deactivated successfully. 
Jul 2 00:18:11.480514 systemd-resolved[1318]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 00:18:11.480913 systemd-logind[1433]: Removed session 13. Jul 2 00:18:11.509804 containerd[1452]: time="2024-07-02T00:18:11.509633692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-t472v,Uid:c818689f-1fab-4aa0-8f98-c18c2de25d5e,Namespace:kube-system,Attempt:0,} returns sandbox id \"40d435659c88cd0a8dcc9244a71b7580edec5f765728d9402a1ceee199f72497\"" Jul 2 00:18:11.510674 kubelet[2592]: E0702 00:18:11.510641 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:18:11.512887 containerd[1452]: time="2024-07-02T00:18:11.512806515Z" level=info msg="CreateContainer within sandbox \"40d435659c88cd0a8dcc9244a71b7580edec5f765728d9402a1ceee199f72497\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 00:18:12.368026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2905438294.mount: Deactivated successfully. Jul 2 00:18:12.479005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2871387449.mount: Deactivated successfully. Jul 2 00:18:12.591699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2616315046.mount: Deactivated successfully. 
Jul 2 00:18:13.061534 containerd[1452]: time="2024-07-02T00:18:13.061351552Z" level=info msg="CreateContainer within sandbox \"66848c58f6bdf9d829cecc62a7d31e3e3f124b7b46f7b56f74b40e2ee8adfa0c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b39c001f2128117a42a631887308a04917c80ebbf0b0d22ee02f40bf1bcedaa1\"" Jul 2 00:18:13.062851 containerd[1452]: time="2024-07-02T00:18:13.062036752Z" level=info msg="StartContainer for \"b39c001f2128117a42a631887308a04917c80ebbf0b0d22ee02f40bf1bcedaa1\"" Jul 2 00:18:13.091416 systemd[1]: Started cri-containerd-b39c001f2128117a42a631887308a04917c80ebbf0b0d22ee02f40bf1bcedaa1.scope - libcontainer container b39c001f2128117a42a631887308a04917c80ebbf0b0d22ee02f40bf1bcedaa1. Jul 2 00:18:13.600093 containerd[1452]: time="2024-07-02T00:18:13.599953868Z" level=info msg="CreateContainer within sandbox \"40d435659c88cd0a8dcc9244a71b7580edec5f765728d9402a1ceee199f72497\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cdab37ea7e5ca39f1602585c66fcee2ac2f1cc560ff939b6e2f2bb181398cf68\"" Jul 2 00:18:13.600093 containerd[1452]: time="2024-07-02T00:18:13.600019862Z" level=info msg="StartContainer for \"b39c001f2128117a42a631887308a04917c80ebbf0b0d22ee02f40bf1bcedaa1\" returns successfully" Jul 2 00:18:13.604659 containerd[1452]: time="2024-07-02T00:18:13.600854737Z" level=info msg="StartContainer for \"cdab37ea7e5ca39f1602585c66fcee2ac2f1cc560ff939b6e2f2bb181398cf68\"" Jul 2 00:18:13.642474 systemd[1]: Started cri-containerd-cdab37ea7e5ca39f1602585c66fcee2ac2f1cc560ff939b6e2f2bb181398cf68.scope - libcontainer container cdab37ea7e5ca39f1602585c66fcee2ac2f1cc560ff939b6e2f2bb181398cf68. 
Jul 2 00:18:13.750983 containerd[1452]: time="2024-07-02T00:18:13.750909320Z" level=info msg="StartContainer for \"cdab37ea7e5ca39f1602585c66fcee2ac2f1cc560ff939b6e2f2bb181398cf68\" returns successfully" Jul 2 00:18:14.607162 kubelet[2592]: E0702 00:18:14.607106 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:18:14.607162 kubelet[2592]: E0702 00:18:14.607130 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:18:14.674802 kubelet[2592]: I0702 00:18:14.674760 2592 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-7h2x5" podStartSLOduration=34.674722575 podStartE2EDuration="34.674722575s" podCreationTimestamp="2024-07-02 00:17:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:18:14.674555759 +0000 UTC m=+49.530278301" watchObservedRunningTime="2024-07-02 00:18:14.674722575 +0000 UTC m=+49.530445107" Jul 2 00:18:14.859035 kubelet[2592]: I0702 00:18:14.858744 2592 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-t472v" podStartSLOduration=34.858695066 podStartE2EDuration="34.858695066s" podCreationTimestamp="2024-07-02 00:17:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:18:14.858664568 +0000 UTC m=+49.714387130" watchObservedRunningTime="2024-07-02 00:18:14.858695066 +0000 UTC m=+49.714417608" Jul 2 00:18:15.611310 kubelet[2592]: E0702 00:18:15.608826 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:18:15.611310 kubelet[2592]: E0702 00:18:15.609167 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:18:16.478617 systemd[1]: Started sshd@13-10.0.0.74:22-10.0.0.1:47010.service - OpenSSH per-connection server daemon (10.0.0.1:47010). Jul 2 00:18:16.524619 sshd[4045]: Accepted publickey for core from 10.0.0.1 port 47010 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:18:16.526402 sshd[4045]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:18:16.530610 systemd-logind[1433]: New session 14 of user core. Jul 2 00:18:16.540461 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 2 00:18:16.611193 kubelet[2592]: E0702 00:18:16.611097 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:18:16.611193 kubelet[2592]: E0702 00:18:16.611188 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:18:16.699540 sshd[4045]: pam_unix(sshd:session): session closed for user core Jul 2 00:18:16.703807 systemd[1]: sshd@13-10.0.0.74:22-10.0.0.1:47010.service: Deactivated successfully. Jul 2 00:18:16.706448 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 00:18:16.707187 systemd-logind[1433]: Session 14 logged out. Waiting for processes to exit. Jul 2 00:18:16.708239 systemd-logind[1433]: Removed session 14. Jul 2 00:18:21.718823 systemd[1]: Started sshd@14-10.0.0.74:22-10.0.0.1:53864.service - OpenSSH per-connection server daemon (10.0.0.1:53864). 
Jul 2 00:18:21.757832 sshd[4062]: Accepted publickey for core from 10.0.0.1 port 53864 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:18:21.759837 sshd[4062]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:18:21.764454 systemd-logind[1433]: New session 15 of user core. Jul 2 00:18:21.774517 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 2 00:18:21.909215 sshd[4062]: pam_unix(sshd:session): session closed for user core Jul 2 00:18:21.924575 systemd[1]: sshd@14-10.0.0.74:22-10.0.0.1:53864.service: Deactivated successfully. Jul 2 00:18:21.926969 systemd[1]: session-15.scope: Deactivated successfully. Jul 2 00:18:21.929139 systemd-logind[1433]: Session 15 logged out. Waiting for processes to exit. Jul 2 00:18:21.935765 systemd[1]: Started sshd@15-10.0.0.74:22-10.0.0.1:53866.service - OpenSSH per-connection server daemon (10.0.0.1:53866). Jul 2 00:18:21.937088 systemd-logind[1433]: Removed session 15. Jul 2 00:18:21.977278 sshd[4077]: Accepted publickey for core from 10.0.0.1 port 53866 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:18:21.979136 sshd[4077]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:18:21.985390 systemd-logind[1433]: New session 16 of user core. Jul 2 00:18:21.998647 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 2 00:18:22.229568 sshd[4077]: pam_unix(sshd:session): session closed for user core Jul 2 00:18:22.241763 systemd[1]: sshd@15-10.0.0.74:22-10.0.0.1:53866.service: Deactivated successfully. Jul 2 00:18:22.243767 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 00:18:22.247206 systemd-logind[1433]: Session 16 logged out. Waiting for processes to exit. Jul 2 00:18:22.261557 systemd[1]: Started sshd@16-10.0.0.74:22-10.0.0.1:53872.service - OpenSSH per-connection server daemon (10.0.0.1:53872). Jul 2 00:18:22.262588 systemd-logind[1433]: Removed session 16. 
Jul 2 00:18:22.296209 sshd[4089]: Accepted publickey for core from 10.0.0.1 port 53872 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:18:22.297860 sshd[4089]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:18:22.302648 systemd-logind[1433]: New session 17 of user core. Jul 2 00:18:22.313472 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 2 00:18:22.433235 sshd[4089]: pam_unix(sshd:session): session closed for user core Jul 2 00:18:22.438452 systemd[1]: sshd@16-10.0.0.74:22-10.0.0.1:53872.service: Deactivated successfully. Jul 2 00:18:22.441081 systemd[1]: session-17.scope: Deactivated successfully. Jul 2 00:18:22.441983 systemd-logind[1433]: Session 17 logged out. Waiting for processes to exit. Jul 2 00:18:22.443166 systemd-logind[1433]: Removed session 17. Jul 2 00:18:27.495655 systemd[1]: Started sshd@17-10.0.0.74:22-10.0.0.1:53884.service - OpenSSH per-connection server daemon (10.0.0.1:53884). Jul 2 00:18:27.587242 sshd[4105]: Accepted publickey for core from 10.0.0.1 port 53884 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:18:27.589653 sshd[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:18:27.603413 systemd-logind[1433]: New session 18 of user core. Jul 2 00:18:27.624616 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 2 00:18:27.837448 sshd[4105]: pam_unix(sshd:session): session closed for user core Jul 2 00:18:27.843026 systemd[1]: sshd@17-10.0.0.74:22-10.0.0.1:53884.service: Deactivated successfully. Jul 2 00:18:27.845990 systemd[1]: session-18.scope: Deactivated successfully. Jul 2 00:18:27.847091 systemd-logind[1433]: Session 18 logged out. Waiting for processes to exit. Jul 2 00:18:27.848233 systemd-logind[1433]: Removed session 18. Jul 2 00:18:32.894396 systemd[1]: Started sshd@18-10.0.0.74:22-10.0.0.1:57478.service - OpenSSH per-connection server daemon (10.0.0.1:57478). 
Jul 2 00:18:33.023638 sshd[4119]: Accepted publickey for core from 10.0.0.1 port 57478 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:18:33.029964 sshd[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:18:33.044428 systemd-logind[1433]: New session 19 of user core. Jul 2 00:18:33.074341 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 2 00:18:33.385561 sshd[4119]: pam_unix(sshd:session): session closed for user core Jul 2 00:18:33.389406 systemd[1]: sshd@18-10.0.0.74:22-10.0.0.1:57478.service: Deactivated successfully. Jul 2 00:18:33.392861 systemd[1]: session-19.scope: Deactivated successfully. Jul 2 00:18:33.395568 systemd-logind[1433]: Session 19 logged out. Waiting for processes to exit. Jul 2 00:18:33.398100 systemd-logind[1433]: Removed session 19. Jul 2 00:18:38.397589 systemd[1]: Started sshd@19-10.0.0.74:22-10.0.0.1:53828.service - OpenSSH per-connection server daemon (10.0.0.1:53828). Jul 2 00:18:38.443276 sshd[4133]: Accepted publickey for core from 10.0.0.1 port 53828 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:18:38.445038 sshd[4133]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:18:38.449323 systemd-logind[1433]: New session 20 of user core. Jul 2 00:18:38.457614 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 2 00:18:38.577902 sshd[4133]: pam_unix(sshd:session): session closed for user core Jul 2 00:18:38.590502 systemd[1]: sshd@19-10.0.0.74:22-10.0.0.1:53828.service: Deactivated successfully. Jul 2 00:18:38.592699 systemd[1]: session-20.scope: Deactivated successfully. Jul 2 00:18:38.594767 systemd-logind[1433]: Session 20 logged out. Waiting for processes to exit. Jul 2 00:18:38.599643 systemd[1]: Started sshd@20-10.0.0.74:22-10.0.0.1:53842.service - OpenSSH per-connection server daemon (10.0.0.1:53842). Jul 2 00:18:38.600809 systemd-logind[1433]: Removed session 20. 
Jul 2 00:18:38.638280 sshd[4147]: Accepted publickey for core from 10.0.0.1 port 53842 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:18:38.640990 sshd[4147]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:18:38.648363 systemd-logind[1433]: New session 21 of user core. Jul 2 00:18:38.655591 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 2 00:18:39.143657 sshd[4147]: pam_unix(sshd:session): session closed for user core Jul 2 00:18:39.157067 systemd[1]: sshd@20-10.0.0.74:22-10.0.0.1:53842.service: Deactivated successfully. Jul 2 00:18:39.159626 systemd[1]: session-21.scope: Deactivated successfully. Jul 2 00:18:39.161947 systemd-logind[1433]: Session 21 logged out. Waiting for processes to exit. Jul 2 00:18:39.168821 systemd[1]: Started sshd@21-10.0.0.74:22-10.0.0.1:53846.service - OpenSSH per-connection server daemon (10.0.0.1:53846). Jul 2 00:18:39.170514 systemd-logind[1433]: Removed session 21. Jul 2 00:18:39.209359 sshd[4159]: Accepted publickey for core from 10.0.0.1 port 53846 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:18:39.211262 sshd[4159]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:18:39.215908 systemd-logind[1433]: New session 22 of user core. Jul 2 00:18:39.226461 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 2 00:18:41.066782 sshd[4159]: pam_unix(sshd:session): session closed for user core Jul 2 00:18:41.078161 systemd[1]: sshd@21-10.0.0.74:22-10.0.0.1:53846.service: Deactivated successfully. Jul 2 00:18:41.080900 systemd[1]: session-22.scope: Deactivated successfully. Jul 2 00:18:41.083520 systemd-logind[1433]: Session 22 logged out. Waiting for processes to exit. Jul 2 00:18:41.090727 systemd[1]: Started sshd@22-10.0.0.74:22-10.0.0.1:53856.service - OpenSSH per-connection server daemon (10.0.0.1:53856). Jul 2 00:18:41.092763 systemd-logind[1433]: Removed session 22. 
Jul 2 00:18:41.131372 sshd[4180]: Accepted publickey for core from 10.0.0.1 port 53856 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:18:41.132711 sshd[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:18:41.138139 systemd-logind[1433]: New session 23 of user core. Jul 2 00:18:41.144518 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 2 00:18:41.441123 sshd[4180]: pam_unix(sshd:session): session closed for user core Jul 2 00:18:41.450216 systemd[1]: sshd@22-10.0.0.74:22-10.0.0.1:53856.service: Deactivated successfully. Jul 2 00:18:41.452621 systemd[1]: session-23.scope: Deactivated successfully. Jul 2 00:18:41.455681 systemd-logind[1433]: Session 23 logged out. Waiting for processes to exit. Jul 2 00:18:41.465035 systemd[1]: Started sshd@23-10.0.0.74:22-10.0.0.1:53868.service - OpenSSH per-connection server daemon (10.0.0.1:53868). Jul 2 00:18:41.466243 systemd-logind[1433]: Removed session 23. Jul 2 00:18:41.501623 sshd[4192]: Accepted publickey for core from 10.0.0.1 port 53868 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:18:41.503816 sshd[4192]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:18:41.510137 systemd-logind[1433]: New session 24 of user core. Jul 2 00:18:41.514476 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 2 00:18:41.639002 sshd[4192]: pam_unix(sshd:session): session closed for user core Jul 2 00:18:41.643739 systemd[1]: sshd@23-10.0.0.74:22-10.0.0.1:53868.service: Deactivated successfully. Jul 2 00:18:41.646960 systemd[1]: session-24.scope: Deactivated successfully. Jul 2 00:18:41.648187 systemd-logind[1433]: Session 24 logged out. Waiting for processes to exit. Jul 2 00:18:41.650115 systemd-logind[1433]: Removed session 24. Jul 2 00:18:46.660251 systemd[1]: Started sshd@24-10.0.0.74:22-10.0.0.1:53878.service - OpenSSH per-connection server daemon (10.0.0.1:53878). 
Jul 2 00:18:46.716642 sshd[4206]: Accepted publickey for core from 10.0.0.1 port 53878 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:18:46.718724 sshd[4206]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:18:46.725710 systemd-logind[1433]: New session 25 of user core.
Jul 2 00:18:46.732758 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 2 00:18:46.859006 sshd[4206]: pam_unix(sshd:session): session closed for user core
Jul 2 00:18:46.863985 systemd[1]: sshd@24-10.0.0.74:22-10.0.0.1:53878.service: Deactivated successfully.
Jul 2 00:18:46.866740 systemd[1]: session-25.scope: Deactivated successfully.
Jul 2 00:18:46.868331 systemd-logind[1433]: Session 25 logged out. Waiting for processes to exit.
Jul 2 00:18:46.870453 systemd-logind[1433]: Removed session 25.
Jul 2 00:18:51.871741 systemd[1]: Started sshd@25-10.0.0.74:22-10.0.0.1:51690.service - OpenSSH per-connection server daemon (10.0.0.1:51690).
Jul 2 00:18:51.910627 sshd[4220]: Accepted publickey for core from 10.0.0.1 port 51690 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:18:51.912200 sshd[4220]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:18:51.916353 systemd-logind[1433]: New session 26 of user core.
Jul 2 00:18:51.924474 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 2 00:18:52.056466 sshd[4220]: pam_unix(sshd:session): session closed for user core
Jul 2 00:18:52.061262 systemd[1]: sshd@25-10.0.0.74:22-10.0.0.1:51690.service: Deactivated successfully.
Jul 2 00:18:52.064475 systemd[1]: session-26.scope: Deactivated successfully.
Jul 2 00:18:52.065221 systemd-logind[1433]: Session 26 logged out. Waiting for processes to exit.
Jul 2 00:18:52.066268 systemd-logind[1433]: Removed session 26.
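The sshd/systemd-logind entries above follow a fixed accept/open/close pattern. As an aside, a minimal sketch of pulling the accepted-connection fields out of such lines with a regex (the helper name and pattern are illustrative, not from any library):

```python
import re

# Matches sshd "Accepted <method> for <user> from <ip> port <port>" entries
# like the ones in this log. Hypothetical helper for illustration only.
ACCEPT_RE = re.compile(
    r"sshd\[(?P<pid>\d+)\]: Accepted (?P<method>\S+) for (?P<user>\S+) "
    r"from (?P<ip>\S+) port (?P<port>\d+)"
)

def parse_accept(line):
    """Return a dict of connection fields, or None if the line doesn't match."""
    m = ACCEPT_RE.search(line)
    if not m:
        return None
    rec = m.groupdict()
    rec["pid"] = int(rec["pid"])
    rec["port"] = int(rec["port"])
    return rec

line = ("Jul 2 00:18:51.910627 sshd[4220]: Accepted publickey for core "
        "from 10.0.0.1 port 51690 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI")
rec = parse_accept(line)
# rec["user"] == "core", rec["ip"] == "10.0.0.1", rec["port"] == 51690
```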
Jul 2 00:18:52.244547 kubelet[2592]: E0702 00:18:52.244406 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:18:57.067447 systemd[1]: Started sshd@26-10.0.0.74:22-10.0.0.1:51700.service - OpenSSH per-connection server daemon (10.0.0.1:51700).
Jul 2 00:18:57.105381 sshd[4238]: Accepted publickey for core from 10.0.0.1 port 51700 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:18:57.106954 sshd[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:18:57.110995 systemd-logind[1433]: New session 27 of user core.
Jul 2 00:18:57.120453 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 2 00:18:57.227124 sshd[4238]: pam_unix(sshd:session): session closed for user core
Jul 2 00:18:57.231528 systemd[1]: sshd@26-10.0.0.74:22-10.0.0.1:51700.service: Deactivated successfully.
Jul 2 00:18:57.233675 systemd[1]: session-27.scope: Deactivated successfully.
Jul 2 00:18:57.234462 systemd-logind[1433]: Session 27 logged out. Waiting for processes to exit.
Jul 2 00:18:57.235334 systemd-logind[1433]: Removed session 27.
Jul 2 00:18:59.248665 kubelet[2592]: E0702 00:18:59.245977 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:19:01.244943 kubelet[2592]: E0702 00:19:01.244890 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:19:01.245559 kubelet[2592]: E0702 00:19:01.245496 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:19:02.239689 systemd[1]: Started sshd@27-10.0.0.74:22-10.0.0.1:34650.service - OpenSSH per-connection server daemon (10.0.0.1:34650).
Jul 2 00:19:02.279393 sshd[4253]: Accepted publickey for core from 10.0.0.1 port 34650 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:19:02.281077 sshd[4253]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:19:02.285699 systemd-logind[1433]: New session 28 of user core.
Jul 2 00:19:02.295436 systemd[1]: Started session-28.scope - Session 28 of User core.
Jul 2 00:19:02.432052 sshd[4253]: pam_unix(sshd:session): session closed for user core
Jul 2 00:19:02.435960 systemd[1]: sshd@27-10.0.0.74:22-10.0.0.1:34650.service: Deactivated successfully.
Jul 2 00:19:02.438077 systemd[1]: session-28.scope: Deactivated successfully.
Jul 2 00:19:02.438766 systemd-logind[1433]: Session 28 logged out. Waiting for processes to exit.
Jul 2 00:19:02.439747 systemd-logind[1433]: Removed session 28.
Jul 2 00:19:07.443582 systemd[1]: Started sshd@28-10.0.0.74:22-10.0.0.1:34664.service - OpenSSH per-connection server daemon (10.0.0.1:34664).
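The recurring kubelet dns.go:153 errors above arise because glibc's resolv.conf honors at most three nameserver entries (MAXNS), so kubelet warns and applies only the first three; the log shows the applied line "1.1.1.1 1.0.0.1 8.8.8.8". A rough, hypothetical sketch of that truncation behavior (not kubelet's actual code):

```python
MAX_NAMESERVERS = 3  # glibc resolv.conf limit (MAXNS)

def apply_nameserver_limit(nameservers):
    """Keep the first MAX_NAMESERVERS entries; report the rest as omitted."""
    kept = nameservers[:MAX_NAMESERVERS]
    omitted = nameservers[MAX_NAMESERVERS:]
    if omitted:
        # Mirrors the wording of the kubelet warning seen in this log.
        print("Nameserver limits were exceeded, some nameservers have been "
              "omitted, the applied nameserver line is: " + " ".join(kept))
    return kept, omitted

kept, omitted = apply_nameserver_limit(["1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"])
# kept == ["1.1.1.1", "1.0.0.1", "8.8.8.8"], omitted == ["8.8.4.4"]
```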
Jul 2 00:19:07.484372 sshd[4267]: Accepted publickey for core from 10.0.0.1 port 34664 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:19:07.486115 sshd[4267]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:19:07.490506 systemd-logind[1433]: New session 29 of user core.
Jul 2 00:19:07.501529 systemd[1]: Started session-29.scope - Session 29 of User core.
Jul 2 00:19:07.787756 sshd[4267]: pam_unix(sshd:session): session closed for user core
Jul 2 00:19:07.791899 systemd[1]: sshd@28-10.0.0.74:22-10.0.0.1:34664.service: Deactivated successfully.
Jul 2 00:19:07.794313 systemd[1]: session-29.scope: Deactivated successfully.
Jul 2 00:19:07.795160 systemd-logind[1433]: Session 29 logged out. Waiting for processes to exit.
Jul 2 00:19:07.796127 systemd-logind[1433]: Removed session 29.
Jul 2 00:19:12.800447 systemd[1]: Started sshd@29-10.0.0.74:22-10.0.0.1:60564.service - OpenSSH per-connection server daemon (10.0.0.1:60564).
Jul 2 00:19:12.846732 sshd[4284]: Accepted publickey for core from 10.0.0.1 port 60564 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:19:12.848451 sshd[4284]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:19:12.853751 systemd-logind[1433]: New session 30 of user core.
Jul 2 00:19:12.863498 systemd[1]: Started session-30.scope - Session 30 of User core.
Jul 2 00:19:13.048400 sshd[4284]: pam_unix(sshd:session): session closed for user core
Jul 2 00:19:13.060448 systemd[1]: sshd@29-10.0.0.74:22-10.0.0.1:60564.service: Deactivated successfully.
Jul 2 00:19:13.062210 systemd[1]: session-30.scope: Deactivated successfully.
Jul 2 00:19:13.064077 systemd-logind[1433]: Session 30 logged out. Waiting for processes to exit.
Jul 2 00:19:13.065389 systemd[1]: Started sshd@30-10.0.0.74:22-10.0.0.1:60580.service - OpenSSH per-connection server daemon (10.0.0.1:60580).
Jul 2 00:19:13.066234 systemd-logind[1433]: Removed session 30.
Jul 2 00:19:13.103308 sshd[4300]: Accepted publickey for core from 10.0.0.1 port 60580 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:19:13.104778 sshd[4300]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:19:13.108792 systemd-logind[1433]: New session 31 of user core.
Jul 2 00:19:13.114491 systemd[1]: Started session-31.scope - Session 31 of User core.
Jul 2 00:19:15.176072 systemd[1]: run-containerd-runc-k8s.io-0d974057915de228d91f6d49fe9d2e1e0a15080e5fb25e72d0b2e10e9797a0ed-runc.6qHA4I.mount: Deactivated successfully.
Jul 2 00:19:15.194183 containerd[1452]: time="2024-07-02T00:19:15.194120208Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 00:19:15.195257 containerd[1452]: time="2024-07-02T00:19:15.195223897Z" level=info msg="StopContainer for \"0d974057915de228d91f6d49fe9d2e1e0a15080e5fb25e72d0b2e10e9797a0ed\" with timeout 2 (s)"
Jul 2 00:19:15.195524 containerd[1452]: time="2024-07-02T00:19:15.195498113Z" level=info msg="Stop container \"0d974057915de228d91f6d49fe9d2e1e0a15080e5fb25e72d0b2e10e9797a0ed\" with signal terminated"
Jul 2 00:19:15.204685 systemd-networkd[1388]: lxc_health: Link DOWN
Jul 2 00:19:15.204696 systemd-networkd[1388]: lxc_health: Lost carrier
Jul 2 00:19:15.237823 systemd[1]: cri-containerd-0d974057915de228d91f6d49fe9d2e1e0a15080e5fb25e72d0b2e10e9797a0ed.scope: Deactivated successfully.
Jul 2 00:19:15.238141 systemd[1]: cri-containerd-0d974057915de228d91f6d49fe9d2e1e0a15080e5fb25e72d0b2e10e9797a0ed.scope: Consumed 7.591s CPU time.
Jul 2 00:19:15.259798 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d974057915de228d91f6d49fe9d2e1e0a15080e5fb25e72d0b2e10e9797a0ed-rootfs.mount: Deactivated successfully.
Jul 2 00:19:15.315096 kubelet[2592]: E0702 00:19:15.315059 2592 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 2 00:19:15.341508 containerd[1452]: time="2024-07-02T00:19:15.341447281Z" level=info msg="StopContainer for \"e6c0d81bd5773321cf1d4a8c62065081c2e26ec0d821b1af0d83a47971a8e400\" with timeout 30 (s)"
Jul 2 00:19:15.341989 containerd[1452]: time="2024-07-02T00:19:15.341937064Z" level=info msg="Stop container \"e6c0d81bd5773321cf1d4a8c62065081c2e26ec0d821b1af0d83a47971a8e400\" with signal terminated"
Jul 2 00:19:15.353541 systemd[1]: cri-containerd-e6c0d81bd5773321cf1d4a8c62065081c2e26ec0d821b1af0d83a47971a8e400.scope: Deactivated successfully.
Jul 2 00:19:15.375142 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e6c0d81bd5773321cf1d4a8c62065081c2e26ec0d821b1af0d83a47971a8e400-rootfs.mount: Deactivated successfully.
Jul 2 00:19:15.461252 containerd[1452]: time="2024-07-02T00:19:15.460947991Z" level=info msg="shim disconnected" id=0d974057915de228d91f6d49fe9d2e1e0a15080e5fb25e72d0b2e10e9797a0ed namespace=k8s.io
Jul 2 00:19:15.461252 containerd[1452]: time="2024-07-02T00:19:15.461002644Z" level=warning msg="cleaning up after shim disconnected" id=0d974057915de228d91f6d49fe9d2e1e0a15080e5fb25e72d0b2e10e9797a0ed namespace=k8s.io
Jul 2 00:19:15.461252 containerd[1452]: time="2024-07-02T00:19:15.461011060Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:19:15.461252 containerd[1452]: time="2024-07-02T00:19:15.461101861Z" level=info msg="shim disconnected" id=e6c0d81bd5773321cf1d4a8c62065081c2e26ec0d821b1af0d83a47971a8e400 namespace=k8s.io
Jul 2 00:19:15.461252 containerd[1452]: time="2024-07-02T00:19:15.461154630Z" level=warning msg="cleaning up after shim disconnected" id=e6c0d81bd5773321cf1d4a8c62065081c2e26ec0d821b1af0d83a47971a8e400 namespace=k8s.io
Jul 2 00:19:15.461252 containerd[1452]: time="2024-07-02T00:19:15.461164048Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:19:15.541190 containerd[1452]: time="2024-07-02T00:19:15.541106626Z" level=info msg="StopContainer for \"0d974057915de228d91f6d49fe9d2e1e0a15080e5fb25e72d0b2e10e9797a0ed\" returns successfully"
Jul 2 00:19:15.543744 containerd[1452]: time="2024-07-02T00:19:15.543686446Z" level=info msg="StopContainer for \"e6c0d81bd5773321cf1d4a8c62065081c2e26ec0d821b1af0d83a47971a8e400\" returns successfully"
Jul 2 00:19:15.546814 containerd[1452]: time="2024-07-02T00:19:15.546766357Z" level=info msg="StopPodSandbox for \"3a03b7dab2471d6d875c5cd179d763f200568bda3357126b703302fbeedd7443\""
Jul 2 00:19:15.548229 containerd[1452]: time="2024-07-02T00:19:15.548189959Z" level=info msg="StopPodSandbox for \"510ab6ae6367cb4457589b3838502d66b05bd3c8c8ff19192adca7dbc75f937b\""
Jul 2 00:19:15.551916 containerd[1452]: time="2024-07-02T00:19:15.546829496Z" level=info msg="Container to stop \"e6c0d81bd5773321cf1d4a8c62065081c2e26ec0d821b1af0d83a47971a8e400\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:19:15.554585 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3a03b7dab2471d6d875c5cd179d763f200568bda3357126b703302fbeedd7443-shm.mount: Deactivated successfully.
Jul 2 00:19:15.560352 systemd[1]: cri-containerd-3a03b7dab2471d6d875c5cd179d763f200568bda3357126b703302fbeedd7443.scope: Deactivated successfully.
Jul 2 00:19:15.566422 containerd[1452]: time="2024-07-02T00:19:15.548248619Z" level=info msg="Container to stop \"04c384b75cda6454783802fb6c3bc4e5366762e4e503f7f26d2c4511ba5458ad\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:19:15.566422 containerd[1452]: time="2024-07-02T00:19:15.566410769Z" level=info msg="Container to stop \"3e88f7e73086981cbe177ca26af6da17b511a721c36c34c6d04eec4dea59346c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:19:15.566422 containerd[1452]: time="2024-07-02T00:19:15.566429725Z" level=info msg="Container to stop \"deacb04925f1058298c5612fc58754d00d915727dbe23e725cfb22f008ca729c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:19:15.566743 containerd[1452]: time="2024-07-02T00:19:15.566445314Z" level=info msg="Container to stop \"a4f2278161e21ad0bba83fb6a9746bf403d05c61488f6417f3df2d0e591c92d8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:19:15.566743 containerd[1452]: time="2024-07-02T00:19:15.566459090Z" level=info msg="Container to stop \"0d974057915de228d91f6d49fe9d2e1e0a15080e5fb25e72d0b2e10e9797a0ed\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:19:15.578818 systemd[1]: cri-containerd-510ab6ae6367cb4457589b3838502d66b05bd3c8c8ff19192adca7dbc75f937b.scope: Deactivated successfully.
Jul 2 00:19:15.777754 containerd[1452]: time="2024-07-02T00:19:15.777613518Z" level=info msg="shim disconnected" id=510ab6ae6367cb4457589b3838502d66b05bd3c8c8ff19192adca7dbc75f937b namespace=k8s.io
Jul 2 00:19:15.778247 containerd[1452]: time="2024-07-02T00:19:15.777935434Z" level=warning msg="cleaning up after shim disconnected" id=510ab6ae6367cb4457589b3838502d66b05bd3c8c8ff19192adca7dbc75f937b namespace=k8s.io
Jul 2 00:19:15.778247 containerd[1452]: time="2024-07-02T00:19:15.777952677Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:19:15.778247 containerd[1452]: time="2024-07-02T00:19:15.777822562Z" level=info msg="shim disconnected" id=3a03b7dab2471d6d875c5cd179d763f200568bda3357126b703302fbeedd7443 namespace=k8s.io
Jul 2 00:19:15.778247 containerd[1452]: time="2024-07-02T00:19:15.778205172Z" level=warning msg="cleaning up after shim disconnected" id=3a03b7dab2471d6d875c5cd179d763f200568bda3357126b703302fbeedd7443 namespace=k8s.io
Jul 2 00:19:15.778247 containerd[1452]: time="2024-07-02T00:19:15.778217115Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:19:15.795922 containerd[1452]: time="2024-07-02T00:19:15.795866098Z" level=info msg="TearDown network for sandbox \"510ab6ae6367cb4457589b3838502d66b05bd3c8c8ff19192adca7dbc75f937b\" successfully"
Jul 2 00:19:15.795922 containerd[1452]: time="2024-07-02T00:19:15.795903018Z" level=info msg="StopPodSandbox for \"510ab6ae6367cb4457589b3838502d66b05bd3c8c8ff19192adca7dbc75f937b\" returns successfully"
Jul 2 00:19:15.796135 containerd[1452]: time="2024-07-02T00:19:15.796089619Z" level=info msg="TearDown network for sandbox \"3a03b7dab2471d6d875c5cd179d763f200568bda3357126b703302fbeedd7443\" successfully"
Jul 2 00:19:15.796135 containerd[1452]: time="2024-07-02T00:19:15.796127841Z" level=info msg="StopPodSandbox for \"3a03b7dab2471d6d875c5cd179d763f200568bda3357126b703302fbeedd7443\" returns successfully"
Jul 2 00:19:16.002360 kubelet[2592]: I0702 00:19:16.002271 2592 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f49b30bb-29a7-49fa-a312-9f3044e3341a-clustermesh-secrets\") pod \"f49b30bb-29a7-49fa-a312-9f3044e3341a\" (UID: \"f49b30bb-29a7-49fa-a312-9f3044e3341a\") "
Jul 2 00:19:16.002360 kubelet[2592]: I0702 00:19:16.002349 2592 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f49b30bb-29a7-49fa-a312-9f3044e3341a-cilium-run\") pod \"f49b30bb-29a7-49fa-a312-9f3044e3341a\" (UID: \"f49b30bb-29a7-49fa-a312-9f3044e3341a\") "
Jul 2 00:19:16.002360 kubelet[2592]: I0702 00:19:16.002371 2592 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f49b30bb-29a7-49fa-a312-9f3044e3341a-etc-cni-netd\") pod \"f49b30bb-29a7-49fa-a312-9f3044e3341a\" (UID: \"f49b30bb-29a7-49fa-a312-9f3044e3341a\") "
Jul 2 00:19:16.002622 kubelet[2592]: I0702 00:19:16.002389 2592 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f49b30bb-29a7-49fa-a312-9f3044e3341a-hubble-tls\") pod \"f49b30bb-29a7-49fa-a312-9f3044e3341a\" (UID: \"f49b30bb-29a7-49fa-a312-9f3044e3341a\") "
Jul 2 00:19:16.002622 kubelet[2592]: I0702 00:19:16.002409 2592 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f49b30bb-29a7-49fa-a312-9f3044e3341a-bpf-maps\") pod \"f49b30bb-29a7-49fa-a312-9f3044e3341a\" (UID: \"f49b30bb-29a7-49fa-a312-9f3044e3341a\") "
Jul 2 00:19:16.002622 kubelet[2592]: I0702 00:19:16.002431 2592 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f09bc6a0-f822-4dc8-8a26-ece77b49ccd3-cilium-config-path\") pod \"f09bc6a0-f822-4dc8-8a26-ece77b49ccd3\" (UID: \"f09bc6a0-f822-4dc8-8a26-ece77b49ccd3\") "
Jul 2 00:19:16.002622 kubelet[2592]: I0702 00:19:16.002447 2592 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f49b30bb-29a7-49fa-a312-9f3044e3341a-cni-path\") pod \"f49b30bb-29a7-49fa-a312-9f3044e3341a\" (UID: \"f49b30bb-29a7-49fa-a312-9f3044e3341a\") "
Jul 2 00:19:16.002622 kubelet[2592]: I0702 00:19:16.002444 2592 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f49b30bb-29a7-49fa-a312-9f3044e3341a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f49b30bb-29a7-49fa-a312-9f3044e3341a" (UID: "f49b30bb-29a7-49fa-a312-9f3044e3341a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:19:16.002622 kubelet[2592]: I0702 00:19:16.002498 2592 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f49b30bb-29a7-49fa-a312-9f3044e3341a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f49b30bb-29a7-49fa-a312-9f3044e3341a" (UID: "f49b30bb-29a7-49fa-a312-9f3044e3341a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:19:16.002770 kubelet[2592]: I0702 00:19:16.002477 2592 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f49b30bb-29a7-49fa-a312-9f3044e3341a-lib-modules\") pod \"f49b30bb-29a7-49fa-a312-9f3044e3341a\" (UID: \"f49b30bb-29a7-49fa-a312-9f3044e3341a\") "
Jul 2 00:19:16.002770 kubelet[2592]: I0702 00:19:16.002526 2592 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f49b30bb-29a7-49fa-a312-9f3044e3341a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f49b30bb-29a7-49fa-a312-9f3044e3341a" (UID: "f49b30bb-29a7-49fa-a312-9f3044e3341a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:19:16.002770 kubelet[2592]: I0702 00:19:16.002545 2592 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f49b30bb-29a7-49fa-a312-9f3044e3341a-hostproc\") pod \"f49b30bb-29a7-49fa-a312-9f3044e3341a\" (UID: \"f49b30bb-29a7-49fa-a312-9f3044e3341a\") "
Jul 2 00:19:16.002770 kubelet[2592]: I0702 00:19:16.002567 2592 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f49b30bb-29a7-49fa-a312-9f3044e3341a-host-proc-sys-net\") pod \"f49b30bb-29a7-49fa-a312-9f3044e3341a\" (UID: \"f49b30bb-29a7-49fa-a312-9f3044e3341a\") "
Jul 2 00:19:16.002770 kubelet[2592]: I0702 00:19:16.002587 2592 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f49b30bb-29a7-49fa-a312-9f3044e3341a-host-proc-sys-kernel\") pod \"f49b30bb-29a7-49fa-a312-9f3044e3341a\" (UID: \"f49b30bb-29a7-49fa-a312-9f3044e3341a\") "
Jul 2 00:19:16.002770 kubelet[2592]: I0702 00:19:16.002603 2592 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f49b30bb-29a7-49fa-a312-9f3044e3341a-cilium-cgroup\") pod \"f49b30bb-29a7-49fa-a312-9f3044e3341a\" (UID: \"f49b30bb-29a7-49fa-a312-9f3044e3341a\") "
Jul 2 00:19:16.002911 kubelet[2592]: I0702 00:19:16.002627 2592 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f49b30bb-29a7-49fa-a312-9f3044e3341a-cilium-config-path\") pod \"f49b30bb-29a7-49fa-a312-9f3044e3341a\" (UID: \"f49b30bb-29a7-49fa-a312-9f3044e3341a\") "
Jul 2 00:19:16.002911 kubelet[2592]: I0702 00:19:16.002645 2592 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f49b30bb-29a7-49fa-a312-9f3044e3341a-xtables-lock\") pod \"f49b30bb-29a7-49fa-a312-9f3044e3341a\" (UID: \"f49b30bb-29a7-49fa-a312-9f3044e3341a\") "
Jul 2 00:19:16.002911 kubelet[2592]: I0702 00:19:16.002669 2592 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h45np\" (UniqueName: \"kubernetes.io/projected/f49b30bb-29a7-49fa-a312-9f3044e3341a-kube-api-access-h45np\") pod \"f49b30bb-29a7-49fa-a312-9f3044e3341a\" (UID: \"f49b30bb-29a7-49fa-a312-9f3044e3341a\") "
Jul 2 00:19:16.002911 kubelet[2592]: I0702 00:19:16.002693 2592 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z66sx\" (UniqueName: \"kubernetes.io/projected/f09bc6a0-f822-4dc8-8a26-ece77b49ccd3-kube-api-access-z66sx\") pod \"f09bc6a0-f822-4dc8-8a26-ece77b49ccd3\" (UID: \"f09bc6a0-f822-4dc8-8a26-ece77b49ccd3\") "
Jul 2 00:19:16.002911 kubelet[2592]: I0702 00:19:16.002720 2592 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f49b30bb-29a7-49fa-a312-9f3044e3341a-cilium-run\") on node \"localhost\" DevicePath \"\""
Jul 2 00:19:16.002911 kubelet[2592]: I0702 00:19:16.002731 2592 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f49b30bb-29a7-49fa-a312-9f3044e3341a-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jul 2 00:19:16.002911 kubelet[2592]: I0702 00:19:16.002742 2592 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f49b30bb-29a7-49fa-a312-9f3044e3341a-lib-modules\") on node \"localhost\" DevicePath \"\""
Jul 2 00:19:16.007084 kubelet[2592]: I0702 00:19:16.006653 2592 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f09bc6a0-f822-4dc8-8a26-ece77b49ccd3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f09bc6a0-f822-4dc8-8a26-ece77b49ccd3" (UID: "f09bc6a0-f822-4dc8-8a26-ece77b49ccd3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 2 00:19:16.007084 kubelet[2592]: I0702 00:19:16.006713 2592 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f49b30bb-29a7-49fa-a312-9f3044e3341a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f49b30bb-29a7-49fa-a312-9f3044e3341a" (UID: "f49b30bb-29a7-49fa-a312-9f3044e3341a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:19:16.007084 kubelet[2592]: I0702 00:19:16.006731 2592 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f49b30bb-29a7-49fa-a312-9f3044e3341a-cni-path" (OuterVolumeSpecName: "cni-path") pod "f49b30bb-29a7-49fa-a312-9f3044e3341a" (UID: "f49b30bb-29a7-49fa-a312-9f3044e3341a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:19:16.007084 kubelet[2592]: I0702 00:19:16.006755 2592 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f49b30bb-29a7-49fa-a312-9f3044e3341a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f49b30bb-29a7-49fa-a312-9f3044e3341a" (UID: "f49b30bb-29a7-49fa-a312-9f3044e3341a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:19:16.007084 kubelet[2592]: I0702 00:19:16.006777 2592 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f49b30bb-29a7-49fa-a312-9f3044e3341a-hostproc" (OuterVolumeSpecName: "hostproc") pod "f49b30bb-29a7-49fa-a312-9f3044e3341a" (UID: "f49b30bb-29a7-49fa-a312-9f3044e3341a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:19:16.007275 kubelet[2592]: I0702 00:19:16.006797 2592 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f49b30bb-29a7-49fa-a312-9f3044e3341a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f49b30bb-29a7-49fa-a312-9f3044e3341a" (UID: "f49b30bb-29a7-49fa-a312-9f3044e3341a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:19:16.007275 kubelet[2592]: I0702 00:19:16.006819 2592 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f49b30bb-29a7-49fa-a312-9f3044e3341a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f49b30bb-29a7-49fa-a312-9f3044e3341a" (UID: "f49b30bb-29a7-49fa-a312-9f3044e3341a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:19:16.007275 kubelet[2592]: I0702 00:19:16.006850 2592 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f49b30bb-29a7-49fa-a312-9f3044e3341a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f49b30bb-29a7-49fa-a312-9f3044e3341a" (UID: "f49b30bb-29a7-49fa-a312-9f3044e3341a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:19:16.010441 kubelet[2592]: I0702 00:19:16.010372 2592 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f49b30bb-29a7-49fa-a312-9f3044e3341a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f49b30bb-29a7-49fa-a312-9f3044e3341a" (UID: "f49b30bb-29a7-49fa-a312-9f3044e3341a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 2 00:19:16.010536 kubelet[2592]: I0702 00:19:16.010448 2592 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f09bc6a0-f822-4dc8-8a26-ece77b49ccd3-kube-api-access-z66sx" (OuterVolumeSpecName: "kube-api-access-z66sx") pod "f09bc6a0-f822-4dc8-8a26-ece77b49ccd3" (UID: "f09bc6a0-f822-4dc8-8a26-ece77b49ccd3"). InnerVolumeSpecName "kube-api-access-z66sx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 00:19:16.010714 kubelet[2592]: I0702 00:19:16.010649 2592 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f49b30bb-29a7-49fa-a312-9f3044e3341a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f49b30bb-29a7-49fa-a312-9f3044e3341a" (UID: "f49b30bb-29a7-49fa-a312-9f3044e3341a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 00:19:16.010967 kubelet[2592]: I0702 00:19:16.010939 2592 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f49b30bb-29a7-49fa-a312-9f3044e3341a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f49b30bb-29a7-49fa-a312-9f3044e3341a" (UID: "f49b30bb-29a7-49fa-a312-9f3044e3341a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 2 00:19:16.011226 kubelet[2592]: I0702 00:19:16.011194 2592 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f49b30bb-29a7-49fa-a312-9f3044e3341a-kube-api-access-h45np" (OuterVolumeSpecName: "kube-api-access-h45np") pod "f49b30bb-29a7-49fa-a312-9f3044e3341a" (UID: "f49b30bb-29a7-49fa-a312-9f3044e3341a"). InnerVolumeSpecName "kube-api-access-h45np". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 00:19:16.103506 kubelet[2592]: I0702 00:19:16.103375 2592 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f49b30bb-29a7-49fa-a312-9f3044e3341a-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jul 2 00:19:16.103506 kubelet[2592]: I0702 00:19:16.103409 2592 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f49b30bb-29a7-49fa-a312-9f3044e3341a-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jul 2 00:19:16.103506 kubelet[2592]: I0702 00:19:16.103423 2592 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f49b30bb-29a7-49fa-a312-9f3044e3341a-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 2 00:19:16.103506 kubelet[2592]: I0702 00:19:16.103433 2592 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-z66sx\" (UniqueName: \"kubernetes.io/projected/f09bc6a0-f822-4dc8-8a26-ece77b49ccd3-kube-api-access-z66sx\") on node \"localhost\" DevicePath \"\""
Jul 2 00:19:16.103506 kubelet[2592]: I0702 00:19:16.103443 2592 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-h45np\" (UniqueName: \"kubernetes.io/projected/f49b30bb-29a7-49fa-a312-9f3044e3341a-kube-api-access-h45np\") on node \"localhost\" DevicePath \"\""
Jul 2 00:19:16.103506 kubelet[2592]: I0702 00:19:16.103453 2592 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f49b30bb-29a7-49fa-a312-9f3044e3341a-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Jul 2 00:19:16.103506 kubelet[2592]: I0702 00:19:16.103473 2592 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f09bc6a0-f822-4dc8-8a26-ece77b49ccd3-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 2 00:19:16.103506 kubelet[2592]: I0702 00:19:16.103485 2592 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f49b30bb-29a7-49fa-a312-9f3044e3341a-hubble-tls\") on node \"localhost\" DevicePath \"\""
Jul 2 00:19:16.103774 kubelet[2592]: I0702 00:19:16.103499 2592 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f49b30bb-29a7-49fa-a312-9f3044e3341a-bpf-maps\") on node \"localhost\" DevicePath \"\""
Jul 2 00:19:16.103774 kubelet[2592]: I0702 00:19:16.103511 2592 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f49b30bb-29a7-49fa-a312-9f3044e3341a-cni-path\") on node \"localhost\" DevicePath \"\""
Jul 2 00:19:16.103774 kubelet[2592]: I0702 00:19:16.103523 2592 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f49b30bb-29a7-49fa-a312-9f3044e3341a-hostproc\") on node \"localhost\" DevicePath \"\""
Jul 2 00:19:16.103774 kubelet[2592]: I0702 00:19:16.103535 2592 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f49b30bb-29a7-49fa-a312-9f3044e3341a-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jul 2 00:19:16.103774 kubelet[2592]: I0702 00:19:16.103554 2592 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f49b30bb-29a7-49fa-a312-9f3044e3341a-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Jul 2 00:19:16.169518 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a03b7dab2471d6d875c5cd179d763f200568bda3357126b703302fbeedd7443-rootfs.mount: Deactivated successfully.
Jul 2 00:19:16.169641 systemd[1]: var-lib-kubelet-pods-f09bc6a0\x2df822\x2d4dc8\x2d8a26\x2dece77b49ccd3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz66sx.mount: Deactivated successfully.
Jul 2 00:19:16.169742 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-510ab6ae6367cb4457589b3838502d66b05bd3c8c8ff19192adca7dbc75f937b-rootfs.mount: Deactivated successfully.
Jul 2 00:19:16.169818 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-510ab6ae6367cb4457589b3838502d66b05bd3c8c8ff19192adca7dbc75f937b-shm.mount: Deactivated successfully.
Jul 2 00:19:16.169903 systemd[1]: var-lib-kubelet-pods-f49b30bb\x2d29a7\x2d49fa\x2da312\x2d9f3044e3341a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh45np.mount: Deactivated successfully.
Jul 2 00:19:16.169977 systemd[1]: var-lib-kubelet-pods-f49b30bb\x2d29a7\x2d49fa\x2da312\x2d9f3044e3341a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jul 2 00:19:16.170051 systemd[1]: var-lib-kubelet-pods-f49b30bb\x2d29a7\x2d49fa\x2da312\x2d9f3044e3341a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jul 2 00:19:16.710097 sshd[4300]: pam_unix(sshd:session): session closed for user core
Jul 2 00:19:16.721412 systemd[1]: sshd@30-10.0.0.74:22-10.0.0.1:60580.service: Deactivated successfully.
Jul 2 00:19:16.723357 systemd[1]: session-31.scope: Deactivated successfully.
Jul 2 00:19:16.725110 systemd-logind[1433]: Session 31 logged out. Waiting for processes to exit.
Jul 2 00:19:16.730609 systemd[1]: Started sshd@31-10.0.0.74:22-10.0.0.1:60588.service - OpenSSH per-connection server daemon (10.0.0.1:60588).
Jul 2 00:19:16.731705 systemd-logind[1433]: Removed session 31.
Jul 2 00:19:16.768340 kubelet[2592]: I0702 00:19:16.765983 2592 scope.go:117] "RemoveContainer" containerID="0d974057915de228d91f6d49fe9d2e1e0a15080e5fb25e72d0b2e10e9797a0ed"
Jul 2 00:19:16.769363 containerd[1452]: time="2024-07-02T00:19:16.769317315Z" level=info msg="RemoveContainer for \"0d974057915de228d91f6d49fe9d2e1e0a15080e5fb25e72d0b2e10e9797a0ed\""
Jul 2 00:19:16.779247 systemd[1]: Removed slice kubepods-burstable-podf49b30bb_29a7_49fa_a312_9f3044e3341a.slice - libcontainer container kubepods-burstable-podf49b30bb_29a7_49fa_a312_9f3044e3341a.slice.
Jul 2 00:19:16.779421 systemd[1]: kubepods-burstable-podf49b30bb_29a7_49fa_a312_9f3044e3341a.slice: Consumed 7.699s CPU time.
Jul 2 00:19:16.780767 systemd[1]: Removed slice kubepods-besteffort-podf09bc6a0_f822_4dc8_8a26_ece77b49ccd3.slice - libcontainer container kubepods-besteffort-podf09bc6a0_f822_4dc8_8a26_ece77b49ccd3.slice.
Jul 2 00:19:16.807499 sshd[4463]: Accepted publickey for core from 10.0.0.1 port 60588 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:19:16.809374 sshd[4463]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:19:16.811001 containerd[1452]: time="2024-07-02T00:19:16.810948327Z" level=info msg="RemoveContainer for \"0d974057915de228d91f6d49fe9d2e1e0a15080e5fb25e72d0b2e10e9797a0ed\" returns successfully"
Jul 2 00:19:16.811332 kubelet[2592]: I0702 00:19:16.811273 2592 scope.go:117] "RemoveContainer" containerID="a4f2278161e21ad0bba83fb6a9746bf403d05c61488f6417f3df2d0e591c92d8"
Jul 2 00:19:16.812952 containerd[1452]: time="2024-07-02T00:19:16.812925441Z" level=info msg="RemoveContainer for \"a4f2278161e21ad0bba83fb6a9746bf403d05c61488f6417f3df2d0e591c92d8\""
Jul 2 00:19:16.818978 systemd-logind[1433]: New session 32 of user core.
Jul 2 00:19:16.833545 systemd[1]: Started session-32.scope - Session 32 of User core.
Jul 2 00:19:16.851272 containerd[1452]: time="2024-07-02T00:19:16.851202456Z" level=info msg="RemoveContainer for \"a4f2278161e21ad0bba83fb6a9746bf403d05c61488f6417f3df2d0e591c92d8\" returns successfully"
Jul 2 00:19:16.851546 kubelet[2592]: I0702 00:19:16.851500 2592 scope.go:117] "RemoveContainer" containerID="deacb04925f1058298c5612fc58754d00d915727dbe23e725cfb22f008ca729c"
Jul 2 00:19:16.852784 containerd[1452]: time="2024-07-02T00:19:16.852742928Z" level=info msg="RemoveContainer for \"deacb04925f1058298c5612fc58754d00d915727dbe23e725cfb22f008ca729c\""
Jul 2 00:19:16.871092 containerd[1452]: time="2024-07-02T00:19:16.871031354Z" level=info msg="RemoveContainer for \"deacb04925f1058298c5612fc58754d00d915727dbe23e725cfb22f008ca729c\" returns successfully"
Jul 2 00:19:16.871466 kubelet[2592]: I0702 00:19:16.871390 2592 scope.go:117] "RemoveContainer" containerID="04c384b75cda6454783802fb6c3bc4e5366762e4e503f7f26d2c4511ba5458ad"
Jul 2 00:19:16.872658 containerd[1452]: time="2024-07-02T00:19:16.872615809Z" level=info msg="RemoveContainer for \"04c384b75cda6454783802fb6c3bc4e5366762e4e503f7f26d2c4511ba5458ad\""
Jul 2 00:19:16.908049 kubelet[2592]: I0702 00:19:16.907998 2592 setters.go:568] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-07-02T00:19:16Z","lastTransitionTime":"2024-07-02T00:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 2 00:19:16.916211 containerd[1452]: time="2024-07-02T00:19:16.916129328Z" level=info msg="RemoveContainer for \"04c384b75cda6454783802fb6c3bc4e5366762e4e503f7f26d2c4511ba5458ad\" returns successfully"
Jul 2 00:19:16.916895 kubelet[2592]: I0702 00:19:16.916519 2592 scope.go:117] "RemoveContainer" containerID="3e88f7e73086981cbe177ca26af6da17b511a721c36c34c6d04eec4dea59346c"
Jul 2 00:19:16.919709 containerd[1452]: time="2024-07-02T00:19:16.919363229Z" level=info msg="RemoveContainer for \"3e88f7e73086981cbe177ca26af6da17b511a721c36c34c6d04eec4dea59346c\""
Jul 2 00:19:16.968960 containerd[1452]: time="2024-07-02T00:19:16.968819140Z" level=info msg="RemoveContainer for \"3e88f7e73086981cbe177ca26af6da17b511a721c36c34c6d04eec4dea59346c\" returns successfully"
Jul 2 00:19:16.969309 kubelet[2592]: I0702 00:19:16.969116 2592 scope.go:117] "RemoveContainer" containerID="e6c0d81bd5773321cf1d4a8c62065081c2e26ec0d821b1af0d83a47971a8e400"
Jul 2 00:19:16.970539 containerd[1452]: time="2024-07-02T00:19:16.970490678Z" level=info msg="RemoveContainer for \"e6c0d81bd5773321cf1d4a8c62065081c2e26ec0d821b1af0d83a47971a8e400\""
Jul 2 00:19:17.009081 containerd[1452]: time="2024-07-02T00:19:17.009027522Z" level=info msg="RemoveContainer for \"e6c0d81bd5773321cf1d4a8c62065081c2e26ec0d821b1af0d83a47971a8e400\" returns successfully"
Jul 2 00:19:17.247354 kubelet[2592]: I0702 00:19:17.247228 2592 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f09bc6a0-f822-4dc8-8a26-ece77b49ccd3" path="/var/lib/kubelet/pods/f09bc6a0-f822-4dc8-8a26-ece77b49ccd3/volumes"
Jul 2 00:19:17.247852 kubelet[2592]: I0702 00:19:17.247828 2592 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f49b30bb-29a7-49fa-a312-9f3044e3341a" path="/var/lib/kubelet/pods/f49b30bb-29a7-49fa-a312-9f3044e3341a/volumes"
Jul 2 00:19:17.651647 sshd[4463]: pam_unix(sshd:session): session closed for user core
Jul 2 00:19:17.662937 systemd[1]: sshd@31-10.0.0.74:22-10.0.0.1:60588.service: Deactivated successfully.
Jul 2 00:19:17.665278 systemd[1]: session-32.scope: Deactivated successfully.
Jul 2 00:19:17.667266 systemd-logind[1433]: Session 32 logged out. Waiting for processes to exit.
Jul 2 00:19:17.676258 systemd[1]: Started sshd@32-10.0.0.74:22-10.0.0.1:60598.service - OpenSSH per-connection server daemon (10.0.0.1:60598).
Jul 2 00:19:17.678252 systemd-logind[1433]: Removed session 32.
Jul 2 00:19:17.709968 sshd[4476]: Accepted publickey for core from 10.0.0.1 port 60598 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:19:17.711374 sshd[4476]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:19:17.715161 systemd-logind[1433]: New session 33 of user core.
Jul 2 00:19:17.725449 systemd[1]: Started session-33.scope - Session 33 of User core.
Jul 2 00:19:17.777354 sshd[4476]: pam_unix(sshd:session): session closed for user core
Jul 2 00:19:17.785464 systemd[1]: sshd@32-10.0.0.74:22-10.0.0.1:60598.service: Deactivated successfully.
Jul 2 00:19:17.787575 systemd[1]: session-33.scope: Deactivated successfully.
Jul 2 00:19:17.789358 systemd-logind[1433]: Session 33 logged out. Waiting for processes to exit.
Jul 2 00:19:17.804669 systemd[1]: Started sshd@33-10.0.0.74:22-10.0.0.1:60608.service - OpenSSH per-connection server daemon (10.0.0.1:60608).
Jul 2 00:19:17.805609 systemd-logind[1433]: Removed session 33.
Jul 2 00:19:17.838992 sshd[4484]: Accepted publickey for core from 10.0.0.1 port 60608 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:19:17.840707 sshd[4484]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:19:17.844610 systemd-logind[1433]: New session 34 of user core.
Jul 2 00:19:17.851442 systemd[1]: Started session-34.scope - Session 34 of User core.
Jul 2 00:19:17.903541 kubelet[2592]: I0702 00:19:17.902542 2592 topology_manager.go:215] "Topology Admit Handler" podUID="1a888589-f6db-46bb-a0ea-433463cc1ff6" podNamespace="kube-system" podName="cilium-jxfsk"
Jul 2 00:19:17.903541 kubelet[2592]: E0702 00:19:17.902610 2592 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f49b30bb-29a7-49fa-a312-9f3044e3341a" containerName="apply-sysctl-overwrites"
Jul 2 00:19:17.903541 kubelet[2592]: E0702 00:19:17.902620 2592 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f09bc6a0-f822-4dc8-8a26-ece77b49ccd3" containerName="cilium-operator"
Jul 2 00:19:17.903541 kubelet[2592]: E0702 00:19:17.902627 2592 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f49b30bb-29a7-49fa-a312-9f3044e3341a" containerName="mount-bpf-fs"
Jul 2 00:19:17.903541 kubelet[2592]: E0702 00:19:17.902633 2592 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f49b30bb-29a7-49fa-a312-9f3044e3341a" containerName="clean-cilium-state"
Jul 2 00:19:17.903541 kubelet[2592]: E0702 00:19:17.902640 2592 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f49b30bb-29a7-49fa-a312-9f3044e3341a" containerName="cilium-agent"
Jul 2 00:19:17.903541 kubelet[2592]: E0702 00:19:17.902649 2592 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f49b30bb-29a7-49fa-a312-9f3044e3341a" containerName="mount-cgroup"
Jul 2 00:19:17.903541 kubelet[2592]: I0702 00:19:17.902670 2592 memory_manager.go:354] "RemoveStaleState removing state" podUID="f09bc6a0-f822-4dc8-8a26-ece77b49ccd3" containerName="cilium-operator"
Jul 2 00:19:17.903541 kubelet[2592]: I0702 00:19:17.902677 2592 memory_manager.go:354] "RemoveStaleState removing state" podUID="f49b30bb-29a7-49fa-a312-9f3044e3341a" containerName="cilium-agent"
Jul 2 00:19:17.916093 systemd[1]: Created slice kubepods-burstable-pod1a888589_f6db_46bb_a0ea_433463cc1ff6.slice - libcontainer container kubepods-burstable-pod1a888589_f6db_46bb_a0ea_433463cc1ff6.slice.
Jul 2 00:19:18.015230 kubelet[2592]: I0702 00:19:18.015179 2592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1a888589-f6db-46bb-a0ea-433463cc1ff6-bpf-maps\") pod \"cilium-jxfsk\" (UID: \"1a888589-f6db-46bb-a0ea-433463cc1ff6\") " pod="kube-system/cilium-jxfsk"
Jul 2 00:19:18.015387 kubelet[2592]: I0702 00:19:18.015249 2592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1a888589-f6db-46bb-a0ea-433463cc1ff6-clustermesh-secrets\") pod \"cilium-jxfsk\" (UID: \"1a888589-f6db-46bb-a0ea-433463cc1ff6\") " pod="kube-system/cilium-jxfsk"
Jul 2 00:19:18.015387 kubelet[2592]: I0702 00:19:18.015273 2592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpz94\" (UniqueName: \"kubernetes.io/projected/1a888589-f6db-46bb-a0ea-433463cc1ff6-kube-api-access-cpz94\") pod \"cilium-jxfsk\" (UID: \"1a888589-f6db-46bb-a0ea-433463cc1ff6\") " pod="kube-system/cilium-jxfsk"
Jul 2 00:19:18.015387 kubelet[2592]: I0702 00:19:18.015292 2592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1a888589-f6db-46bb-a0ea-433463cc1ff6-cilium-ipsec-secrets\") pod \"cilium-jxfsk\" (UID: \"1a888589-f6db-46bb-a0ea-433463cc1ff6\") " pod="kube-system/cilium-jxfsk"
Jul 2 00:19:18.015387 kubelet[2592]: I0702 00:19:18.015381 2592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1a888589-f6db-46bb-a0ea-433463cc1ff6-cni-path\") pod \"cilium-jxfsk\" (UID: \"1a888589-f6db-46bb-a0ea-433463cc1ff6\") " pod="kube-system/cilium-jxfsk"
Jul 2 00:19:18.015526 kubelet[2592]: I0702 00:19:18.015417 2592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a888589-f6db-46bb-a0ea-433463cc1ff6-xtables-lock\") pod \"cilium-jxfsk\" (UID: \"1a888589-f6db-46bb-a0ea-433463cc1ff6\") " pod="kube-system/cilium-jxfsk"
Jul 2 00:19:18.015526 kubelet[2592]: I0702 00:19:18.015486 2592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1a888589-f6db-46bb-a0ea-433463cc1ff6-cilium-config-path\") pod \"cilium-jxfsk\" (UID: \"1a888589-f6db-46bb-a0ea-433463cc1ff6\") " pod="kube-system/cilium-jxfsk"
Jul 2 00:19:18.015588 kubelet[2592]: I0702 00:19:18.015553 2592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a888589-f6db-46bb-a0ea-433463cc1ff6-lib-modules\") pod \"cilium-jxfsk\" (UID: \"1a888589-f6db-46bb-a0ea-433463cc1ff6\") " pod="kube-system/cilium-jxfsk"
Jul 2 00:19:18.015663 kubelet[2592]: I0702 00:19:18.015641 2592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1a888589-f6db-46bb-a0ea-433463cc1ff6-cilium-run\") pod \"cilium-jxfsk\" (UID: \"1a888589-f6db-46bb-a0ea-433463cc1ff6\") " pod="kube-system/cilium-jxfsk"
Jul 2 00:19:18.015705 kubelet[2592]: I0702 00:19:18.015689 2592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1a888589-f6db-46bb-a0ea-433463cc1ff6-hostproc\") pod \"cilium-jxfsk\" (UID: \"1a888589-f6db-46bb-a0ea-433463cc1ff6\") " pod="kube-system/cilium-jxfsk"
Jul 2 00:19:18.015730 kubelet[2592]: I0702 00:19:18.015719 2592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1a888589-f6db-46bb-a0ea-433463cc1ff6-host-proc-sys-net\") pod \"cilium-jxfsk\" (UID: \"1a888589-f6db-46bb-a0ea-433463cc1ff6\") " pod="kube-system/cilium-jxfsk"
Jul 2 00:19:18.015781 kubelet[2592]: I0702 00:19:18.015765 2592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1a888589-f6db-46bb-a0ea-433463cc1ff6-host-proc-sys-kernel\") pod \"cilium-jxfsk\" (UID: \"1a888589-f6db-46bb-a0ea-433463cc1ff6\") " pod="kube-system/cilium-jxfsk"
Jul 2 00:19:18.015824 kubelet[2592]: I0702 00:19:18.015802 2592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1a888589-f6db-46bb-a0ea-433463cc1ff6-hubble-tls\") pod \"cilium-jxfsk\" (UID: \"1a888589-f6db-46bb-a0ea-433463cc1ff6\") " pod="kube-system/cilium-jxfsk"
Jul 2 00:19:18.015888 kubelet[2592]: I0702 00:19:18.015851 2592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1a888589-f6db-46bb-a0ea-433463cc1ff6-cilium-cgroup\") pod \"cilium-jxfsk\" (UID: \"1a888589-f6db-46bb-a0ea-433463cc1ff6\") " pod="kube-system/cilium-jxfsk"
Jul 2 00:19:18.016024 kubelet[2592]: I0702 00:19:18.015966 2592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1a888589-f6db-46bb-a0ea-433463cc1ff6-etc-cni-netd\") pod \"cilium-jxfsk\" (UID: \"1a888589-f6db-46bb-a0ea-433463cc1ff6\") " pod="kube-system/cilium-jxfsk"
Jul 2 00:19:18.219354 kubelet[2592]: E0702 00:19:18.219198 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:19:18.219936 containerd[1452]: time="2024-07-02T00:19:18.219873253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jxfsk,Uid:1a888589-f6db-46bb-a0ea-433463cc1ff6,Namespace:kube-system,Attempt:0,}"
Jul 2 00:19:18.352701 containerd[1452]: time="2024-07-02T00:19:18.352413878Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:19:18.352701 containerd[1452]: time="2024-07-02T00:19:18.352549093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:19:18.352701 containerd[1452]: time="2024-07-02T00:19:18.352674739Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:19:18.352864 containerd[1452]: time="2024-07-02T00:19:18.352704726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:19:18.376514 systemd[1]: Started cri-containerd-364a974704873ed3d2f049d85f23b1d5a07543df89351c8d696704da674cadd6.scope - libcontainer container 364a974704873ed3d2f049d85f23b1d5a07543df89351c8d696704da674cadd6.
Jul 2 00:19:18.397431 containerd[1452]: time="2024-07-02T00:19:18.397378263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jxfsk,Uid:1a888589-f6db-46bb-a0ea-433463cc1ff6,Namespace:kube-system,Attempt:0,} returns sandbox id \"364a974704873ed3d2f049d85f23b1d5a07543df89351c8d696704da674cadd6\""
Jul 2 00:19:18.398028 kubelet[2592]: E0702 00:19:18.398006 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:19:18.400590 containerd[1452]: time="2024-07-02T00:19:18.400556018Z" level=info msg="CreateContainer within sandbox \"364a974704873ed3d2f049d85f23b1d5a07543df89351c8d696704da674cadd6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 2 00:19:18.598560 containerd[1452]: time="2024-07-02T00:19:18.598291439Z" level=info msg="CreateContainer within sandbox \"364a974704873ed3d2f049d85f23b1d5a07543df89351c8d696704da674cadd6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"15c61275058400016fcec1196f32b26f09a1ee89bda172bf146e1844b137e91f\""
Jul 2 00:19:18.598990 containerd[1452]: time="2024-07-02T00:19:18.598930863Z" level=info msg="StartContainer for \"15c61275058400016fcec1196f32b26f09a1ee89bda172bf146e1844b137e91f\""
Jul 2 00:19:18.628448 systemd[1]: Started cri-containerd-15c61275058400016fcec1196f32b26f09a1ee89bda172bf146e1844b137e91f.scope - libcontainer container 15c61275058400016fcec1196f32b26f09a1ee89bda172bf146e1844b137e91f.
Jul 2 00:19:18.696749 systemd[1]: cri-containerd-15c61275058400016fcec1196f32b26f09a1ee89bda172bf146e1844b137e91f.scope: Deactivated successfully.
Jul 2 00:19:18.715121 containerd[1452]: time="2024-07-02T00:19:18.715069257Z" level=info msg="StartContainer for \"15c61275058400016fcec1196f32b26f09a1ee89bda172bf146e1844b137e91f\" returns successfully"
Jul 2 00:19:18.776735 kubelet[2592]: E0702 00:19:18.776712 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:19:18.843731 containerd[1452]: time="2024-07-02T00:19:18.843646586Z" level=info msg="shim disconnected" id=15c61275058400016fcec1196f32b26f09a1ee89bda172bf146e1844b137e91f namespace=k8s.io
Jul 2 00:19:18.843731 containerd[1452]: time="2024-07-02T00:19:18.843696351Z" level=warning msg="cleaning up after shim disconnected" id=15c61275058400016fcec1196f32b26f09a1ee89bda172bf146e1844b137e91f namespace=k8s.io
Jul 2 00:19:18.843731 containerd[1452]: time="2024-07-02T00:19:18.843707522Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:19:19.779364 kubelet[2592]: E0702 00:19:19.779313 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:19:19.781493 containerd[1452]: time="2024-07-02T00:19:19.781455108Z" level=info msg="CreateContainer within sandbox \"364a974704873ed3d2f049d85f23b1d5a07543df89351c8d696704da674cadd6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 2 00:19:20.068987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount896892246.mount: Deactivated successfully.
Jul 2 00:19:20.268377 containerd[1452]: time="2024-07-02T00:19:20.268284726Z" level=info msg="CreateContainer within sandbox \"364a974704873ed3d2f049d85f23b1d5a07543df89351c8d696704da674cadd6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1d741e2f029be12b4db0ed9d10489164f205bf3077308fc6e4a43739afe81d2a\""
Jul 2 00:19:20.268945 containerd[1452]: time="2024-07-02T00:19:20.268903872Z" level=info msg="StartContainer for \"1d741e2f029be12b4db0ed9d10489164f205bf3077308fc6e4a43739afe81d2a\""
Jul 2 00:19:20.300561 systemd[1]: Started cri-containerd-1d741e2f029be12b4db0ed9d10489164f205bf3077308fc6e4a43739afe81d2a.scope - libcontainer container 1d741e2f029be12b4db0ed9d10489164f205bf3077308fc6e4a43739afe81d2a.
Jul 2 00:19:20.316080 kubelet[2592]: E0702 00:19:20.315983 2592 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 2 00:19:20.334698 systemd[1]: cri-containerd-1d741e2f029be12b4db0ed9d10489164f205bf3077308fc6e4a43739afe81d2a.scope: Deactivated successfully.
Jul 2 00:19:20.382658 containerd[1452]: time="2024-07-02T00:19:20.382568971Z" level=info msg="StartContainer for \"1d741e2f029be12b4db0ed9d10489164f205bf3077308fc6e4a43739afe81d2a\" returns successfully"
Jul 2 00:19:20.405561 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d741e2f029be12b4db0ed9d10489164f205bf3077308fc6e4a43739afe81d2a-rootfs.mount: Deactivated successfully.
Jul 2 00:19:20.558799 containerd[1452]: time="2024-07-02T00:19:20.558716512Z" level=info msg="shim disconnected" id=1d741e2f029be12b4db0ed9d10489164f205bf3077308fc6e4a43739afe81d2a namespace=k8s.io
Jul 2 00:19:20.558799 containerd[1452]: time="2024-07-02T00:19:20.558779682Z" level=warning msg="cleaning up after shim disconnected" id=1d741e2f029be12b4db0ed9d10489164f205bf3077308fc6e4a43739afe81d2a namespace=k8s.io
Jul 2 00:19:20.558799 containerd[1452]: time="2024-07-02T00:19:20.558791755Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:19:20.782691 kubelet[2592]: E0702 00:19:20.782656 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:19:20.785506 containerd[1452]: time="2024-07-02T00:19:20.785463345Z" level=info msg="CreateContainer within sandbox \"364a974704873ed3d2f049d85f23b1d5a07543df89351c8d696704da674cadd6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 2 00:19:21.265797 containerd[1452]: time="2024-07-02T00:19:21.265754749Z" level=info msg="CreateContainer within sandbox \"364a974704873ed3d2f049d85f23b1d5a07543df89351c8d696704da674cadd6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"426c16ef536a810358c8cfcc1e8bcd4015b407e7e8ca1567405cbe5c91dc4b82\""
Jul 2 00:19:21.266351 containerd[1452]: time="2024-07-02T00:19:21.266326766Z" level=info msg="StartContainer for \"426c16ef536a810358c8cfcc1e8bcd4015b407e7e8ca1567405cbe5c91dc4b82\""
Jul 2 00:19:21.302599 systemd[1]: Started cri-containerd-426c16ef536a810358c8cfcc1e8bcd4015b407e7e8ca1567405cbe5c91dc4b82.scope - libcontainer container 426c16ef536a810358c8cfcc1e8bcd4015b407e7e8ca1567405cbe5c91dc4b82.
Jul 2 00:19:21.378443 systemd[1]: cri-containerd-426c16ef536a810358c8cfcc1e8bcd4015b407e7e8ca1567405cbe5c91dc4b82.scope: Deactivated successfully.
Jul 2 00:19:21.387013 containerd[1452]: time="2024-07-02T00:19:21.386978627Z" level=info msg="StartContainer for \"426c16ef536a810358c8cfcc1e8bcd4015b407e7e8ca1567405cbe5c91dc4b82\" returns successfully"
Jul 2 00:19:21.408552 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-426c16ef536a810358c8cfcc1e8bcd4015b407e7e8ca1567405cbe5c91dc4b82-rootfs.mount: Deactivated successfully.
Jul 2 00:19:21.496883 containerd[1452]: time="2024-07-02T00:19:21.496820150Z" level=info msg="shim disconnected" id=426c16ef536a810358c8cfcc1e8bcd4015b407e7e8ca1567405cbe5c91dc4b82 namespace=k8s.io
Jul 2 00:19:21.496883 containerd[1452]: time="2024-07-02T00:19:21.496876175Z" level=warning msg="cleaning up after shim disconnected" id=426c16ef536a810358c8cfcc1e8bcd4015b407e7e8ca1567405cbe5c91dc4b82 namespace=k8s.io
Jul 2 00:19:21.496883 containerd[1452]: time="2024-07-02T00:19:21.496885153Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:19:21.510753 containerd[1452]: time="2024-07-02T00:19:21.510702644Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:19:21Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jul 2 00:19:21.786320 kubelet[2592]: E0702 00:19:21.786271 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:19:21.788199 containerd[1452]: time="2024-07-02T00:19:21.788112200Z" level=info msg="CreateContainer within sandbox \"364a974704873ed3d2f049d85f23b1d5a07543df89351c8d696704da674cadd6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 2 00:19:22.168167 containerd[1452]: time="2024-07-02T00:19:22.168021744Z" level=info msg="CreateContainer within sandbox \"364a974704873ed3d2f049d85f23b1d5a07543df89351c8d696704da674cadd6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8a0775e3133713f535ea653f8776ba2e12e28b73b8b7a14c4f1e1421949b299d\""
Jul 2 00:19:22.168713 containerd[1452]: time="2024-07-02T00:19:22.168654527Z" level=info msg="StartContainer for \"8a0775e3133713f535ea653f8776ba2e12e28b73b8b7a14c4f1e1421949b299d\""
Jul 2 00:19:22.199476 systemd[1]: Started cri-containerd-8a0775e3133713f535ea653f8776ba2e12e28b73b8b7a14c4f1e1421949b299d.scope - libcontainer container 8a0775e3133713f535ea653f8776ba2e12e28b73b8b7a14c4f1e1421949b299d.
Jul 2 00:19:22.223733 systemd[1]: cri-containerd-8a0775e3133713f535ea653f8776ba2e12e28b73b8b7a14c4f1e1421949b299d.scope: Deactivated successfully.
Jul 2 00:19:22.347607 containerd[1452]: time="2024-07-02T00:19:22.347557044Z" level=info msg="StartContainer for \"8a0775e3133713f535ea653f8776ba2e12e28b73b8b7a14c4f1e1421949b299d\" returns successfully"
Jul 2 00:19:22.365572 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a0775e3133713f535ea653f8776ba2e12e28b73b8b7a14c4f1e1421949b299d-rootfs.mount: Deactivated successfully.
Jul 2 00:19:22.557481 containerd[1452]: time="2024-07-02T00:19:22.557404366Z" level=info msg="shim disconnected" id=8a0775e3133713f535ea653f8776ba2e12e28b73b8b7a14c4f1e1421949b299d namespace=k8s.io
Jul 2 00:19:22.557481 containerd[1452]: time="2024-07-02T00:19:22.557465981Z" level=warning msg="cleaning up after shim disconnected" id=8a0775e3133713f535ea653f8776ba2e12e28b73b8b7a14c4f1e1421949b299d namespace=k8s.io
Jul 2 00:19:22.557481 containerd[1452]: time="2024-07-02T00:19:22.557478084Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:19:22.790536 kubelet[2592]: E0702 00:19:22.790489 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:19:22.793231 containerd[1452]: time="2024-07-02T00:19:22.793184793Z" level=info msg="CreateContainer within sandbox \"364a974704873ed3d2f049d85f23b1d5a07543df89351c8d696704da674cadd6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 2 00:19:23.246466 containerd[1452]: time="2024-07-02T00:19:23.246417257Z" level=info msg="CreateContainer within sandbox \"364a974704873ed3d2f049d85f23b1d5a07543df89351c8d696704da674cadd6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"65a62a7ff5f41761d609eae221a8ebd7c309e74257c553c9afa70998d38c9cbb\""
Jul 2 00:19:23.246750 containerd[1452]: time="2024-07-02T00:19:23.246719135Z" level=info msg="StartContainer for \"65a62a7ff5f41761d609eae221a8ebd7c309e74257c553c9afa70998d38c9cbb\""
Jul 2 00:19:23.273448 systemd[1]: Started cri-containerd-65a62a7ff5f41761d609eae221a8ebd7c309e74257c553c9afa70998d38c9cbb.scope - libcontainer container 65a62a7ff5f41761d609eae221a8ebd7c309e74257c553c9afa70998d38c9cbb.
Jul 2 00:19:23.371942 containerd[1452]: time="2024-07-02T00:19:23.371877759Z" level=info msg="StartContainer for \"65a62a7ff5f41761d609eae221a8ebd7c309e74257c553c9afa70998d38c9cbb\" returns successfully"
Jul 2 00:19:23.795262 kubelet[2592]: E0702 00:19:23.795219 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:19:23.857344 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jul 2 00:19:23.858034 kubelet[2592]: I0702 00:19:23.857895 2592 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-jxfsk" podStartSLOduration=6.857844653 podStartE2EDuration="6.857844653s" podCreationTimestamp="2024-07-02 00:19:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:19:23.857424522 +0000 UTC m=+118.713147074" watchObservedRunningTime="2024-07-02 00:19:23.857844653 +0000 UTC m=+118.713567206"
Jul 2 00:19:24.244640 kubelet[2592]: E0702 00:19:24.244497 2592 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-t472v" podUID="c818689f-1fab-4aa0-8f98-c18c2de25d5e"
Jul 2 00:19:24.797662 kubelet[2592]: E0702 00:19:24.797616 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:19:25.333848 containerd[1452]: time="2024-07-02T00:19:25.333788202Z" level=info msg="StopPodSandbox for \"510ab6ae6367cb4457589b3838502d66b05bd3c8c8ff19192adca7dbc75f937b\""
Jul 2 00:19:25.334336 containerd[1452]: time="2024-07-02T00:19:25.333914490Z" level=info msg="TearDown network for sandbox \"510ab6ae6367cb4457589b3838502d66b05bd3c8c8ff19192adca7dbc75f937b\" successfully"
Jul 2 00:19:25.334336 containerd[1452]: time="2024-07-02T00:19:25.333928516Z" level=info msg="StopPodSandbox for \"510ab6ae6367cb4457589b3838502d66b05bd3c8c8ff19192adca7dbc75f937b\" returns successfully"
Jul 2 00:19:25.334882 containerd[1452]: time="2024-07-02T00:19:25.334827268Z" level=info msg="RemovePodSandbox for \"510ab6ae6367cb4457589b3838502d66b05bd3c8c8ff19192adca7dbc75f937b\""
Jul 2 00:19:25.334939 containerd[1452]: time="2024-07-02T00:19:25.334882272Z" level=info msg="Forcibly stopping sandbox \"510ab6ae6367cb4457589b3838502d66b05bd3c8c8ff19192adca7dbc75f937b\""
Jul 2 00:19:25.341733 containerd[1452]: time="2024-07-02T00:19:25.334979636Z" level=info msg="TearDown network for sandbox \"510ab6ae6367cb4457589b3838502d66b05bd3c8c8ff19192adca7dbc75f937b\" successfully"
Jul 2 00:19:25.419102 containerd[1452]: time="2024-07-02T00:19:25.418888173Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"510ab6ae6367cb4457589b3838502d66b05bd3c8c8ff19192adca7dbc75f937b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 2 00:19:25.419102 containerd[1452]: time="2024-07-02T00:19:25.418974726Z" level=info msg="RemovePodSandbox \"510ab6ae6367cb4457589b3838502d66b05bd3c8c8ff19192adca7dbc75f937b\" returns successfully" Jul 2 00:19:25.419837 containerd[1452]: time="2024-07-02T00:19:25.419755446Z" level=info msg="StopPodSandbox for \"3a03b7dab2471d6d875c5cd179d763f200568bda3357126b703302fbeedd7443\"" Jul 2 00:19:25.419928 containerd[1452]: time="2024-07-02T00:19:25.419896761Z" level=info msg="TearDown network for sandbox \"3a03b7dab2471d6d875c5cd179d763f200568bda3357126b703302fbeedd7443\" successfully" Jul 2 00:19:25.419987 containerd[1452]: time="2024-07-02T00:19:25.419924504Z" level=info msg="StopPodSandbox for \"3a03b7dab2471d6d875c5cd179d763f200568bda3357126b703302fbeedd7443\" returns successfully" Jul 2 00:19:25.420383 containerd[1452]: time="2024-07-02T00:19:25.420339857Z" level=info msg="RemovePodSandbox for \"3a03b7dab2471d6d875c5cd179d763f200568bda3357126b703302fbeedd7443\"" Jul 2 00:19:25.420383 containerd[1452]: time="2024-07-02T00:19:25.420376185Z" level=info msg="Forcibly stopping sandbox \"3a03b7dab2471d6d875c5cd179d763f200568bda3357126b703302fbeedd7443\"" Jul 2 00:19:25.420568 containerd[1452]: time="2024-07-02T00:19:25.420479610Z" level=info msg="TearDown network for sandbox \"3a03b7dab2471d6d875c5cd179d763f200568bda3357126b703302fbeedd7443\" successfully" Jul 2 00:19:25.527245 containerd[1452]: time="2024-07-02T00:19:25.527181125Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3a03b7dab2471d6d875c5cd179d763f200568bda3357126b703302fbeedd7443\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:19:25.527420 containerd[1452]: time="2024-07-02T00:19:25.527256528Z" level=info msg="RemovePodSandbox \"3a03b7dab2471d6d875c5cd179d763f200568bda3357126b703302fbeedd7443\" returns successfully" Jul 2 00:19:26.244769 kubelet[2592]: E0702 00:19:26.244729 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:19:27.126593 systemd-networkd[1388]: lxc_health: Link UP Jul 2 00:19:27.134498 systemd-networkd[1388]: lxc_health: Gained carrier Jul 2 00:19:28.221791 kubelet[2592]: E0702 00:19:28.221751 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:19:28.431676 systemd-networkd[1388]: lxc_health: Gained IPv6LL Jul 2 00:19:28.809533 kubelet[2592]: E0702 00:19:28.805384 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:19:28.811839 systemd[1]: run-containerd-runc-k8s.io-65a62a7ff5f41761d609eae221a8ebd7c309e74257c553c9afa70998d38c9cbb-runc.S1o9GJ.mount: Deactivated successfully. Jul 2 00:19:29.807767 kubelet[2592]: E0702 00:19:29.807726 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:19:31.239143 systemd[1]: run-containerd-runc-k8s.io-65a62a7ff5f41761d609eae221a8ebd7c309e74257c553c9afa70998d38c9cbb-runc.MJ8pkH.mount: Deactivated successfully. Jul 2 00:19:36.391708 sshd[4484]: pam_unix(sshd:session): session closed for user core Jul 2 00:19:36.395290 systemd[1]: sshd@33-10.0.0.74:22-10.0.0.1:60608.service: Deactivated successfully. Jul 2 00:19:36.397115 systemd[1]: session-34.scope: Deactivated successfully. 
Jul 2 00:19:36.397955 systemd-logind[1433]: Session 34 logged out. Waiting for processes to exit. Jul 2 00:19:36.398967 systemd-logind[1433]: Removed session 34. Jul 2 00:19:38.245209 kubelet[2592]: E0702 00:19:38.245149 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"