Jul 2 00:28:00.009442 kernel: Linux version 6.6.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Mon Jul 1 22:47:51 -00 2024 Jul 2 00:28:00.009470 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b Jul 2 00:28:00.009485 kernel: BIOS-provided physical RAM map: Jul 2 00:28:00.009494 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jul 2 00:28:00.009502 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jul 2 00:28:00.009510 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jul 2 00:28:00.009520 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jul 2 00:28:00.009529 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jul 2 00:28:00.009538 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Jul 2 00:28:00.009546 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Jul 2 00:28:00.009558 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Jul 2 00:28:00.009567 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Jul 2 00:28:00.009575 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Jul 2 00:28:00.009584 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Jul 2 00:28:00.009595 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Jul 2 00:28:00.009608 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jul 2 00:28:00.009617 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Jul 2 00:28:00.009626 
kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Jul 2 00:28:00.009635 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jul 2 00:28:00.009644 kernel: NX (Execute Disable) protection: active Jul 2 00:28:00.009653 kernel: APIC: Static calls initialized Jul 2 00:28:00.009663 kernel: efi: EFI v2.7 by EDK II Jul 2 00:28:00.009672 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b5df418 Jul 2 00:28:00.009681 kernel: SMBIOS 2.8 present. Jul 2 00:28:00.009690 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015 Jul 2 00:28:00.009700 kernel: Hypervisor detected: KVM Jul 2 00:28:00.009709 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 2 00:28:00.009721 kernel: kvm-clock: using sched offset of 5701703911 cycles Jul 2 00:28:00.009731 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 2 00:28:00.009741 kernel: tsc: Detected 2794.746 MHz processor Jul 2 00:28:00.009751 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 2 00:28:00.009761 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 2 00:28:00.009770 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Jul 2 00:28:00.009780 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jul 2 00:28:00.009789 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 2 00:28:00.009799 kernel: Using GB pages for direct mapping Jul 2 00:28:00.009811 kernel: Secure boot disabled Jul 2 00:28:00.009821 kernel: ACPI: Early table checksum verification disabled Jul 2 00:28:00.009831 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jul 2 00:28:00.009840 kernel: ACPI: XSDT 0x000000009CB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013) Jul 2 00:28:00.009855 kernel: ACPI: FACP 0x000000009CB7A000 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 00:28:00.009865 kernel: ACPI: DSDT 
0x000000009CB7B000 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 00:28:00.009878 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jul 2 00:28:00.009888 kernel: ACPI: APIC 0x000000009CB79000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 00:28:00.009898 kernel: ACPI: HPET 0x000000009CB78000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 00:28:00.009909 kernel: ACPI: WAET 0x000000009CB77000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 00:28:00.009919 kernel: ACPI: BGRT 0x000000009CB76000 000038 (v01 INTEL EDK2 00000002 01000013) Jul 2 00:28:00.009929 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb7a000-0x9cb7a073] Jul 2 00:28:00.009939 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7b000-0x9cb7ca38] Jul 2 00:28:00.009994 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jul 2 00:28:00.010010 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb79000-0x9cb7908f] Jul 2 00:28:00.010021 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb78000-0x9cb78037] Jul 2 00:28:00.010031 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb77000-0x9cb77027] Jul 2 00:28:00.010040 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb76000-0x9cb76037] Jul 2 00:28:00.010050 kernel: No NUMA configuration found Jul 2 00:28:00.010060 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Jul 2 00:28:00.010070 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Jul 2 00:28:00.010080 kernel: Zone ranges: Jul 2 00:28:00.010090 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 2 00:28:00.010104 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Jul 2 00:28:00.010114 kernel: Normal empty Jul 2 00:28:00.010124 kernel: Movable zone start for each node Jul 2 00:28:00.010134 kernel: Early memory node ranges Jul 2 00:28:00.010144 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jul 2 00:28:00.010154 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jul 2 00:28:00.010164 
kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jul 2 00:28:00.010174 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Jul 2 00:28:00.010184 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Jul 2 00:28:00.010193 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Jul 2 00:28:00.010207 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Jul 2 00:28:00.010217 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 2 00:28:00.010227 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jul 2 00:28:00.010237 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jul 2 00:28:00.010247 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 2 00:28:00.010257 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Jul 2 00:28:00.010267 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jul 2 00:28:00.010277 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Jul 2 00:28:00.010287 kernel: ACPI: PM-Timer IO Port: 0xb008 Jul 2 00:28:00.010300 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 2 00:28:00.010310 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 2 00:28:00.010320 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 2 00:28:00.010330 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 2 00:28:00.010341 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 2 00:28:00.010351 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 2 00:28:00.010361 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 2 00:28:00.010371 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 2 00:28:00.010381 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 2 00:28:00.010394 kernel: TSC deadline timer available Jul 2 00:28:00.010404 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jul 2 00:28:00.010414 kernel: kvm-guest: 
APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jul 2 00:28:00.010424 kernel: kvm-guest: KVM setup pv remote TLB flush Jul 2 00:28:00.010434 kernel: kvm-guest: setup PV sched yield Jul 2 00:28:00.010444 kernel: [mem 0x9d000000-0xffffffff] available for PCI devices Jul 2 00:28:00.010454 kernel: Booting paravirtualized kernel on KVM Jul 2 00:28:00.010464 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 2 00:28:00.010475 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jul 2 00:28:00.010488 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u524288 Jul 2 00:28:00.010498 kernel: pcpu-alloc: s196904 r8192 d32472 u524288 alloc=1*2097152 Jul 2 00:28:00.010508 kernel: pcpu-alloc: [0] 0 1 2 3 Jul 2 00:28:00.010518 kernel: kvm-guest: PV spinlocks enabled Jul 2 00:28:00.010528 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 2 00:28:00.010540 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b Jul 2 00:28:00.010551 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 2 00:28:00.010560 kernel: random: crng init done Jul 2 00:28:00.010571 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 2 00:28:00.010585 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 2 00:28:00.010595 kernel: Fallback order for Node 0: 0 Jul 2 00:28:00.010605 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 629759 Jul 2 00:28:00.010615 kernel: Policy zone: DMA32 Jul 2 00:28:00.010625 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 2 00:28:00.010635 kernel: Memory: 2395520K/2567000K available (12288K kernel code, 2303K rwdata, 22640K rodata, 49328K init, 2016K bss, 171220K reserved, 0K cma-reserved) Jul 2 00:28:00.010645 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 2 00:28:00.010654 kernel: ftrace: allocating 37658 entries in 148 pages Jul 2 00:28:00.010664 kernel: ftrace: allocated 148 pages with 3 groups Jul 2 00:28:00.010677 kernel: Dynamic Preempt: voluntary Jul 2 00:28:00.010687 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 2 00:28:00.010698 kernel: rcu: RCU event tracing is enabled. Jul 2 00:28:00.010708 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 2 00:28:00.010730 kernel: Trampoline variant of Tasks RCU enabled. Jul 2 00:28:00.010743 kernel: Rude variant of Tasks RCU enabled. Jul 2 00:28:00.010754 kernel: Tracing variant of Tasks RCU enabled. Jul 2 00:28:00.010765 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 2 00:28:00.010776 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 2 00:28:00.010786 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jul 2 00:28:00.010797 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jul 2 00:28:00.010807 kernel: Console: colour dummy device 80x25 Jul 2 00:28:00.010821 kernel: printk: console [ttyS0] enabled Jul 2 00:28:00.010831 kernel: ACPI: Core revision 20230628 Jul 2 00:28:00.010868 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jul 2 00:28:00.010880 kernel: APIC: Switch to symmetric I/O mode setup Jul 2 00:28:00.010891 kernel: x2apic enabled Jul 2 00:28:00.010906 kernel: APIC: Switched APIC routing to: physical x2apic Jul 2 00:28:00.010916 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jul 2 00:28:00.010927 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jul 2 00:28:00.010937 kernel: kvm-guest: setup PV IPIs Jul 2 00:28:00.011088 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 2 00:28:00.011102 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jul 2 00:28:00.011113 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794746) Jul 2 00:28:00.011123 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jul 2 00:28:00.011135 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jul 2 00:28:00.011153 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jul 2 00:28:00.011166 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 2 00:28:00.011179 kernel: Spectre V2 : Mitigation: Retpolines Jul 2 00:28:00.011193 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jul 2 00:28:00.011206 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jul 2 00:28:00.011219 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jul 2 00:28:00.011232 kernel: RETBleed: Mitigation: untrained return thunk Jul 2 00:28:00.011245 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 2 00:28:00.011263 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 2 00:28:00.011276 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jul 2 00:28:00.011291 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jul 2 00:28:00.011304 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jul 2 00:28:00.011318 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 2 00:28:00.011331 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 2 00:28:00.011344 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 2 00:28:00.011357 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 2 00:28:00.011369 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Jul 2 00:28:00.011383 kernel: Freeing SMP alternatives memory: 32K Jul 2 00:28:00.011393 kernel: pid_max: default: 32768 minimum: 301 Jul 2 00:28:00.011404 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Jul 2 00:28:00.011414 kernel: SELinux: Initializing. Jul 2 00:28:00.011425 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 2 00:28:00.011436 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 2 00:28:00.011447 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jul 2 00:28:00.011457 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Jul 2 00:28:00.011468 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Jul 2 00:28:00.011482 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Jul 2 00:28:00.011493 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jul 2 00:28:00.011504 kernel: ... version: 0 Jul 2 00:28:00.011514 kernel: ... bit width: 48 Jul 2 00:28:00.011525 kernel: ... generic registers: 6 Jul 2 00:28:00.011535 kernel: ... value mask: 0000ffffffffffff Jul 2 00:28:00.011546 kernel: ... max period: 00007fffffffffff Jul 2 00:28:00.011557 kernel: ... fixed-purpose events: 0 Jul 2 00:28:00.011567 kernel: ... event mask: 000000000000003f Jul 2 00:28:00.011581 kernel: signal: max sigframe size: 1776 Jul 2 00:28:00.011592 kernel: rcu: Hierarchical SRCU implementation. Jul 2 00:28:00.011603 kernel: rcu: Max phase no-delay instances is 400. Jul 2 00:28:00.011613 kernel: smp: Bringing up secondary CPUs ... Jul 2 00:28:00.011624 kernel: smpboot: x86: Booting SMP configuration: Jul 2 00:28:00.011634 kernel: .... 
node #0, CPUs: #1 #2 #3 Jul 2 00:28:00.011645 kernel: smp: Brought up 1 node, 4 CPUs Jul 2 00:28:00.011655 kernel: smpboot: Max logical packages: 1 Jul 2 00:28:00.011666 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS) Jul 2 00:28:00.011680 kernel: devtmpfs: initialized Jul 2 00:28:00.011690 kernel: x86/mm: Memory block size: 128MB Jul 2 00:28:00.011701 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jul 2 00:28:00.011712 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jul 2 00:28:00.011722 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Jul 2 00:28:00.011733 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jul 2 00:28:00.011744 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jul 2 00:28:00.011755 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 2 00:28:00.011765 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 2 00:28:00.011779 kernel: pinctrl core: initialized pinctrl subsystem Jul 2 00:28:00.011790 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 2 00:28:00.011800 kernel: audit: initializing netlink subsys (disabled) Jul 2 00:28:00.011811 kernel: audit: type=2000 audit(1719880079.180:1): state=initialized audit_enabled=0 res=1 Jul 2 00:28:00.011822 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 2 00:28:00.011832 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 2 00:28:00.011843 kernel: cpuidle: using governor menu Jul 2 00:28:00.011853 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 2 00:28:00.011864 kernel: dca service started, version 1.12.1 Jul 2 00:28:00.011878 kernel: PCI: Using configuration type 1 for base access Jul 2 00:28:00.011888 kernel: PCI: Using configuration type 1 for 
extended access Jul 2 00:28:00.011899 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jul 2 00:28:00.011909 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 2 00:28:00.011920 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 2 00:28:00.011931 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 2 00:28:00.011941 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 2 00:28:00.011974 kernel: ACPI: Added _OSI(Module Device) Jul 2 00:28:00.011985 kernel: ACPI: Added _OSI(Processor Device) Jul 2 00:28:00.011999 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jul 2 00:28:00.012010 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 2 00:28:00.012020 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 2 00:28:00.012031 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jul 2 00:28:00.012041 kernel: ACPI: Interpreter enabled Jul 2 00:28:00.012052 kernel: ACPI: PM: (supports S0 S3 S5) Jul 2 00:28:00.012063 kernel: ACPI: Using IOAPIC for interrupt routing Jul 2 00:28:00.012076 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 2 00:28:00.012089 kernel: PCI: Using E820 reservations for host bridge windows Jul 2 00:28:00.012106 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jul 2 00:28:00.012120 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 2 00:28:00.012358 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 2 00:28:00.012377 kernel: acpiphp: Slot [3] registered Jul 2 00:28:00.012388 kernel: acpiphp: Slot [4] registered Jul 2 00:28:00.012398 kernel: acpiphp: Slot [5] registered Jul 2 00:28:00.012409 kernel: acpiphp: Slot [6] registered Jul 2 00:28:00.012420 kernel: acpiphp: Slot [7] registered Jul 2 00:28:00.012435 kernel: acpiphp: Slot [8] registered Jul 2 00:28:00.012445 kernel: acpiphp: Slot [9] 
registered Jul 2 00:28:00.012455 kernel: acpiphp: Slot [10] registered Jul 2 00:28:00.012466 kernel: acpiphp: Slot [11] registered Jul 2 00:28:00.012476 kernel: acpiphp: Slot [12] registered Jul 2 00:28:00.012487 kernel: acpiphp: Slot [13] registered Jul 2 00:28:00.012497 kernel: acpiphp: Slot [14] registered Jul 2 00:28:00.012507 kernel: acpiphp: Slot [15] registered Jul 2 00:28:00.012518 kernel: acpiphp: Slot [16] registered Jul 2 00:28:00.012531 kernel: acpiphp: Slot [17] registered Jul 2 00:28:00.012542 kernel: acpiphp: Slot [18] registered Jul 2 00:28:00.012552 kernel: acpiphp: Slot [19] registered Jul 2 00:28:00.012563 kernel: acpiphp: Slot [20] registered Jul 2 00:28:00.012573 kernel: acpiphp: Slot [21] registered Jul 2 00:28:00.012584 kernel: acpiphp: Slot [22] registered Jul 2 00:28:00.012594 kernel: acpiphp: Slot [23] registered Jul 2 00:28:00.012604 kernel: acpiphp: Slot [24] registered Jul 2 00:28:00.012615 kernel: acpiphp: Slot [25] registered Jul 2 00:28:00.012625 kernel: acpiphp: Slot [26] registered Jul 2 00:28:00.012639 kernel: acpiphp: Slot [27] registered Jul 2 00:28:00.012649 kernel: acpiphp: Slot [28] registered Jul 2 00:28:00.012659 kernel: acpiphp: Slot [29] registered Jul 2 00:28:00.012670 kernel: acpiphp: Slot [30] registered Jul 2 00:28:00.012680 kernel: acpiphp: Slot [31] registered Jul 2 00:28:00.012691 kernel: PCI host bridge to bus 0000:00 Jul 2 00:28:00.012858 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 2 00:28:00.013038 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 2 00:28:00.013187 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 2 00:28:00.013329 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window] Jul 2 00:28:00.013472 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0x87fffffff window] Jul 2 00:28:00.013615 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 2 00:28:00.013793 kernel: pci 
0000:00:00.0: [8086:1237] type 00 class 0x060000 Jul 2 00:28:00.013988 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jul 2 00:28:00.014163 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jul 2 00:28:00.014331 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf] Jul 2 00:28:00.014494 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jul 2 00:28:00.014650 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jul 2 00:28:00.014806 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jul 2 00:28:00.014989 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jul 2 00:28:00.015164 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jul 2 00:28:00.015372 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jul 2 00:28:00.015530 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Jul 2 00:28:00.015697 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000 Jul 2 00:28:00.015855 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Jul 2 00:28:00.016053 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc1043000-0xc1043fff] Jul 2 00:28:00.016243 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Jul 2 00:28:00.016403 kernel: pci 0000:00:02.0: BAR 0: assigned to efifb Jul 2 00:28:00.016571 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 2 00:28:00.016747 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00 Jul 2 00:28:00.016907 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0a0-0xc0bf] Jul 2 00:28:00.017127 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Jul 2 00:28:00.017289 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Jul 2 00:28:00.017455 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Jul 2 00:28:00.017616 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Jul 2 00:28:00.017782 
kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Jul 2 00:28:00.017943 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Jul 2 00:28:00.018155 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000 Jul 2 00:28:00.018317 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Jul 2 00:28:00.018477 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc1040000-0xc1040fff] Jul 2 00:28:00.018638 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Jul 2 00:28:00.018800 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Jul 2 00:28:00.018820 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 2 00:28:00.018832 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 2 00:28:00.018843 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 2 00:28:00.018854 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 2 00:28:00.018864 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jul 2 00:28:00.018875 kernel: iommu: Default domain type: Translated Jul 2 00:28:00.018886 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 2 00:28:00.018896 kernel: efivars: Registered efivars operations Jul 2 00:28:00.018907 kernel: PCI: Using ACPI for IRQ routing Jul 2 00:28:00.018921 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 2 00:28:00.018931 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jul 2 00:28:00.018942 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Jul 2 00:28:00.019012 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Jul 2 00:28:00.019025 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Jul 2 00:28:00.019184 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jul 2 00:28:00.019339 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jul 2 00:28:00.019489 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 2 00:28:00.019509 
kernel: vgaarb: loaded Jul 2 00:28:00.019520 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jul 2 00:28:00.019531 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jul 2 00:28:00.019542 kernel: clocksource: Switched to clocksource kvm-clock Jul 2 00:28:00.019552 kernel: VFS: Disk quotas dquot_6.6.0 Jul 2 00:28:00.019564 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 2 00:28:00.019574 kernel: pnp: PnP ACPI init Jul 2 00:28:00.019735 kernel: pnp 00:02: [dma 2] Jul 2 00:28:00.019752 kernel: pnp: PnP ACPI: found 6 devices Jul 2 00:28:00.019768 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 2 00:28:00.019779 kernel: NET: Registered PF_INET protocol family Jul 2 00:28:00.019790 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 2 00:28:00.019801 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 2 00:28:00.019812 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 2 00:28:00.019823 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 2 00:28:00.019834 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 2 00:28:00.019845 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 2 00:28:00.019860 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 2 00:28:00.019871 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 2 00:28:00.019882 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 2 00:28:00.019892 kernel: NET: Registered PF_XDP protocol family Jul 2 00:28:00.020074 kernel: pci 0000:00:05.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Jul 2 00:28:00.020229 kernel: pci 0000:00:05.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Jul 2 00:28:00.020371 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] 
Jul 2 00:28:00.020511 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 2 00:28:00.020656 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 2 00:28:00.020796 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window] Jul 2 00:28:00.020940 kernel: pci_bus 0000:00: resource 8 [mem 0x800000000-0x87fffffff window] Jul 2 00:28:00.021127 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jul 2 00:28:00.021284 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 2 00:28:00.021300 kernel: PCI: CLS 0 bytes, default 64 Jul 2 00:28:00.021312 kernel: Initialise system trusted keyrings Jul 2 00:28:00.021324 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 2 00:28:00.021339 kernel: Key type asymmetric registered Jul 2 00:28:00.021351 kernel: Asymmetric key parser 'x509' registered Jul 2 00:28:00.021362 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jul 2 00:28:00.021373 kernel: io scheduler mq-deadline registered Jul 2 00:28:00.021384 kernel: io scheduler kyber registered Jul 2 00:28:00.021396 kernel: io scheduler bfq registered Jul 2 00:28:00.021407 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 2 00:28:00.021419 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jul 2 00:28:00.021430 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Jul 2 00:28:00.021441 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jul 2 00:28:00.021454 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 2 00:28:00.021465 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 2 00:28:00.021476 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 2 00:28:00.021507 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 2 00:28:00.021521 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 2 00:28:00.021532 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 2 00:28:00.021693 
kernel: rtc_cmos 00:05: RTC can wake from S4 Jul 2 00:28:00.021838 kernel: rtc_cmos 00:05: registered as rtc0 Jul 2 00:28:00.022035 kernel: rtc_cmos 00:05: setting system clock to 2024-07-02T00:27:59 UTC (1719880079) Jul 2 00:28:00.022192 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jul 2 00:28:00.022208 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jul 2 00:28:00.022220 kernel: efifb: probing for efifb Jul 2 00:28:00.022231 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Jul 2 00:28:00.022243 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Jul 2 00:28:00.022254 kernel: efifb: scrolling: redraw Jul 2 00:28:00.022265 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Jul 2 00:28:00.022281 kernel: Console: switching to colour frame buffer device 100x37 Jul 2 00:28:00.022292 kernel: fb0: EFI VGA frame buffer device Jul 2 00:28:00.022303 kernel: pstore: Using crash dump compression: deflate Jul 2 00:28:00.022315 kernel: pstore: Registered efi_pstore as persistent store backend Jul 2 00:28:00.022326 kernel: NET: Registered PF_INET6 protocol family Jul 2 00:28:00.022336 kernel: Segment Routing with IPv6 Jul 2 00:28:00.022344 kernel: In-situ OAM (IOAM) with IPv6 Jul 2 00:28:00.022352 kernel: NET: Registered PF_PACKET protocol family Jul 2 00:28:00.022360 kernel: Key type dns_resolver registered Jul 2 00:28:00.022368 kernel: IPI shorthand broadcast: enabled Jul 2 00:28:00.022379 kernel: sched_clock: Marking stable (792001926, 392325628)->(1245648455, -61320901) Jul 2 00:28:00.022389 kernel: registered taskstats version 1 Jul 2 00:28:00.022397 kernel: Loading compiled-in X.509 certificates Jul 2 00:28:00.022407 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.36-flatcar: be1ede902d88b56c26cc000ff22391c78349d771' Jul 2 00:28:00.022415 kernel: Key type .fscrypt registered Jul 2 00:28:00.022425 kernel: Key type fscrypt-provisioning registered Jul 2 
00:28:00.022433 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 2 00:28:00.022441 kernel: ima: Allocated hash algorithm: sha1 Jul 2 00:28:00.022449 kernel: ima: No architecture policies found Jul 2 00:28:00.022457 kernel: clk: Disabling unused clocks Jul 2 00:28:00.022465 kernel: Freeing unused kernel image (initmem) memory: 49328K Jul 2 00:28:00.022473 kernel: Write protecting the kernel read-only data: 36864k Jul 2 00:28:00.022481 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K Jul 2 00:28:00.022488 kernel: Run /init as init process Jul 2 00:28:00.022499 kernel: with arguments: Jul 2 00:28:00.022506 kernel: /init Jul 2 00:28:00.022514 kernel: with environment: Jul 2 00:28:00.022521 kernel: HOME=/ Jul 2 00:28:00.022529 kernel: TERM=linux Jul 2 00:28:00.022537 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 2 00:28:00.022547 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 2 00:28:00.022560 systemd[1]: Detected virtualization kvm. Jul 2 00:28:00.022569 systemd[1]: Detected architecture x86-64. Jul 2 00:28:00.022577 systemd[1]: Running in initrd. Jul 2 00:28:00.022586 systemd[1]: No hostname configured, using default hostname. Jul 2 00:28:00.022594 systemd[1]: Hostname set to . Jul 2 00:28:00.022602 systemd[1]: Initializing machine ID from VM UUID. Jul 2 00:28:00.022610 systemd[1]: Queued start job for default target initrd.target. Jul 2 00:28:00.022619 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 00:28:00.022630 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jul 2 00:28:00.022639 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 2 00:28:00.022647 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 00:28:00.022656 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 2 00:28:00.022665 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 2 00:28:00.022675 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 2 00:28:00.022684 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 2 00:28:00.022694 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:28:00.022702 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:28:00.022711 systemd[1]: Reached target paths.target - Path Units.
Jul 2 00:28:00.022719 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 00:28:00.022727 systemd[1]: Reached target swap.target - Swaps.
Jul 2 00:28:00.022736 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 00:28:00.022744 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 00:28:00.022752 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 00:28:00.022761 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 2 00:28:00.022771 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 2 00:28:00.022780 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:28:00.022788 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:28:00.022797 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:28:00.022805 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 00:28:00.022813 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 2 00:28:00.022822 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 00:28:00.022830 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 2 00:28:00.022841 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 00:28:00.022849 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 00:28:00.022857 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 00:28:00.022866 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:28:00.022894 systemd-journald[193]: Collecting audit messages is disabled.
Jul 2 00:28:00.022917 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 2 00:28:00.022926 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:28:00.022935 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 00:28:00.022944 systemd-journald[193]: Journal started
Jul 2 00:28:00.023037 systemd-journald[193]: Runtime Journal (/run/log/journal/3e3ebc4186ee4b52b5015d92ecb30d66) is 6.0M, max 48.3M, 42.3M free.
Jul 2 00:28:00.027996 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 2 00:28:00.030662 systemd-modules-load[194]: Inserted module 'overlay'
Jul 2 00:28:00.034338 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 00:28:00.033679 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:28:00.053174 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:28:00.055706 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 00:28:00.055977 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 00:28:00.057822 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 00:28:00.074306 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:28:00.094514 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 2 00:28:00.094540 kernel: Bridge firewalling registered
Jul 2 00:28:00.088489 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:28:00.088912 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:28:00.091876 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 2 00:28:00.094483 systemd-modules-load[194]: Inserted module 'br_netfilter'
Jul 2 00:28:00.096186 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:28:00.098206 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 00:28:00.117267 dracut-cmdline[221]: dracut-dracut-053
Jul 2 00:28:00.117267 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b
Jul 2 00:28:00.136401 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:28:00.144122 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 00:28:00.173593 systemd-resolved[262]: Positive Trust Anchors:
Jul 2 00:28:00.173610 systemd-resolved[262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 00:28:00.173648 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 00:28:00.176257 systemd-resolved[262]: Defaulting to hostname 'linux'.
Jul 2 00:28:00.177291 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 00:28:00.184306 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:28:00.217996 kernel: SCSI subsystem initialized
Jul 2 00:28:00.229991 kernel: Loading iSCSI transport class v2.0-870.
Jul 2 00:28:00.242992 kernel: iscsi: registered transport (tcp)
Jul 2 00:28:00.273993 kernel: iscsi: registered transport (qla4xxx)
Jul 2 00:28:00.274029 kernel: QLogic iSCSI HBA Driver
Jul 2 00:28:00.324655 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 2 00:28:00.333139 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 2 00:28:00.384677 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 2 00:28:00.384742 kernel: device-mapper: uevent: version 1.0.3
Jul 2 00:28:00.384754 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 2 00:28:00.429984 kernel: raid6: avx2x4 gen() 28215 MB/s
Jul 2 00:28:00.458978 kernel: raid6: avx2x2 gen() 30618 MB/s
Jul 2 00:28:00.476083 kernel: raid6: avx2x1 gen() 25120 MB/s
Jul 2 00:28:00.476109 kernel: raid6: using algorithm avx2x2 gen() 30618 MB/s
Jul 2 00:28:00.506418 kernel: raid6: .... xor() 19218 MB/s, rmw enabled
Jul 2 00:28:00.506456 kernel: raid6: using avx2x2 recovery algorithm
Jul 2 00:28:00.558984 kernel: xor: automatically using best checksumming function avx
Jul 2 00:28:00.739988 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 2 00:28:00.752634 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 00:28:00.765112 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:28:00.776507 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Jul 2 00:28:00.781239 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:28:00.787517 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 2 00:28:00.804537 dracut-pre-trigger[423]: rd.md=0: removing MD RAID activation
Jul 2 00:28:00.836491 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 00:28:00.848072 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 00:28:00.916416 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:28:00.961989 kernel: cryptd: max_cpu_qlen set to 1000
Jul 2 00:28:00.964003 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jul 2 00:28:00.979167 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 2 00:28:00.979330 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 2 00:28:00.979343 kernel: GPT:9289727 != 19775487
Jul 2 00:28:00.979353 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 2 00:28:00.979363 kernel: GPT:9289727 != 19775487
Jul 2 00:28:00.979373 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 2 00:28:00.979383 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:28:00.965136 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 2 00:28:00.986968 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 2 00:28:00.988660 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 00:28:01.005826 kernel: AES CTR mode by8 optimization enabled
Jul 2 00:28:00.988824 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:28:01.007361 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:28:01.012182 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:28:01.013746 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:28:01.017714 kernel: libata version 3.00 loaded.
Jul 2 00:28:01.017751 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:28:01.025968 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (464)
Jul 2 00:28:01.025992 kernel: ata_piix 0000:00:01.1: version 2.13
Jul 2 00:28:01.034262 kernel: BTRFS: device fsid 2fd636b8-f582-46f8-bde2-15e56e3958c1 devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (460)
Jul 2 00:28:01.034277 kernel: scsi host0: ata_piix
Jul 2 00:28:01.034440 kernel: scsi host1: ata_piix
Jul 2 00:28:01.034589 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14
Jul 2 00:28:01.034601 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15
Jul 2 00:28:01.027354 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:28:01.030811 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 2 00:28:01.046067 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:28:01.053270 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 2 00:28:01.071033 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 2 00:28:01.078259 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 2 00:28:01.084403 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 2 00:28:01.087038 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 2 00:28:01.090405 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 00:28:01.092858 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:28:01.095203 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 00:28:01.106063 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 2 00:28:01.109147 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:28:01.112626 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 2 00:28:01.125686 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 00:28:01.127500 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:28:01.191073 kernel: ata2: found unknown device (class 0)
Jul 2 00:28:01.192023 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jul 2 00:28:01.195000 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 2 00:28:01.262516 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jul 2 00:28:01.278764 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 2 00:28:01.278782 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Jul 2 00:28:01.391836 disk-uuid[536]: Primary Header is updated.
Jul 2 00:28:01.391836 disk-uuid[536]: Secondary Entries is updated.
Jul 2 00:28:01.391836 disk-uuid[536]: Secondary Header is updated.
Jul 2 00:28:01.397024 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:28:01.420965 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:28:02.447993 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:28:02.448099 disk-uuid[567]: The operation has completed successfully.
Jul 2 00:28:02.481628 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 2 00:28:02.481779 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 2 00:28:02.507347 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 2 00:28:02.513369 sh[580]: Success
Jul 2 00:28:02.531020 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jul 2 00:28:02.569800 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 2 00:28:02.582625 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 2 00:28:02.585736 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 2 00:28:02.616540 kernel: BTRFS info (device dm-0): first mount of filesystem 2fd636b8-f582-46f8-bde2-15e56e3958c1
Jul 2 00:28:02.616604 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:28:02.616619 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 2 00:28:02.617567 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 2 00:28:02.618334 kernel: BTRFS info (device dm-0): using free space tree
Jul 2 00:28:02.623521 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 2 00:28:02.624309 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 2 00:28:02.636180 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 2 00:28:02.637349 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 2 00:28:02.647915 kernel: BTRFS info (device vda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:28:02.647978 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:28:02.647990 kernel: BTRFS info (device vda6): using free space tree
Jul 2 00:28:02.652986 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 00:28:02.664642 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 2 00:28:02.666820 kernel: BTRFS info (device vda6): last unmount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:28:02.681305 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 2 00:28:02.689121 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 2 00:28:02.742105 ignition[684]: Ignition 2.18.0
Jul 2 00:28:02.742121 ignition[684]: Stage: fetch-offline
Jul 2 00:28:02.742178 ignition[684]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:28:02.742192 ignition[684]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:28:02.742410 ignition[684]: parsed url from cmdline: ""
Jul 2 00:28:02.742414 ignition[684]: no config URL provided
Jul 2 00:28:02.742419 ignition[684]: reading system config file "/usr/lib/ignition/user.ign"
Jul 2 00:28:02.742428 ignition[684]: no config at "/usr/lib/ignition/user.ign"
Jul 2 00:28:02.742457 ignition[684]: op(1): [started] loading QEMU firmware config module
Jul 2 00:28:02.742464 ignition[684]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 2 00:28:02.750010 ignition[684]: op(1): [finished] loading QEMU firmware config module
Jul 2 00:28:02.768523 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 00:28:02.777121 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 00:28:02.796241 ignition[684]: parsing config with SHA512: fffc6401f4010be3ecfdc23841a51083bb17671bda3d3ed5aeb113ed9bf140aaf7a7fdda943709993ae426dc1030b455013d5be170d83605237173d285e56234
Jul 2 00:28:02.799914 unknown[684]: fetched base config from "system"
Jul 2 00:28:02.799927 unknown[684]: fetched user config from "qemu"
Jul 2 00:28:02.800288 ignition[684]: fetch-offline: fetch-offline passed
Jul 2 00:28:02.800341 ignition[684]: Ignition finished successfully
Jul 2 00:28:02.803504 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 00:28:02.804876 systemd-networkd[771]: lo: Link UP
Jul 2 00:28:02.804881 systemd-networkd[771]: lo: Gained carrier
Jul 2 00:28:02.806487 systemd-networkd[771]: Enumeration completed
Jul 2 00:28:02.806604 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 00:28:02.806870 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:28:02.806874 systemd-networkd[771]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 00:28:02.808198 systemd[1]: Reached target network.target - Network.
Jul 2 00:28:02.808706 systemd-networkd[771]: eth0: Link UP
Jul 2 00:28:02.808713 systemd-networkd[771]: eth0: Gained carrier
Jul 2 00:28:02.808730 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:28:02.810008 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 2 00:28:02.821127 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 2 00:28:02.829048 systemd-networkd[771]: eth0: DHCPv4 address 10.0.0.153/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 2 00:28:02.835609 ignition[774]: Ignition 2.18.0
Jul 2 00:28:02.835626 ignition[774]: Stage: kargs
Jul 2 00:28:02.835835 ignition[774]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:28:02.835850 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:28:02.837026 ignition[774]: kargs: kargs passed
Jul 2 00:28:02.837087 ignition[774]: Ignition finished successfully
Jul 2 00:28:02.841258 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 2 00:28:02.853134 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 2 00:28:02.867580 ignition[784]: Ignition 2.18.0
Jul 2 00:28:02.867592 ignition[784]: Stage: disks
Jul 2 00:28:02.867781 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:28:02.867793 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:28:02.871323 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 2 00:28:02.868753 ignition[784]: disks: disks passed
Jul 2 00:28:02.873583 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 2 00:28:02.868804 ignition[784]: Ignition finished successfully
Jul 2 00:28:02.875750 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 2 00:28:02.877935 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 00:28:02.880395 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 00:28:02.880459 systemd[1]: Reached target basic.target - Basic System.
Jul 2 00:28:02.892304 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 2 00:28:02.908180 systemd-fsck[795]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 2 00:28:02.918416 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 2 00:28:02.934182 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 2 00:28:03.043004 kernel: EXT4-fs (vda9): mounted filesystem c5a17c06-b440-4aab-a0fa-5b60bb1d8586 r/w with ordered data mode. Quota mode: none.
Jul 2 00:28:03.043154 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 2 00:28:03.044087 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 2 00:28:03.054150 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 00:28:03.056255 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 2 00:28:03.057759 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 2 00:28:03.063199 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (803)
Jul 2 00:28:03.063221 kernel: BTRFS info (device vda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:28:03.057800 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 2 00:28:03.069828 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:28:03.069850 kernel: BTRFS info (device vda6): using free space tree
Jul 2 00:28:03.069861 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 00:28:03.057824 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 00:28:03.064919 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 2 00:28:03.070946 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 00:28:03.084074 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 2 00:28:03.116103 initrd-setup-root[827]: cut: /sysroot/etc/passwd: No such file or directory
Jul 2 00:28:03.119717 initrd-setup-root[834]: cut: /sysroot/etc/group: No such file or directory
Jul 2 00:28:03.123294 initrd-setup-root[841]: cut: /sysroot/etc/shadow: No such file or directory
Jul 2 00:28:03.127674 initrd-setup-root[848]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 2 00:28:03.204221 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 2 00:28:03.213112 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 2 00:28:03.214935 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 2 00:28:03.222071 kernel: BTRFS info (device vda6): last unmount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:28:03.243262 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 2 00:28:03.245313 ignition[917]: INFO : Ignition 2.18.0
Jul 2 00:28:03.245313 ignition[917]: INFO : Stage: mount
Jul 2 00:28:03.245313 ignition[917]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:28:03.245313 ignition[917]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:28:03.245313 ignition[917]: INFO : mount: mount passed
Jul 2 00:28:03.249793 ignition[917]: INFO : Ignition finished successfully
Jul 2 00:28:03.250769 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 2 00:28:03.258064 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 2 00:28:03.616013 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 2 00:28:03.630153 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 00:28:03.637815 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (931)
Jul 2 00:28:03.637844 kernel: BTRFS info (device vda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:28:03.637855 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:28:03.639384 kernel: BTRFS info (device vda6): using free space tree
Jul 2 00:28:03.641988 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 00:28:03.643616 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 00:28:03.667003 ignition[948]: INFO : Ignition 2.18.0
Jul 2 00:28:03.667003 ignition[948]: INFO : Stage: files
Jul 2 00:28:03.668869 ignition[948]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:28:03.668869 ignition[948]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:28:03.671379 ignition[948]: DEBUG : files: compiled without relabeling support, skipping
Jul 2 00:28:03.672656 ignition[948]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 2 00:28:03.672656 ignition[948]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 2 00:28:03.676257 ignition[948]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 2 00:28:03.677775 ignition[948]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 2 00:28:03.679702 unknown[948]: wrote ssh authorized keys file for user: core
Jul 2 00:28:03.680893 ignition[948]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 2 00:28:03.682906 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 00:28:03.684759 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 2 00:28:03.709022 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 2 00:28:03.798202 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 00:28:03.798202 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 00:28:03.802029 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 2 00:28:03.998130 systemd-networkd[771]: eth0: Gained IPv6LL
Jul 2 00:28:04.181491 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 2 00:28:04.260747 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 00:28:04.260747 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 2 00:28:04.264764 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 2 00:28:04.264764 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 00:28:04.264764 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 00:28:04.264764 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 00:28:04.264764 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 00:28:04.264764 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 00:28:04.264764 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 00:28:04.264764 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 00:28:04.264764 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 00:28:04.264764 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jul 2 00:28:04.264764 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jul 2 00:28:04.264764 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jul 2 00:28:04.264764 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1
Jul 2 00:28:04.633496 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 2 00:28:04.987066 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jul 2 00:28:04.987066 ignition[948]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 2 00:28:04.991361 ignition[948]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 00:28:04.991361 ignition[948]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 00:28:04.991361 ignition[948]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 2 00:28:04.991361 ignition[948]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 2 00:28:04.991361 ignition[948]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 2 00:28:04.991361 ignition[948]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 2 00:28:04.991361 ignition[948]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 2 00:28:04.991361 ignition[948]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jul 2 00:28:05.008807 ignition[948]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 2 00:28:05.013499 ignition[948]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 2 00:28:05.015140 ignition[948]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 2 00:28:05.015140 ignition[948]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jul 2 00:28:05.015140 ignition[948]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jul 2 00:28:05.015140 ignition[948]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 00:28:05.015140 ignition[948]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 00:28:05.015140 ignition[948]: INFO : files: files passed
Jul 2 00:28:05.015140 ignition[948]: INFO : Ignition finished successfully
Jul 2 00:28:05.016493 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 2 00:28:05.026109 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 2 00:28:05.027995 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 2 00:28:05.029769 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 2 00:28:05.029889 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 2 00:28:05.038895 initrd-setup-root-after-ignition[976]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 2 00:28:05.041733 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:28:05.041733 initrd-setup-root-after-ignition[978]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:28:05.045026 initrd-setup-root-after-ignition[982]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:28:05.044341 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 00:28:05.046480 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 2 00:28:05.056094 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 2 00:28:05.079933 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 2 00:28:05.080075 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 2 00:28:05.081274 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 2 00:28:05.083522 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 2 00:28:05.085481 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 2 00:28:05.088807 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 2 00:28:05.107779 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 00:28:05.120136 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 2 00:28:05.129228 systemd[1]: Stopped target network.target - Network.
Jul 2 00:28:05.130266 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:28:05.132283 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:28:05.134706 systemd[1]: Stopped target timers.target - Timer Units.
Jul 2 00:28:05.136758 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 2 00:28:05.136884 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 00:28:05.139277 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 2 00:28:05.141005 systemd[1]: Stopped target basic.target - Basic System.
Jul 2 00:28:05.143325 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 2 00:28:05.145591 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 00:28:05.147585 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 2 00:28:05.149868 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 2 00:28:05.152051 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 00:28:05.154386 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 2 00:28:05.156492 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 2 00:28:05.158733 systemd[1]: Stopped target swap.target - Swaps.
Jul 2 00:28:05.160590 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 2 00:28:05.160716 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 00:28:05.163141 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:28:05.164675 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:28:05.166860 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 2 00:28:05.166997 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:28:05.169283 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 2 00:28:05.169392 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 2 00:28:05.171916 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 2 00:28:05.172044 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 00:28:05.174017 systemd[1]: Stopped target paths.target - Path Units.
Jul 2 00:28:05.175828 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 2 00:28:05.181036 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:28:05.182771 systemd[1]: Stopped target slices.target - Slice Units.
Jul 2 00:28:05.184593 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 2 00:28:05.186708 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 2 00:28:05.186816 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 00:28:05.189144 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 2 00:28:05.189234 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 00:28:05.191040 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 2 00:28:05.191164 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 00:28:05.193133 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 2 00:28:05.193243 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 2 00:28:05.208093 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 2 00:28:05.209020 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 2 00:28:05.209138 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:28:05.212477 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 2 00:28:05.214581 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 2 00:28:05.216645 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 2 00:28:05.220606 ignition[1002]: INFO : Ignition 2.18.0
Jul 2 00:28:05.220606 ignition[1002]: INFO : Stage: umount
Jul 2 00:28:05.218075 systemd-networkd[771]: eth0: DHCPv6 lease lost
Jul 2 00:28:05.226137 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:28:05.226137 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:28:05.226137 ignition[1002]: INFO : umount: umount passed
Jul 2 00:28:05.226137 ignition[1002]: INFO : Ignition finished successfully
Jul 2 00:28:05.219307 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 2 00:28:05.219454 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:28:05.221848 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 2 00:28:05.221980 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 00:28:05.227289 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 2 00:28:05.227412 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 2 00:28:05.231850 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 2 00:28:05.232030 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 2 00:28:05.234095 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 2 00:28:05.234200 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 2 00:28:05.237224 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 2 00:28:05.237347 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 2 00:28:05.241026 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 2 00:28:05.241070 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:28:05.242074 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 2 00:28:05.242123 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 2 00:28:05.244431 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 2 00:28:05.244481 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 2 00:28:05.244758 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 2 00:28:05.244799 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 2 00:28:05.245273 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 2 00:28:05.245316 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 2 00:28:05.259050 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 2 00:28:05.260337 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 2 00:28:05.260403 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 00:28:05.264096 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 00:28:05.264149 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:28:05.266516 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 2 00:28:05.266567 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:28:05.268802 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 2 00:28:05.268871 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:28:05.271153 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:28:05.286726 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 2 00:28:05.287767 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 2 00:28:05.291235 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 2 00:28:05.294202 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 2 00:28:05.295521 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:28:05.298806 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 2 00:28:05.298886 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:28:05.302060 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 2 00:28:05.302110 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:28:05.305189 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 2 00:28:05.305248 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 00:28:05.308418 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 2 00:28:05.308470 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 2 00:28:05.311652 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 00:28:05.311707 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:28:05.336311 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 2 00:28:05.338723 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 2 00:28:05.338831 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:28:05.341193 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:28:05.341264 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:28:05.365123 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 2 00:28:05.365242 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 2 00:28:05.527898 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 2 00:28:05.528054 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 2 00:28:05.530119 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 2 00:28:05.531806 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 2 00:28:05.531871 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 2 00:28:05.548104 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 2 00:28:05.555319 systemd[1]: Switching root.
Jul 2 00:28:05.584091 systemd-journald[193]: Journal stopped
Jul 2 00:28:07.511774 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Jul 2 00:28:07.511844 kernel: SELinux: policy capability network_peer_controls=1
Jul 2 00:28:07.511867 kernel: SELinux: policy capability open_perms=1
Jul 2 00:28:07.511878 kernel: SELinux: policy capability extended_socket_class=1
Jul 2 00:28:07.511890 kernel: SELinux: policy capability always_check_network=0
Jul 2 00:28:07.511901 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 2 00:28:07.511913 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 2 00:28:07.511925 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 2 00:28:07.511936 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 2 00:28:07.511959 kernel: audit: type=1403 audit(1719880086.595:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 2 00:28:07.511976 systemd[1]: Successfully loaded SELinux policy in 39.700ms.
Jul 2 00:28:07.511997 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.311ms.
Jul 2 00:28:07.512012 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 00:28:07.512025 systemd[1]: Detected virtualization kvm.
Jul 2 00:28:07.512037 systemd[1]: Detected architecture x86-64.
Jul 2 00:28:07.512049 systemd[1]: Detected first boot.
Jul 2 00:28:07.512061 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 00:28:07.512073 zram_generator::config[1045]: No configuration found.
Jul 2 00:28:07.512090 systemd[1]: Populated /etc with preset unit settings.
Jul 2 00:28:07.512105 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 2 00:28:07.512121 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 2 00:28:07.512133 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 2 00:28:07.512146 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 2 00:28:07.512158 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 2 00:28:07.512170 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 2 00:28:07.512182 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 2 00:28:07.512195 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 2 00:28:07.512207 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 2 00:28:07.512221 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 2 00:28:07.512233 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 2 00:28:07.512245 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:28:07.512259 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:28:07.512272 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 2 00:28:07.512288 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 2 00:28:07.512307 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 2 00:28:07.512319 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 00:28:07.512331 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 2 00:28:07.512345 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:28:07.512357 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 2 00:28:07.512369 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 2 00:28:07.512382 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 2 00:28:07.512394 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 2 00:28:07.512406 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:28:07.512419 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 00:28:07.512433 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 00:28:07.512445 systemd[1]: Reached target swap.target - Swaps.
Jul 2 00:28:07.512457 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 2 00:28:07.512469 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 2 00:28:07.512481 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:28:07.512493 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:28:07.512506 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:28:07.512518 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 2 00:28:07.512530 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 2 00:28:07.512542 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 2 00:28:07.512557 systemd[1]: Mounting media.mount - External Media Directory...
Jul 2 00:28:07.512570 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:28:07.512582 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 2 00:28:07.512594 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 2 00:28:07.512606 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 2 00:28:07.512619 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 2 00:28:07.512631 systemd[1]: Reached target machines.target - Containers.
Jul 2 00:28:07.512643 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 2 00:28:07.512658 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:28:07.512671 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 00:28:07.512683 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 2 00:28:07.512695 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:28:07.512707 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 00:28:07.512720 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:28:07.512731 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 2 00:28:07.512743 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:28:07.512758 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 2 00:28:07.512770 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 2 00:28:07.512782 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 2 00:28:07.512801 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 2 00:28:07.512814 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 2 00:28:07.512827 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 00:28:07.512839 kernel: loop: module loaded
Jul 2 00:28:07.512851 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 00:28:07.512862 kernel: fuse: init (API version 7.39)
Jul 2 00:28:07.512894 systemd-journald[1107]: Collecting audit messages is disabled.
Jul 2 00:28:07.512916 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 2 00:28:07.512929 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 2 00:28:07.512942 systemd-journald[1107]: Journal started
Jul 2 00:28:07.512986 systemd-journald[1107]: Runtime Journal (/run/log/journal/3e3ebc4186ee4b52b5015d92ecb30d66) is 6.0M, max 48.3M, 42.3M free.
Jul 2 00:28:07.200577 systemd[1]: Queued start job for default target multi-user.target.
Jul 2 00:28:07.216102 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 2 00:28:07.216536 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 2 00:28:07.517979 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 00:28:07.522743 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 2 00:28:07.522784 systemd[1]: Stopped verity-setup.service.
Jul 2 00:28:07.522810 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:28:07.536847 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 00:28:07.537692 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 2 00:28:07.538896 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 2 00:28:07.549495 systemd[1]: Mounted media.mount - External Media Directory.
Jul 2 00:28:07.551322 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 2 00:28:07.553049 kernel: ACPI: bus type drm_connector registered
Jul 2 00:28:07.554027 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 2 00:28:07.555395 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 2 00:28:07.556710 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:28:07.558504 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 2 00:28:07.558752 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 2 00:28:07.560495 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:28:07.560662 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:28:07.562336 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 00:28:07.562623 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 00:28:07.564127 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:28:07.564296 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:28:07.565981 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 2 00:28:07.566210 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 2 00:28:07.567744 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:28:07.567912 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:28:07.569433 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:28:07.570972 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 2 00:28:07.572726 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 2 00:28:07.587626 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 2 00:28:07.598085 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 2 00:28:07.604308 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 2 00:28:07.605596 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 2 00:28:07.605638 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 00:28:07.608186 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 2 00:28:07.612223 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 2 00:28:07.616577 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 2 00:28:07.619511 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:28:07.621446 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 2 00:28:07.624770 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 2 00:28:07.626745 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 00:28:07.630272 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 2 00:28:07.632020 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 00:28:07.637834 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 00:28:07.641152 systemd-journald[1107]: Time spent on flushing to /var/log/journal/3e3ebc4186ee4b52b5015d92ecb30d66 is 50.795ms for 987 entries.
Jul 2 00:28:07.641152 systemd-journald[1107]: System Journal (/var/log/journal/3e3ebc4186ee4b52b5015d92ecb30d66) is 8.0M, max 195.6M, 187.6M free.
Jul 2 00:28:07.748539 systemd-journald[1107]: Received client request to flush runtime journal.
Jul 2 00:28:07.748753 kernel: loop0: detected capacity change from 0 to 139904
Jul 2 00:28:07.748801 kernel: block loop0: the capability attribute has been deprecated.
Jul 2 00:28:07.749402 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 2 00:28:07.642460 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 2 00:28:07.645778 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 2 00:28:07.647363 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:28:07.648797 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 2 00:28:07.650117 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 2 00:28:07.651713 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 2 00:28:07.657471 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 2 00:28:07.659537 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 2 00:28:07.677367 udevadm[1165]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jul 2 00:28:07.682470 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 2 00:28:07.684368 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 2 00:28:07.696289 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 2 00:28:07.698335 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:28:07.737486 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 2 00:28:07.752919 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 00:28:07.754837 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 2 00:28:07.762198 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 2 00:28:07.764111 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 2 00:28:07.775972 kernel: loop1: detected capacity change from 0 to 80568
Jul 2 00:28:07.778175 systemd-tmpfiles[1173]: ACLs are not supported, ignoring.
Jul 2 00:28:07.778199 systemd-tmpfiles[1173]: ACLs are not supported, ignoring.
Jul 2 00:28:07.788089 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:28:07.820723 kernel: loop2: detected capacity change from 0 to 209816
Jul 2 00:28:07.865199 kernel: loop3: detected capacity change from 0 to 139904
Jul 2 00:28:07.879006 kernel: loop4: detected capacity change from 0 to 80568
Jul 2 00:28:07.888196 kernel: loop5: detected capacity change from 0 to 209816
Jul 2 00:28:07.896911 (sd-merge)[1184]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 2 00:28:07.897497 (sd-merge)[1184]: Merged extensions into '/usr'.
Jul 2 00:28:07.903167 systemd[1]: Reloading requested from client PID 1157 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 2 00:28:07.903184 systemd[1]: Reloading...
Jul 2 00:28:07.981024 zram_generator::config[1209]: No configuration found.
Jul 2 00:28:08.006585 ldconfig[1152]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 2 00:28:08.096439 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:28:08.144989 systemd[1]: Reloading finished in 241 ms.
Jul 2 00:28:08.175181 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 2 00:28:08.176777 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 2 00:28:08.193228 systemd[1]: Starting ensure-sysext.service...
Jul 2 00:28:08.195472 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 00:28:08.204114 systemd[1]: Reloading requested from client PID 1246 ('systemctl') (unit ensure-sysext.service)...
Jul 2 00:28:08.204134 systemd[1]: Reloading...
Jul 2 00:28:08.233079 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 2 00:28:08.233463 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 2 00:28:08.234438 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 2 00:28:08.234727 systemd-tmpfiles[1248]: ACLs are not supported, ignoring.
Jul 2 00:28:08.234809 systemd-tmpfiles[1248]: ACLs are not supported, ignoring.
Jul 2 00:28:08.244836 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot.
Jul 2 00:28:08.246150 systemd-tmpfiles[1248]: Skipping /boot
Jul 2 00:28:08.249978 zram_generator::config[1273]: No configuration found.
Jul 2 00:28:08.257530 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot.
Jul 2 00:28:08.257619 systemd-tmpfiles[1248]: Skipping /boot
Jul 2 00:28:08.364515 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:28:08.413010 systemd[1]: Reloading finished in 208 ms.
Jul 2 00:28:08.436182 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 2 00:28:08.451388 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:28:08.460168 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 2 00:28:08.462826 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 2 00:28:08.465177 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 2 00:28:08.470339 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 00:28:08.476202 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:28:08.482262 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 2 00:28:08.485845 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:28:08.486143 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:28:08.488187 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:28:08.493452 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:28:08.498535 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:28:08.499847 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:28:08.503206 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 2 00:28:08.504685 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:28:08.505691 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:28:08.506265 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:28:08.507689 systemd-udevd[1317]: Using default interface naming scheme 'v255'.
Jul 2 00:28:08.508904 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:28:08.509127 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:28:08.511383 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:28:08.511550 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:28:08.513831 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 2 00:28:08.521776 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 00:28:08.522037 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 00:28:08.526378 augenrules[1341]: No rules
Jul 2 00:28:08.531446 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 2 00:28:08.533585 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:28:08.536471 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 2 00:28:08.543916 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:28:08.544324 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:28:08.551201 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:28:08.555397 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:28:08.560055 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:28:08.562533 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:28:08.573159 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 00:28:08.574273 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:28:08.575171 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 2 00:28:08.577584 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 2 00:28:08.579544 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 2 00:28:08.581659 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 2 00:28:08.583555 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:28:08.583868 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:28:08.586172 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:28:08.586448 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:28:08.588518 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:28:08.588823 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:28:08.597974 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1362)
Jul 2 00:28:08.605832 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 2 00:28:08.606743 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:28:08.607163 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:28:08.618187 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:28:08.623263 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 00:28:08.627006 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1361)
Jul 2 00:28:08.629134 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:28:08.633936 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:28:08.635940 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:28:08.636019 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 2 00:28:08.636036 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:28:08.636690 systemd[1]: Finished ensure-sysext.service.
Jul 2 00:28:08.639081 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:28:08.639315 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:28:08.641783 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 00:28:08.641999 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 00:28:08.643657 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:28:08.643834 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:28:08.645824 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:28:08.646002 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:28:08.665340 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 00:28:08.665393 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 00:28:08.665494 systemd-resolved[1316]: Positive Trust Anchors:
Jul 2 00:28:08.665505 systemd-resolved[1316]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 00:28:08.665535 systemd-resolved[1316]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 00:28:08.671667 systemd-resolved[1316]: Defaulting to hostname 'linux'.
Jul 2 00:28:08.673596 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 2 00:28:08.674985 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 00:28:08.678132 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:28:08.689832 systemd-networkd[1370]: lo: Link UP
Jul 2 00:28:08.689846 systemd-networkd[1370]: lo: Gained carrier
Jul 2 00:28:08.692987 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jul 2 00:28:08.691596 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 2 00:28:08.692168 systemd-networkd[1370]: Enumeration completed
Jul 2 00:28:08.692799 systemd-networkd[1370]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:28:08.692803 systemd-networkd[1370]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 00:28:08.693256 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 00:28:08.694398 systemd-networkd[1370]: eth0: Link UP
Jul 2 00:28:08.694411 systemd-networkd[1370]: eth0: Gained carrier
Jul 2 00:28:08.694424 systemd-networkd[1370]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:28:08.696375 systemd[1]: Reached target network.target - Network.
Jul 2 00:28:08.705098 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 2 00:28:08.705722 kernel: ACPI: button: Power Button [PWRF]
Jul 2 00:28:08.705744 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 0
Jul 2 00:28:08.708251 systemd-networkd[1370]: eth0: DHCPv4 address 10.0.0.153/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 2 00:28:08.716218 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 2 00:28:08.725575 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 2 00:28:08.744976 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jul 2 00:28:08.757205 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:28:08.768153 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 2 00:28:09.543630 systemd-timesyncd[1398]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 2 00:28:09.543697 systemd-timesyncd[1398]: Initial clock synchronization to Tue 2024-07-02 00:28:09.543507 UTC.
Jul 2 00:28:09.543771 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:28:09.544009 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:28:09.545536 systemd-resolved[1316]: Clock change detected. Flushing caches.
Jul 2 00:28:09.547631 systemd[1]: Reached target time-set.target - System Time Set.
Jul 2 00:28:09.549301 kernel: mousedev: PS/2 mouse device common for all mice
Jul 2 00:28:09.563634 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:28:09.622921 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:28:09.657924 kernel: kvm_amd: TSC scaling supported
Jul 2 00:28:09.658005 kernel: kvm_amd: Nested Virtualization enabled
Jul 2 00:28:09.658022 kernel: kvm_amd: Nested Paging enabled
Jul 2 00:28:09.658919 kernel: kvm_amd: LBR virtualization supported
Jul 2 00:28:09.658938 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jul 2 00:28:09.659931 kernel: kvm_amd: Virtual GIF supported
Jul 2 00:28:09.680515 kernel: EDAC MC: Ver: 3.0.0
Jul 2 00:28:09.708008 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 2 00:28:09.724791 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 2 00:28:09.734146 lvm[1421]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 00:28:09.765418 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 2 00:28:09.767022 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:28:09.768163 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 00:28:09.769355 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 2 00:28:09.770639 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 2 00:28:09.772143 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 2 00:28:09.773377 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 2 00:28:09.774729 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 2 00:28:09.776075 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 2 00:28:09.776110 systemd[1]: Reached target paths.target - Path Units.
Jul 2 00:28:09.777154 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 00:28:09.778946 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 2 00:28:09.781755 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 2 00:28:09.798724 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 2 00:28:09.801078 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 2 00:28:09.802685 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 2 00:28:09.803970 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 00:28:09.804984 systemd[1]: Reached target basic.target - Basic System.
Jul 2 00:28:09.805991 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 2 00:28:09.806018 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 2 00:28:09.807003 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 2 00:28:09.809182 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 2 00:28:09.811537 lvm[1425]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 00:28:09.813559 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 2 00:28:09.817390 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 2 00:28:09.819928 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 2 00:28:09.820971 jq[1428]: false
Jul 2 00:28:09.822763 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 2 00:28:09.826231 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 2 00:28:09.829216 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 2 00:28:09.832770 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 2 00:28:09.836815 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 2 00:28:09.838303 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 2 00:28:09.838838 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 2 00:28:09.839614 systemd[1]: Starting update-engine.service - Update Engine...
Jul 2 00:28:09.843583 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 2 00:28:09.846211 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 2 00:28:09.848579 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 2 00:28:09.848807 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 2 00:28:09.849623 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 2 00:28:09.849829 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 2 00:28:09.853486 extend-filesystems[1429]: Found loop3
Jul 2 00:28:09.853486 extend-filesystems[1429]: Found loop4
Jul 2 00:28:09.859135 extend-filesystems[1429]: Found loop5
Jul 2 00:28:09.859135 extend-filesystems[1429]: Found sr0
Jul 2 00:28:09.859135 extend-filesystems[1429]: Found vda
Jul 2 00:28:09.859135 extend-filesystems[1429]: Found vda1
Jul 2 00:28:09.859135 extend-filesystems[1429]: Found vda2
Jul 2 00:28:09.859135 extend-filesystems[1429]: Found vda3
Jul 2 00:28:09.859135 extend-filesystems[1429]: Found usr
Jul 2 00:28:09.859135 extend-filesystems[1429]: Found vda4
Jul 2 00:28:09.859135 extend-filesystems[1429]: Found vda6
Jul 2 00:28:09.859135 extend-filesystems[1429]: Found vda7
Jul 2 00:28:09.859135 extend-filesystems[1429]: Found vda9
Jul 2 00:28:09.859135 extend-filesystems[1429]: Checking size of /dev/vda9
Jul 2 00:28:09.893578 jq[1439]: true
Jul 2 00:28:09.864903 systemd[1]: motdgen.service: Deactivated successfully.
Jul 2 00:28:09.874327 dbus-daemon[1427]: [system] SELinux support is enabled
Jul 2 00:28:09.894166 extend-filesystems[1429]: Resized partition /dev/vda9
Jul 2 00:28:09.865126 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 2 00:28:09.874550 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 2 00:28:09.896005 jq[1450]: true
Jul 2 00:28:09.878693 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 2 00:28:09.896262 tar[1445]: linux-amd64/helm
Jul 2 00:28:09.878720 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 2 00:28:09.879627 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 2 00:28:09.879641 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 2 00:28:09.881382 (ntainerd)[1454]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 2 00:28:09.899182 update_engine[1438]: I0702 00:28:09.899125 1438 main.cc:92] Flatcar Update Engine starting
Jul 2 00:28:09.903461 systemd[1]: Started update-engine.service - Update Engine.
Jul 2 00:28:09.906561 extend-filesystems[1473]: resize2fs 1.47.0 (5-Feb-2023)
Jul 2 00:28:09.907708 update_engine[1438]: I0702 00:28:09.903538 1438 update_check_scheduler.cc:74] Next update check in 10m4s
Jul 2 00:28:09.916505 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1363)
Jul 2 00:28:09.916548 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 2 00:28:09.919910 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 2 00:28:09.963540 systemd-logind[1436]: Watching system buttons on /dev/input/event1 (Power Button)
Jul 2 00:28:09.963572 systemd-logind[1436]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 2 00:28:09.964778 systemd-logind[1436]: New seat seat0.
Jul 2 00:28:09.971500 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 2 00:28:09.975777 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 2 00:28:09.977872 locksmithd[1479]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 2 00:28:10.004852 extend-filesystems[1473]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 2 00:28:10.004852 extend-filesystems[1473]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 2 00:28:10.004852 extend-filesystems[1473]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 2 00:28:10.009450 extend-filesystems[1429]: Resized filesystem in /dev/vda9 Jul 2 00:28:10.007895 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 00:28:10.008250 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 2 00:28:10.016288 bash[1480]: Updated "/home/core/.ssh/authorized_keys" Jul 2 00:28:10.017486 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 2 00:28:10.020315 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 2 00:28:10.115624 containerd[1454]: time="2024-07-02T00:28:10.115014580Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Jul 2 00:28:10.119580 sshd_keygen[1458]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 00:28:10.142621 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 2 00:28:10.143783 containerd[1454]: time="2024-07-02T00:28:10.143729932Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 2 00:28:10.143783 containerd[1454]: time="2024-07-02T00:28:10.143787040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:28:10.145347 containerd[1454]: time="2024-07-02T00:28:10.145308594Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.36-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:28:10.145347 containerd[1454]: time="2024-07-02T00:28:10.145341055Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Jul 2 00:28:10.145683 containerd[1454]: time="2024-07-02T00:28:10.145654202Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:28:10.145683 containerd[1454]: time="2024-07-02T00:28:10.145677216Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 00:28:10.145803 containerd[1454]: time="2024-07-02T00:28:10.145785399Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 2 00:28:10.145878 containerd[1454]: time="2024-07-02T00:28:10.145860219Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:28:10.145908 containerd[1454]: time="2024-07-02T00:28:10.145878383Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 00:28:10.145994 containerd[1454]: time="2024-07-02T00:28:10.145977870Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:28:10.146277 containerd[1454]: time="2024-07-02T00:28:10.146239230Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 00:28:10.146302 containerd[1454]: time="2024-07-02T00:28:10.146273494Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 00:28:10.146302 containerd[1454]: time="2024-07-02T00:28:10.146286869Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jul 2 00:28:10.146461 containerd[1454]: time="2024-07-02T00:28:10.146433254Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:28:10.146461 containerd[1454]: time="2024-07-02T00:28:10.146454253Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 00:28:10.146559 containerd[1454]: time="2024-07-02T00:28:10.146533933Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 00:28:10.146559 containerd[1454]: time="2024-07-02T00:28:10.146552157Z" level=info msg="metadata content store policy set" policy=shared Jul 2 00:28:10.157739 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 2 00:28:10.166008 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 00:28:10.166230 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 2 00:28:10.179447 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 2 00:28:10.195416 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 2 00:28:10.204885 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 2 00:28:10.207924 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 2 00:28:10.209257 systemd[1]: Reached target getty.target - Login Prompts. Jul 2 00:28:10.338431 tar[1445]: linux-amd64/LICENSE Jul 2 00:28:10.338515 tar[1445]: linux-amd64/README.md Jul 2 00:28:10.357437 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 2 00:28:10.422857 containerd[1454]: time="2024-07-02T00:28:10.422769160Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Jul 2 00:28:10.422857 containerd[1454]: time="2024-07-02T00:28:10.422835835Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 00:28:10.422857 containerd[1454]: time="2024-07-02T00:28:10.422853328Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 00:28:10.423046 containerd[1454]: time="2024-07-02T00:28:10.422896699Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 2 00:28:10.423046 containerd[1454]: time="2024-07-02T00:28:10.422916055Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 2 00:28:10.423046 containerd[1454]: time="2024-07-02T00:28:10.422931575Z" level=info msg="NRI interface is disabled by configuration." Jul 2 00:28:10.423046 containerd[1454]: time="2024-07-02T00:28:10.422961721Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 00:28:10.423194 containerd[1454]: time="2024-07-02T00:28:10.423158741Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 2 00:28:10.423194 containerd[1454]: time="2024-07-02T00:28:10.423189038Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 2 00:28:10.423240 containerd[1454]: time="2024-07-02T00:28:10.423205609Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 2 00:28:10.423240 containerd[1454]: time="2024-07-02T00:28:10.423219855Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 2 00:28:10.423240 containerd[1454]: time="2024-07-02T00:28:10.423232720Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jul 2 00:28:10.423306 containerd[1454]: time="2024-07-02T00:28:10.423249892Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 00:28:10.423306 containerd[1454]: time="2024-07-02T00:28:10.423276482Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 00:28:10.423306 containerd[1454]: time="2024-07-02T00:28:10.423293313Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 00:28:10.423373 containerd[1454]: time="2024-07-02T00:28:10.423309844Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 00:28:10.423373 containerd[1454]: time="2024-07-02T00:28:10.423328529Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 00:28:10.423373 containerd[1454]: time="2024-07-02T00:28:10.423343908Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 00:28:10.423373 containerd[1454]: time="2024-07-02T00:28:10.423360369Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 00:28:10.423562 containerd[1454]: time="2024-07-02T00:28:10.423533213Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 00:28:10.425630 containerd[1454]: time="2024-07-02T00:28:10.425602385Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 00:28:10.425699 containerd[1454]: time="2024-07-02T00:28:10.425639244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Jul 2 00:28:10.425699 containerd[1454]: time="2024-07-02T00:28:10.425658280Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 2 00:28:10.425699 containerd[1454]: time="2024-07-02T00:28:10.425685651Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 00:28:10.425779 containerd[1454]: time="2024-07-02T00:28:10.425756875Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 00:28:10.425808 containerd[1454]: time="2024-07-02T00:28:10.425775991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 00:28:10.425808 containerd[1454]: time="2024-07-02T00:28:10.425793313Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 00:28:10.425863 containerd[1454]: time="2024-07-02T00:28:10.425809564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 00:28:10.425863 containerd[1454]: time="2024-07-02T00:28:10.425826766Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 00:28:10.425863 containerd[1454]: time="2024-07-02T00:28:10.425842626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 00:28:10.425863 containerd[1454]: time="2024-07-02T00:28:10.425857955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 00:28:10.425965 containerd[1454]: time="2024-07-02T00:28:10.425873043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 00:28:10.425965 containerd[1454]: time="2024-07-02T00:28:10.425891417Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Jul 2 00:28:10.426111 containerd[1454]: time="2024-07-02T00:28:10.426083919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 2 00:28:10.426137 containerd[1454]: time="2024-07-02T00:28:10.426109717Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 2 00:28:10.426137 containerd[1454]: time="2024-07-02T00:28:10.426126168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 00:28:10.426194 containerd[1454]: time="2024-07-02T00:28:10.426168097Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 2 00:28:10.426194 containerd[1454]: time="2024-07-02T00:28:10.426186160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 00:28:10.426240 containerd[1454]: time="2024-07-02T00:28:10.426203593Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 2 00:28:10.426240 containerd[1454]: time="2024-07-02T00:28:10.426223561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 00:28:10.426296 containerd[1454]: time="2024-07-02T00:28:10.426239030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 2 00:28:10.426668 containerd[1454]: time="2024-07-02T00:28:10.426580450Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 2 00:28:10.426668 containerd[1454]: time="2024-07-02T00:28:10.426656062Z" level=info msg="Connect containerd service"
Jul 2 00:28:10.426858 containerd[1454]: time="2024-07-02T00:28:10.426689194Z" level=info msg="using legacy CRI server"
Jul 2 00:28:10.426858 containerd[1454]: time="2024-07-02T00:28:10.426699814Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 2 00:28:10.426858 containerd[1454]: time="2024-07-02T00:28:10.426788470Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 2 00:28:10.427382 containerd[1454]: time="2024-07-02T00:28:10.427350525Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 00:28:10.427408 containerd[1454]: time="2024-07-02T00:28:10.427397493Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 2 00:28:10.427439 containerd[1454]: time="2024-07-02T00:28:10.427419434Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..."
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 2 00:28:10.427461 containerd[1454]: time="2024-07-02T00:28:10.427438600Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 2 00:28:10.427461 containerd[1454]: time="2024-07-02T00:28:10.427455722Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 2 00:28:10.427595 containerd[1454]: time="2024-07-02T00:28:10.427531404Z" level=info msg="Start subscribing containerd event"
Jul 2 00:28:10.427627 containerd[1454]: time="2024-07-02T00:28:10.427616995Z" level=info msg="Start recovering state"
Jul 2 00:28:10.427788 containerd[1454]: time="2024-07-02T00:28:10.427760895Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 2 00:28:10.427821 containerd[1454]: time="2024-07-02T00:28:10.427768068Z" level=info msg="Start event monitor"
Jul 2 00:28:10.427821 containerd[1454]: time="2024-07-02T00:28:10.427809155Z" level=info msg="Start snapshots syncer"
Jul 2 00:28:10.427857 containerd[1454]: time="2024-07-02T00:28:10.427823152Z" level=info msg="Start cni network conf syncer for default"
Jul 2 00:28:10.427857 containerd[1454]: time="2024-07-02T00:28:10.427831908Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 2 00:28:10.427857 containerd[1454]: time="2024-07-02T00:28:10.427836477Z" level=info msg="Start streaming server"
Jul 2 00:28:10.427937 containerd[1454]: time="2024-07-02T00:28:10.427916978Z" level=info msg="containerd successfully booted in 0.314877s"
Jul 2 00:28:10.428110 systemd[1]: Started containerd.service - containerd container runtime.
Jul 2 00:28:10.851649 systemd-networkd[1370]: eth0: Gained IPv6LL
Jul 2 00:28:10.854889 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 2 00:28:10.856955 systemd[1]: Reached target network-online.target - Network is Online.
Jul 2 00:28:10.866699 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jul 2 00:28:10.869236 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:28:10.871444 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 2 00:28:10.892527 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jul 2 00:28:10.892818 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jul 2 00:28:10.894985 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 2 00:28:10.898045 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 2 00:28:11.480117 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:28:11.481906 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 2 00:28:11.483294 systemd[1]: Startup finished in 963ms (kernel) + 6.836s (initrd) + 4.151s (userspace) = 11.950s.
Jul 2 00:28:11.486143 (kubelet)[1540]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 00:28:11.972316 kubelet[1540]: E0702 00:28:11.972129 1540 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:28:11.977490 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:28:11.977688 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:28:14.346720 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 2 00:28:14.347925 systemd[1]: Started sshd@0-10.0.0.153:22-10.0.0.1:47722.service - OpenSSH per-connection server daemon (10.0.0.1:47722).
Jul 2 00:28:14.396044 sshd[1554]: Accepted publickey for core from 10.0.0.1 port 47722 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok
Jul 2 00:28:14.397913 sshd[1554]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:28:14.406098 systemd-logind[1436]: New session 1 of user core.
Jul 2 00:28:14.407580 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 2 00:28:14.418710 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 2 00:28:14.430584 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 2 00:28:14.433174 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 2 00:28:14.441420 (systemd)[1558]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:28:14.559703 systemd[1558]: Queued start job for default target default.target.
Jul 2 00:28:14.570829 systemd[1558]: Created slice app.slice - User Application Slice.
Jul 2 00:28:14.570854 systemd[1558]: Reached target paths.target - Paths.
Jul 2 00:28:14.570869 systemd[1558]: Reached target timers.target - Timers.
Jul 2 00:28:14.572631 systemd[1558]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 2 00:28:14.585168 systemd[1558]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 2 00:28:14.585294 systemd[1558]: Reached target sockets.target - Sockets.
Jul 2 00:28:14.585312 systemd[1558]: Reached target basic.target - Basic System.
Jul 2 00:28:14.585347 systemd[1558]: Reached target default.target - Main User Target.
Jul 2 00:28:14.585380 systemd[1558]: Startup finished in 137ms.
Jul 2 00:28:14.585974 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 2 00:28:14.587623 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 2 00:28:14.647954 systemd[1]: Started sshd@1-10.0.0.153:22-10.0.0.1:47734.service - OpenSSH per-connection server daemon (10.0.0.1:47734).
Jul 2 00:28:14.692177 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 47734 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok
Jul 2 00:28:14.693904 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:28:14.697763 systemd-logind[1436]: New session 2 of user core.
Jul 2 00:28:14.707584 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 2 00:28:14.761521 sshd[1569]: pam_unix(sshd:session): session closed for user core
Jul 2 00:28:14.779531 systemd[1]: sshd@1-10.0.0.153:22-10.0.0.1:47734.service: Deactivated successfully.
Jul 2 00:28:14.781589 systemd[1]: session-2.scope: Deactivated successfully.
Jul 2 00:28:14.783020 systemd-logind[1436]: Session 2 logged out. Waiting for processes to exit.
Jul 2 00:28:14.784274 systemd[1]: Started sshd@2-10.0.0.153:22-10.0.0.1:47736.service - OpenSSH per-connection server daemon (10.0.0.1:47736).
Jul 2 00:28:14.785052 systemd-logind[1436]: Removed session 2.
Jul 2 00:28:14.820338 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 47736 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok
Jul 2 00:28:14.821778 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:28:14.825746 systemd-logind[1436]: New session 3 of user core.
Jul 2 00:28:14.843661 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 2 00:28:14.893739 sshd[1576]: pam_unix(sshd:session): session closed for user core
Jul 2 00:28:14.911417 systemd[1]: sshd@2-10.0.0.153:22-10.0.0.1:47736.service: Deactivated successfully.
Jul 2 00:28:14.913410 systemd[1]: session-3.scope: Deactivated successfully.
Jul 2 00:28:14.915251 systemd-logind[1436]: Session 3 logged out. Waiting for processes to exit.
Jul 2 00:28:14.925819 systemd[1]: Started sshd@3-10.0.0.153:22-10.0.0.1:47750.service - OpenSSH per-connection server daemon (10.0.0.1:47750).
Jul 2 00:28:14.926857 systemd-logind[1436]: Removed session 3.
Jul 2 00:28:14.958316 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 47750 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok
Jul 2 00:28:14.959641 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:28:14.963368 systemd-logind[1436]: New session 4 of user core.
Jul 2 00:28:14.973604 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 2 00:28:15.026719 sshd[1583]: pam_unix(sshd:session): session closed for user core
Jul 2 00:28:15.034989 systemd[1]: sshd@3-10.0.0.153:22-10.0.0.1:47750.service: Deactivated successfully.
Jul 2 00:28:15.036539 systemd[1]: session-4.scope: Deactivated successfully.
Jul 2 00:28:15.037899 systemd-logind[1436]: Session 4 logged out. Waiting for processes to exit.
Jul 2 00:28:15.039021 systemd[1]: Started sshd@4-10.0.0.153:22-10.0.0.1:47764.service - OpenSSH per-connection server daemon (10.0.0.1:47764).
Jul 2 00:28:15.039795 systemd-logind[1436]: Removed session 4.
Jul 2 00:28:15.075301 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 47764 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok
Jul 2 00:28:15.076657 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:28:15.080839 systemd-logind[1436]: New session 5 of user core.
Jul 2 00:28:15.094606 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 2 00:28:15.151400 sudo[1594]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 2 00:28:15.151701 sudo[1594]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 00:28:15.174212 sudo[1594]: pam_unix(sudo:session): session closed for user root
Jul 2 00:28:15.175849 sshd[1591]: pam_unix(sshd:session): session closed for user core
Jul 2 00:28:15.189215 systemd[1]: sshd@4-10.0.0.153:22-10.0.0.1:47764.service: Deactivated successfully.
Jul 2 00:28:15.190761 systemd[1]: session-5.scope: Deactivated successfully.
Jul 2 00:28:15.192184 systemd-logind[1436]: Session 5 logged out. Waiting for processes to exit.
Jul 2 00:28:15.193608 systemd[1]: Started sshd@5-10.0.0.153:22-10.0.0.1:47770.service - OpenSSH per-connection server daemon (10.0.0.1:47770).
Jul 2 00:28:15.194358 systemd-logind[1436]: Removed session 5.
Jul 2 00:28:15.236784 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 47770 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok
Jul 2 00:28:15.238127 sshd[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:28:15.241694 systemd-logind[1436]: New session 6 of user core.
Jul 2 00:28:15.257588 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 2 00:28:15.310364 sudo[1603]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 2 00:28:15.310691 sudo[1603]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 00:28:15.313903 sudo[1603]: pam_unix(sudo:session): session closed for user root
Jul 2 00:28:15.319322 sudo[1602]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jul 2 00:28:15.319613 sudo[1602]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 00:28:15.338676 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jul 2 00:28:15.340276 auditctl[1606]: No rules
Jul 2 00:28:15.341456 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 2 00:28:15.341700 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jul 2 00:28:15.343500 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 2 00:28:15.373494 augenrules[1624]: No rules
Jul 2 00:28:15.375767 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 2 00:28:15.377172 sudo[1602]: pam_unix(sudo:session): session closed for user root
Jul 2 00:28:15.378934 sshd[1599]: pam_unix(sshd:session): session closed for user core
Jul 2 00:28:15.389755 systemd[1]: sshd@5-10.0.0.153:22-10.0.0.1:47770.service: Deactivated successfully.
Jul 2 00:28:15.392041 systemd[1]: session-6.scope: Deactivated successfully.
Jul 2 00:28:15.393942 systemd-logind[1436]: Session 6 logged out. Waiting for processes to exit.
Jul 2 00:28:15.400822 systemd[1]: Started sshd@6-10.0.0.153:22-10.0.0.1:47778.service - OpenSSH per-connection server daemon (10.0.0.1:47778).
Jul 2 00:28:15.401818 systemd-logind[1436]: Removed session 6.
Jul 2 00:28:15.433971 sshd[1632]: Accepted publickey for core from 10.0.0.1 port 47778 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok
Jul 2 00:28:15.435333 sshd[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:28:15.439173 systemd-logind[1436]: New session 7 of user core.
Jul 2 00:28:15.454590 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 2 00:28:15.507506 sudo[1635]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 2 00:28:15.507802 sudo[1635]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 00:28:15.622687 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 2 00:28:15.623026 (dockerd)[1645]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 2 00:28:15.874438 dockerd[1645]: time="2024-07-02T00:28:15.874297135Z" level=info msg="Starting up"
Jul 2 00:28:16.945663 dockerd[1645]: time="2024-07-02T00:28:16.945610446Z" level=info msg="Loading containers: start."
Jul 2 00:28:17.357498 kernel: Initializing XFRM netlink socket
Jul 2 00:28:17.444107 systemd-networkd[1370]: docker0: Link UP
Jul 2 00:28:17.646152 dockerd[1645]: time="2024-07-02T00:28:17.646108600Z" level=info msg="Loading containers: done."
Jul 2 00:28:17.791283 dockerd[1645]: time="2024-07-02T00:28:17.791142444Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 2 00:28:17.791491 dockerd[1645]: time="2024-07-02T00:28:17.791410717Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Jul 2 00:28:17.791609 dockerd[1645]: time="2024-07-02T00:28:17.791575406Z" level=info msg="Daemon has completed initialization"
Jul 2 00:28:17.891124 dockerd[1645]: time="2024-07-02T00:28:17.891049387Z" level=info msg="API listen on /run/docker.sock"
Jul 2 00:28:17.891315 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 2 00:28:18.605817 containerd[1454]: time="2024-07-02T00:28:18.605768270Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\""
Jul 2 00:28:19.911286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1971318099.mount: Deactivated successfully.
Jul 2 00:28:21.465027 containerd[1454]: time="2024-07-02T00:28:21.464966240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:28:21.465891 containerd[1454]: time="2024-07-02T00:28:21.465858193Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.11: active requests=0, bytes read=34605178"
Jul 2 00:28:21.467407 containerd[1454]: time="2024-07-02T00:28:21.467348699Z" level=info msg="ImageCreate event name:\"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:28:21.471067 containerd[1454]: time="2024-07-02T00:28:21.470992906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:28:21.472287 containerd[1454]: time="2024-07-02T00:28:21.472253171Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.11\" with image id \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\", size \"34601978\" in 2.866441739s"
Jul 2 00:28:21.472287 containerd[1454]: time="2024-07-02T00:28:21.472285561Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\""
Jul 2 00:28:21.497203 containerd[1454]: time="2024-07-02T00:28:21.497123048Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\""
Jul 2 00:28:22.227972 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 2 00:28:22.241647 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:28:22.382944 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:28:22.387523 (kubelet)[1854]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 00:28:22.483179 kubelet[1854]: E0702 00:28:22.483003 1854 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:28:22.490750 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:28:22.490948 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:28:26.299864 containerd[1454]: time="2024-07-02T00:28:26.299793441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:28:26.300765 containerd[1454]: time="2024-07-02T00:28:26.300697467Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.11: active requests=0, bytes read=31719491"
Jul 2 00:28:26.302398 containerd[1454]: time="2024-07-02T00:28:26.302342373Z" level=info msg="ImageCreate event name:\"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:28:26.320972 containerd[1454]: time="2024-07-02T00:28:26.320864321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:28:26.322296 containerd[1454]: time="2024-07-02T00:28:26.322233470Z"
level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.11\" with image id \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\", size \"33315989\" in 4.825056741s"
Jul 2 00:28:26.322342 containerd[1454]: time="2024-07-02T00:28:26.322303982Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\""
Jul 2 00:28:26.348807 containerd[1454]: time="2024-07-02T00:28:26.348757962Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\""
Jul 2 00:28:31.366660 containerd[1454]: time="2024-07-02T00:28:31.366580575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:28:31.367424 containerd[1454]: time="2024-07-02T00:28:31.367370036Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.11: active requests=0, bytes read=16925505"
Jul 2 00:28:31.369161 containerd[1454]: time="2024-07-02T00:28:31.369109009Z" level=info msg="ImageCreate event name:\"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:28:31.372896 containerd[1454]: time="2024-07-02T00:28:31.372845218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:28:31.374048 containerd[1454]: time="2024-07-02T00:28:31.373996097Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.11\" with image id
\"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\", size \"18522021\" in 5.025186719s"
Jul 2 00:28:31.374048 containerd[1454]: time="2024-07-02T00:28:31.374039709Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\""
Jul 2 00:28:31.402732 containerd[1454]: time="2024-07-02T00:28:31.402689167Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\""
Jul 2 00:28:32.741437 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 2 00:28:32.753846 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:28:32.904909 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:28:32.909631 (kubelet)[1895]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 00:28:32.993888 kubelet[1895]: E0702 00:28:32.993507 1895 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:28:32.998404 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:28:32.998628 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:28:33.152912 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3618306442.mount: Deactivated successfully.
Jul 2 00:28:33.740980 containerd[1454]: time="2024-07-02T00:28:33.740930094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:28:33.741760 containerd[1454]: time="2024-07-02T00:28:33.741663059Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.11: active requests=0, bytes read=28118419"
Jul 2 00:28:33.743157 containerd[1454]: time="2024-07-02T00:28:33.743119382Z" level=info msg="ImageCreate event name:\"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:28:33.745550 containerd[1454]: time="2024-07-02T00:28:33.745524394Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:28:33.746304 containerd[1454]: time="2024-07-02T00:28:33.746252880Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.11\" with image id \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\", repo tag \"registry.k8s.io/kube-proxy:v1.28.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\", size \"28117438\" in 2.343517376s"
Jul 2 00:28:33.746340 containerd[1454]: time="2024-07-02T00:28:33.746305870Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\""
Jul 2 00:28:33.787180 containerd[1454]: time="2024-07-02T00:28:33.787114500Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jul 2 00:28:34.902416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3425820737.mount: Deactivated successfully.
Jul 2 00:28:34.909144 containerd[1454]: time="2024-07-02T00:28:34.909101197Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:28:34.909973 containerd[1454]: time="2024-07-02T00:28:34.909922788Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Jul 2 00:28:34.911073 containerd[1454]: time="2024-07-02T00:28:34.911042318Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:28:34.913786 containerd[1454]: time="2024-07-02T00:28:34.913664247Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:28:34.914506 containerd[1454]: time="2024-07-02T00:28:34.914442227Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 1.127283925s"
Jul 2 00:28:34.914506 containerd[1454]: time="2024-07-02T00:28:34.914498552Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jul 2 00:28:34.936947 containerd[1454]: time="2024-07-02T00:28:34.936903646Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Jul 2 00:28:35.546690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2322788277.mount: Deactivated successfully.
Jul 2 00:28:38.255582 containerd[1454]: time="2024-07-02T00:28:38.255524185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:28:38.256429 containerd[1454]: time="2024-07-02T00:28:38.256377326Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625"
Jul 2 00:28:38.257628 containerd[1454]: time="2024-07-02T00:28:38.257586374Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:28:38.260989 containerd[1454]: time="2024-07-02T00:28:38.260945696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:28:38.262217 containerd[1454]: time="2024-07-02T00:28:38.262181645Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.325242452s"
Jul 2 00:28:38.262275 containerd[1454]: time="2024-07-02T00:28:38.262217001Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Jul 2 00:28:38.284821 containerd[1454]: time="2024-07-02T00:28:38.284778308Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""
Jul 2 00:28:39.858535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1208114266.mount: Deactivated successfully.
Jul 2 00:28:40.322418 containerd[1454]: time="2024-07-02T00:28:40.322262263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:28:40.323074 containerd[1454]: time="2024-07-02T00:28:40.323013292Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=16191749"
Jul 2 00:28:40.324233 containerd[1454]: time="2024-07-02T00:28:40.324187435Z" level=info msg="ImageCreate event name:\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:28:40.326755 containerd[1454]: time="2024-07-02T00:28:40.326722271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:28:40.327558 containerd[1454]: time="2024-07-02T00:28:40.327522321Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"16190758\" in 2.042705151s"
Jul 2 00:28:40.327625 containerd[1454]: time="2024-07-02T00:28:40.327558780Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\""
Jul 2 00:28:42.877735 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:28:42.891746 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:28:42.910606 systemd[1]: Reloading requested from client PID 2064 ('systemctl') (unit session-7.scope)...
Jul 2 00:28:42.910624 systemd[1]: Reloading... Jul 2 00:28:43.003624 zram_generator::config[2107]: No configuration found. Jul 2 00:28:43.761711 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:28:43.837318 systemd[1]: Reloading finished in 926 ms. Jul 2 00:28:43.884303 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 2 00:28:43.884414 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 2 00:28:43.884713 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:28:43.887461 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:28:44.041654 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:28:44.047838 (kubelet)[2150]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 00:28:44.095888 kubelet[2150]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:28:44.095888 kubelet[2150]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 00:28:44.095888 kubelet[2150]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 2 00:28:44.096285 kubelet[2150]: I0702 00:28:44.095918 2150 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 00:28:44.327084 kubelet[2150]: I0702 00:28:44.326984 2150 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jul 2 00:28:44.327084 kubelet[2150]: I0702 00:28:44.327012 2150 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 00:28:44.327245 kubelet[2150]: I0702 00:28:44.327229 2150 server.go:895] "Client rotation is on, will bootstrap in background" Jul 2 00:28:44.341734 kubelet[2150]: E0702 00:28:44.341711 2150 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.153:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.153:6443: connect: connection refused Jul 2 00:28:44.341855 kubelet[2150]: I0702 00:28:44.341840 2150 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:28:44.353102 kubelet[2150]: I0702 00:28:44.353071 2150 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 00:28:44.353363 kubelet[2150]: I0702 00:28:44.353338 2150 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 00:28:44.353560 kubelet[2150]: I0702 00:28:44.353533 2150 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 00:28:44.353997 kubelet[2150]: I0702 00:28:44.353973 2150 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 00:28:44.353997 kubelet[2150]: I0702 00:28:44.353988 2150 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 00:28:44.354608 kubelet[2150]: I0702 
00:28:44.354558 2150 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:28:44.355716 kubelet[2150]: I0702 00:28:44.355697 2150 kubelet.go:393] "Attempting to sync node with API server" Jul 2 00:28:44.355716 kubelet[2150]: I0702 00:28:44.355713 2150 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 00:28:44.355768 kubelet[2150]: I0702 00:28:44.355739 2150 kubelet.go:309] "Adding apiserver pod source" Jul 2 00:28:44.355768 kubelet[2150]: I0702 00:28:44.355754 2150 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 00:28:44.357460 kubelet[2150]: W0702 00:28:44.357380 2150 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.153:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: connect: connection refused Jul 2 00:28:44.357460 kubelet[2150]: E0702 00:28:44.357433 2150 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.153:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: connect: connection refused Jul 2 00:28:44.357576 kubelet[2150]: W0702 00:28:44.357509 2150 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.153:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: connect: connection refused Jul 2 00:28:44.357576 kubelet[2150]: E0702 00:28:44.357542 2150 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.153:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: connect: connection refused Jul 2 00:28:44.359106 kubelet[2150]: I0702 00:28:44.357818 2150 kuberuntime_manager.go:257] "Container runtime initialized" 
containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 00:28:44.360172 kubelet[2150]: W0702 00:28:44.360145 2150 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 2 00:28:44.361011 kubelet[2150]: I0702 00:28:44.360844 2150 server.go:1232] "Started kubelet" Jul 2 00:28:44.361011 kubelet[2150]: I0702 00:28:44.360914 2150 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 00:28:44.361092 kubelet[2150]: I0702 00:28:44.361029 2150 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jul 2 00:28:44.361970 kubelet[2150]: I0702 00:28:44.361307 2150 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 00:28:44.361970 kubelet[2150]: I0702 00:28:44.361644 2150 server.go:462] "Adding debug handlers to kubelet server" Jul 2 00:28:44.362966 kubelet[2150]: E0702 00:28:44.362438 2150 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jul 2 00:28:44.362966 kubelet[2150]: I0702 00:28:44.362451 2150 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 00:28:44.362966 kubelet[2150]: E0702 00:28:44.362458 2150 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 00:28:44.364440 kubelet[2150]: E0702 00:28:44.363383 2150 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17de3ddc5cc818e6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 28, 44, 360825062, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 28, 44, 360825062, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.153:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.153:6443: connect: connection refused'(may retry after sleeping) Jul 2 00:28:44.364440 kubelet[2150]: I0702 00:28:44.363689 2150 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 00:28:44.364440 kubelet[2150]: I0702 00:28:44.363759 2150 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 00:28:44.364440 kubelet[2150]: I0702 00:28:44.363827 2150 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 00:28:44.364440 kubelet[2150]: W0702 00:28:44.364086 2150 reflector.go:535] 
vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.153:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: connect: connection refused Jul 2 00:28:44.364717 kubelet[2150]: E0702 00:28:44.364128 2150 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.153:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: connect: connection refused Jul 2 00:28:44.364717 kubelet[2150]: E0702 00:28:44.364345 2150 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.153:6443: connect: connection refused" interval="200ms" Jul 2 00:28:44.382990 kubelet[2150]: I0702 00:28:44.382963 2150 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 00:28:44.384305 kubelet[2150]: I0702 00:28:44.384273 2150 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 00:28:44.384305 kubelet[2150]: I0702 00:28:44.384296 2150 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 00:28:44.384503 kubelet[2150]: I0702 00:28:44.384314 2150 kubelet.go:2303] "Starting kubelet main sync loop" Jul 2 00:28:44.384760 kubelet[2150]: W0702 00:28:44.384722 2150 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.153:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: connect: connection refused Jul 2 00:28:44.384800 kubelet[2150]: E0702 00:28:44.384767 2150 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.153:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: connect: connection refused Jul 2 00:28:44.385651 kubelet[2150]: E0702 00:28:44.385629 2150 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 00:28:44.485958 kubelet[2150]: E0702 00:28:44.485889 2150 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 00:28:44.551845 kubelet[2150]: I0702 00:28:44.551814 2150 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 00:28:44.552185 kubelet[2150]: E0702 00:28:44.552165 2150 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.153:6443/api/v1/nodes\": dial tcp 10.0.0.153:6443: connect: connection refused" node="localhost" Jul 2 00:28:44.552247 kubelet[2150]: I0702 00:28:44.552204 2150 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 00:28:44.552247 kubelet[2150]: I0702 00:28:44.552215 2150 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 00:28:44.552247 
kubelet[2150]: I0702 00:28:44.552229 2150 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:28:44.564692 kubelet[2150]: E0702 00:28:44.564670 2150 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.153:6443: connect: connection refused" interval="400ms" Jul 2 00:28:44.686135 kubelet[2150]: E0702 00:28:44.686035 2150 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 00:28:44.754373 kubelet[2150]: I0702 00:28:44.754334 2150 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 00:28:44.754691 kubelet[2150]: E0702 00:28:44.754669 2150 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.153:6443/api/v1/nodes\": dial tcp 10.0.0.153:6443: connect: connection refused" node="localhost" Jul 2 00:28:44.965063 kubelet[2150]: E0702 00:28:44.965043 2150 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.153:6443: connect: connection refused" interval="800ms" Jul 2 00:28:45.086778 kubelet[2150]: E0702 00:28:45.086736 2150 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 00:28:45.156031 kubelet[2150]: I0702 00:28:45.156014 2150 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 00:28:45.156410 kubelet[2150]: E0702 00:28:45.156300 2150 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.153:6443/api/v1/nodes\": dial tcp 10.0.0.153:6443: connect: connection refused" node="localhost" Jul 2 00:28:45.259992 kubelet[2150]: I0702 00:28:45.259858 2150 policy_none.go:49] "None policy: Start" Jul 2 
00:28:45.260637 kubelet[2150]: I0702 00:28:45.260618 2150 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 2 00:28:45.260673 kubelet[2150]: I0702 00:28:45.260643 2150 state_mem.go:35] "Initializing new in-memory state store" Jul 2 00:28:45.392524 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 2 00:28:45.406435 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 2 00:28:45.409522 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 2 00:28:45.418143 kubelet[2150]: I0702 00:28:45.416442 2150 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 00:28:45.418143 kubelet[2150]: I0702 00:28:45.416742 2150 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 00:28:45.418677 kubelet[2150]: E0702 00:28:45.418655 2150 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 2 00:28:45.461239 kubelet[2150]: W0702 00:28:45.461163 2150 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.153:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: connect: connection refused Jul 2 00:28:45.461239 kubelet[2150]: E0702 00:28:45.461223 2150 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.153:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: connect: connection refused Jul 2 00:28:45.656655 kubelet[2150]: W0702 00:28:45.656509 2150 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.153:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 
10.0.0.153:6443: connect: connection refused Jul 2 00:28:45.656655 kubelet[2150]: E0702 00:28:45.656570 2150 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.153:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: connect: connection refused Jul 2 00:28:45.659794 kubelet[2150]: W0702 00:28:45.659749 2150 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.153:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: connect: connection refused Jul 2 00:28:45.659794 kubelet[2150]: E0702 00:28:45.659784 2150 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.153:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: connect: connection refused Jul 2 00:28:45.765706 kubelet[2150]: E0702 00:28:45.765655 2150 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.153:6443: connect: connection refused" interval="1.6s" Jul 2 00:28:45.887824 kubelet[2150]: I0702 00:28:45.887764 2150 topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 00:28:45.888914 kubelet[2150]: I0702 00:28:45.888884 2150 topology_manager.go:215] "Topology Admit Handler" podUID="304739c463dc1324160e952bda20ca91" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 00:28:45.889527 kubelet[2150]: I0702 00:28:45.889507 2150 topology_manager.go:215] "Topology Admit Handler" podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" 
podName="kube-controller-manager-localhost" Jul 2 00:28:45.895248 systemd[1]: Created slice kubepods-burstable-pod9c3207d669e00aa24ded52617c0d65d0.slice - libcontainer container kubepods-burstable-pod9c3207d669e00aa24ded52617c0d65d0.slice. Jul 2 00:28:45.908538 systemd[1]: Created slice kubepods-burstable-pod304739c463dc1324160e952bda20ca91.slice - libcontainer container kubepods-burstable-pod304739c463dc1324160e952bda20ca91.slice. Jul 2 00:28:45.909303 kubelet[2150]: W0702 00:28:45.909278 2150 helpers.go:242] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod304739c463dc1324160e952bda20ca91.slice/cpuset.cpus.effective": read /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod304739c463dc1324160e952bda20ca91.slice/cpuset.cpus.effective: no such device Jul 2 00:28:45.920393 systemd[1]: Created slice kubepods-burstable-podd27baad490d2d4f748c86b318d7d74ef.slice - libcontainer container kubepods-burstable-podd27baad490d2d4f748c86b318d7d74ef.slice. 
Jul 2 00:28:45.957453 kubelet[2150]: W0702 00:28:45.957405 2150 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.153:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: connect: connection refused Jul 2 00:28:45.957453 kubelet[2150]: E0702 00:28:45.957457 2150 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.153:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: connect: connection refused Jul 2 00:28:45.958099 kubelet[2150]: I0702 00:28:45.958084 2150 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 00:28:45.958401 kubelet[2150]: E0702 00:28:45.958375 2150 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.153:6443/api/v1/nodes\": dial tcp 10.0.0.153:6443: connect: connection refused" node="localhost" Jul 2 00:28:45.971799 kubelet[2150]: I0702 00:28:45.971773 2150 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost" Jul 2 00:28:45.971862 kubelet[2150]: I0702 00:28:45.971814 2150 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/304739c463dc1324160e952bda20ca91-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"304739c463dc1324160e952bda20ca91\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:28:45.971862 kubelet[2150]: I0702 00:28:45.971837 2150 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:28:45.971862 kubelet[2150]: I0702 00:28:45.971856 2150 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:28:45.971953 kubelet[2150]: I0702 00:28:45.971879 2150 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:28:45.971953 kubelet[2150]: I0702 00:28:45.971900 2150 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/304739c463dc1324160e952bda20ca91-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"304739c463dc1324160e952bda20ca91\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:28:45.971953 kubelet[2150]: I0702 00:28:45.971935 2150 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/304739c463dc1324160e952bda20ca91-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"304739c463dc1324160e952bda20ca91\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:28:45.972015 kubelet[2150]: I0702 00:28:45.971979 2150 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:28:45.972015 kubelet[2150]: I0702 00:28:45.972008 2150 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:28:46.207013 kubelet[2150]: E0702 00:28:46.206979 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:28:46.207711 containerd[1454]: time="2024-07-02T00:28:46.207670507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,}" Jul 2 00:28:46.217981 kubelet[2150]: E0702 00:28:46.217954 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:28:46.218414 containerd[1454]: time="2024-07-02T00:28:46.218380747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:304739c463dc1324160e952bda20ca91,Namespace:kube-system,Attempt:0,}" Jul 2 00:28:46.222693 kubelet[2150]: E0702 00:28:46.222679 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:28:46.222986 containerd[1454]: time="2024-07-02T00:28:46.222957597Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,}" Jul 2 00:28:46.536327 kubelet[2150]: E0702 00:28:46.536237 2150 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.153:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.153:6443: connect: connection refused Jul 2 00:28:47.366373 kubelet[2150]: E0702 00:28:47.366326 2150 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.153:6443: connect: connection refused" interval="3.2s" Jul 2 00:28:47.560541 kubelet[2150]: I0702 00:28:47.560494 2150 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 00:28:47.560964 kubelet[2150]: E0702 00:28:47.560927 2150 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.153:6443/api/v1/nodes\": dial tcp 10.0.0.153:6443: connect: connection refused" node="localhost" Jul 2 00:28:47.908736 kubelet[2150]: E0702 00:28:47.908611 2150 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17de3ddc5cc818e6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", 
FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 28, 44, 360825062, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 28, 44, 360825062, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.153:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.153:6443: connect: connection refused'(may retry after sleeping) Jul 2 00:28:48.260816 kubelet[2150]: W0702 00:28:48.260756 2150 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.153:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: connect: connection refused Jul 2 00:28:48.260816 kubelet[2150]: E0702 00:28:48.260820 2150 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.153:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: connect: connection refused Jul 2 00:28:48.344102 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount192503075.mount: Deactivated successfully. 
Jul 2 00:28:48.406987 containerd[1454]: time="2024-07-02T00:28:48.406912678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:28:48.415052 containerd[1454]: time="2024-07-02T00:28:48.414983316Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:28:48.419021 kubelet[2150]: W0702 00:28:48.418963 2150 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.153:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: connect: connection refused Jul 2 00:28:48.419021 kubelet[2150]: E0702 00:28:48.419020 2150 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.153:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: connect: connection refused Jul 2 00:28:48.421442 containerd[1454]: time="2024-07-02T00:28:48.421363333Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 2 00:28:48.431218 containerd[1454]: time="2024-07-02T00:28:48.431155761Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:28:48.450290 containerd[1454]: time="2024-07-02T00:28:48.450202002Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 00:28:48.468207 containerd[1454]: time="2024-07-02T00:28:48.468122216Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:28:48.479988 containerd[1454]: time="2024-07-02T00:28:48.479886732Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 00:28:48.501662 containerd[1454]: time="2024-07-02T00:28:48.501577031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:28:48.502782 containerd[1454]: time="2024-07-02T00:28:48.502716171Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.284256233s" Jul 2 00:28:48.503798 containerd[1454]: time="2024-07-02T00:28:48.503762746Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.295981586s" Jul 2 00:28:48.516109 kubelet[2150]: W0702 00:28:48.515975 2150 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.153:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: connect: connection refused Jul 2 00:28:48.516109 kubelet[2150]: E0702 00:28:48.516021 2150 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://10.0.0.153:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: connect: connection refused Jul 2 00:28:48.540980 containerd[1454]: time="2024-07-02T00:28:48.540922629Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.317893035s" Jul 2 00:28:48.658102 kubelet[2150]: W0702 00:28:48.658051 2150 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.153:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: connect: connection refused Jul 2 00:28:48.658102 kubelet[2150]: E0702 00:28:48.658101 2150 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.153:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.153:6443: connect: connection refused Jul 2 00:28:48.911468 containerd[1454]: time="2024-07-02T00:28:48.910795119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:28:48.911468 containerd[1454]: time="2024-07-02T00:28:48.910935146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:28:48.911468 containerd[1454]: time="2024-07-02T00:28:48.910956486Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:28:48.911468 containerd[1454]: time="2024-07-02T00:28:48.910968379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:28:48.914743 containerd[1454]: time="2024-07-02T00:28:48.914576834Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:28:48.914743 containerd[1454]: time="2024-07-02T00:28:48.914652007Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:28:48.914743 containerd[1454]: time="2024-07-02T00:28:48.914676915Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:28:48.914743 containerd[1454]: time="2024-07-02T00:28:48.914698957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:28:48.916758 containerd[1454]: time="2024-07-02T00:28:48.916525678Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:28:48.916758 containerd[1454]: time="2024-07-02T00:28:48.916709678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:28:48.916758 containerd[1454]: time="2024-07-02T00:28:48.916732121Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:28:48.916758 containerd[1454]: time="2024-07-02T00:28:48.916747670Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:28:48.957791 systemd[1]: Started cri-containerd-21f4c991999c6649b1301d4482427378f6b44b16e65b6772c690258ae763ad51.scope - libcontainer container 21f4c991999c6649b1301d4482427378f6b44b16e65b6772c690258ae763ad51. Jul 2 00:28:48.960138 systemd[1]: Started cri-containerd-52d7a3019b42e2ac5bb6ec06b1e3cbb5dfc09b558a77d395a11c650609f68388.scope - libcontainer container 52d7a3019b42e2ac5bb6ec06b1e3cbb5dfc09b558a77d395a11c650609f68388. Jul 2 00:28:48.962507 systemd[1]: Started cri-containerd-8fc3a674eaed1a7fdd7706ba2926cdee403d663d1d4a8e4ba6698528d1b9d09a.scope - libcontainer container 8fc3a674eaed1a7fdd7706ba2926cdee403d663d1d4a8e4ba6698528d1b9d09a. Jul 2 00:28:49.013600 containerd[1454]: time="2024-07-02T00:28:49.013434384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:304739c463dc1324160e952bda20ca91,Namespace:kube-system,Attempt:0,} returns sandbox id \"21f4c991999c6649b1301d4482427378f6b44b16e65b6772c690258ae763ad51\"" Jul 2 00:28:49.016216 kubelet[2150]: E0702 00:28:49.016177 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:28:49.020287 containerd[1454]: time="2024-07-02T00:28:49.020250137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"8fc3a674eaed1a7fdd7706ba2926cdee403d663d1d4a8e4ba6698528d1b9d09a\"" Jul 2 00:28:49.021418 containerd[1454]: time="2024-07-02T00:28:49.020343626Z" level=info msg="CreateContainer within sandbox \"21f4c991999c6649b1301d4482427378f6b44b16e65b6772c690258ae763ad51\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 00:28:49.021633 kubelet[2150]: E0702 00:28:49.020803 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:28:49.023966 containerd[1454]: time="2024-07-02T00:28:49.023843057Z" level=info msg="CreateContainer within sandbox \"8fc3a674eaed1a7fdd7706ba2926cdee403d663d1d4a8e4ba6698528d1b9d09a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 00:28:49.027441 containerd[1454]: time="2024-07-02T00:28:49.027366474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"52d7a3019b42e2ac5bb6ec06b1e3cbb5dfc09b558a77d395a11c650609f68388\"" Jul 2 00:28:49.028240 kubelet[2150]: E0702 00:28:49.028223 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:28:49.029819 containerd[1454]: time="2024-07-02T00:28:49.029778255Z" level=info msg="CreateContainer within sandbox \"52d7a3019b42e2ac5bb6ec06b1e3cbb5dfc09b558a77d395a11c650609f68388\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 00:28:49.054192 containerd[1454]: time="2024-07-02T00:28:49.054131673Z" level=info msg="CreateContainer within sandbox \"8fc3a674eaed1a7fdd7706ba2926cdee403d663d1d4a8e4ba6698528d1b9d09a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c16513bedc10d11da41869820e5e8fc4d5618f2d38048c3098879b99d3b3bcce\"" Jul 2 00:28:49.054921 containerd[1454]: time="2024-07-02T00:28:49.054895387Z" level=info msg="StartContainer for \"c16513bedc10d11da41869820e5e8fc4d5618f2d38048c3098879b99d3b3bcce\"" Jul 2 00:28:49.055605 containerd[1454]: time="2024-07-02T00:28:49.055537649Z" level=info msg="CreateContainer within sandbox \"52d7a3019b42e2ac5bb6ec06b1e3cbb5dfc09b558a77d395a11c650609f68388\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"9c18ad782f1abb8e2e3bbb07b69efd97f526e964aee48bf45a518c4e80d20246\"" Jul 2 00:28:49.055841 containerd[1454]: time="2024-07-02T00:28:49.055822311Z" level=info msg="StartContainer for \"9c18ad782f1abb8e2e3bbb07b69efd97f526e964aee48bf45a518c4e80d20246\"" Jul 2 00:28:49.057197 containerd[1454]: time="2024-07-02T00:28:49.057162683Z" level=info msg="CreateContainer within sandbox \"21f4c991999c6649b1301d4482427378f6b44b16e65b6772c690258ae763ad51\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"68005bae431e33915c3caf5706cba1eeffc0a5007ce11709ee252c6117770473\"" Jul 2 00:28:49.057578 containerd[1454]: time="2024-07-02T00:28:49.057557625Z" level=info msg="StartContainer for \"68005bae431e33915c3caf5706cba1eeffc0a5007ce11709ee252c6117770473\"" Jul 2 00:28:49.088767 systemd[1]: Started cri-containerd-9c18ad782f1abb8e2e3bbb07b69efd97f526e964aee48bf45a518c4e80d20246.scope - libcontainer container 9c18ad782f1abb8e2e3bbb07b69efd97f526e964aee48bf45a518c4e80d20246. Jul 2 00:28:49.094360 systemd[1]: Started cri-containerd-68005bae431e33915c3caf5706cba1eeffc0a5007ce11709ee252c6117770473.scope - libcontainer container 68005bae431e33915c3caf5706cba1eeffc0a5007ce11709ee252c6117770473. Jul 2 00:28:49.096373 systemd[1]: Started cri-containerd-c16513bedc10d11da41869820e5e8fc4d5618f2d38048c3098879b99d3b3bcce.scope - libcontainer container c16513bedc10d11da41869820e5e8fc4d5618f2d38048c3098879b99d3b3bcce. 
Jul 2 00:28:49.137054 containerd[1454]: time="2024-07-02T00:28:49.136916078Z" level=info msg="StartContainer for \"9c18ad782f1abb8e2e3bbb07b69efd97f526e964aee48bf45a518c4e80d20246\" returns successfully" Jul 2 00:28:49.143899 containerd[1454]: time="2024-07-02T00:28:49.143846981Z" level=info msg="StartContainer for \"c16513bedc10d11da41869820e5e8fc4d5618f2d38048c3098879b99d3b3bcce\" returns successfully" Jul 2 00:28:49.150080 containerd[1454]: time="2024-07-02T00:28:49.150027466Z" level=info msg="StartContainer for \"68005bae431e33915c3caf5706cba1eeffc0a5007ce11709ee252c6117770473\" returns successfully" Jul 2 00:28:49.397906 kubelet[2150]: E0702 00:28:49.397869 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:28:49.404432 kubelet[2150]: E0702 00:28:49.403345 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:28:49.404432 kubelet[2150]: E0702 00:28:49.404359 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:28:50.412671 kubelet[2150]: E0702 00:28:50.410700 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:28:50.413798 kubelet[2150]: E0702 00:28:50.413776 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:28:50.663325 kubelet[2150]: E0702 00:28:50.663190 2150 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 2 
00:28:50.762537 kubelet[2150]: I0702 00:28:50.762504 2150 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 00:28:50.770035 kubelet[2150]: I0702 00:28:50.769998 2150 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Jul 2 00:28:50.779165 kubelet[2150]: E0702 00:28:50.778988 2150 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:28:50.879684 kubelet[2150]: E0702 00:28:50.879641 2150 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:28:50.980172 kubelet[2150]: E0702 00:28:50.980152 2150 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:28:51.080980 kubelet[2150]: E0702 00:28:51.080940 2150 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:28:51.181434 kubelet[2150]: E0702 00:28:51.181392 2150 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:28:51.282159 kubelet[2150]: E0702 00:28:51.282037 2150 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:28:51.383054 kubelet[2150]: E0702 00:28:51.382996 2150 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:28:51.406661 kubelet[2150]: E0702 00:28:51.406629 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:28:51.483483 kubelet[2150]: E0702 00:28:51.483430 2150 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:28:51.584193 kubelet[2150]: E0702 00:28:51.584059 2150 kubelet_node_status.go:458] "Error getting the 
current node from lister" err="node \"localhost\" not found" Jul 2 00:28:51.684788 kubelet[2150]: E0702 00:28:51.684735 2150 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:28:51.785438 kubelet[2150]: E0702 00:28:51.785359 2150 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:28:51.886196 kubelet[2150]: E0702 00:28:51.886076 2150 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:28:51.986790 kubelet[2150]: E0702 00:28:51.986735 2150 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:28:52.087366 kubelet[2150]: E0702 00:28:52.087322 2150 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:28:52.188065 kubelet[2150]: E0702 00:28:52.187936 2150 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:28:52.288727 kubelet[2150]: E0702 00:28:52.288674 2150 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:28:52.388822 kubelet[2150]: E0702 00:28:52.388771 2150 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:28:52.489577 kubelet[2150]: E0702 00:28:52.489528 2150 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:28:52.590368 kubelet[2150]: E0702 00:28:52.590324 2150 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:28:53.290449 kubelet[2150]: E0702 00:28:53.290413 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Jul 2 00:28:53.361608 kubelet[2150]: I0702 00:28:53.361547 2150 apiserver.go:52] "Watching apiserver" Jul 2 00:28:53.364333 kubelet[2150]: I0702 00:28:53.364307 2150 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 00:28:53.408936 kubelet[2150]: E0702 00:28:53.408906 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:28:53.632802 systemd[1]: Reloading requested from client PID 2430 ('systemctl') (unit session-7.scope)... Jul 2 00:28:53.632818 systemd[1]: Reloading... Jul 2 00:28:53.720516 zram_generator::config[2470]: No configuration found. Jul 2 00:28:53.834937 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:28:53.924811 systemd[1]: Reloading finished in 291 ms. Jul 2 00:28:53.969335 kubelet[2150]: I0702 00:28:53.969274 2150 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:28:53.969376 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:28:53.980874 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 00:28:53.981181 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:28:53.990096 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:28:54.124987 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 2 00:28:54.131137 (kubelet)[2512]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 00:28:54.178537 kubelet[2512]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:28:54.178537 kubelet[2512]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 00:28:54.178537 kubelet[2512]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:28:54.178949 kubelet[2512]: I0702 00:28:54.178509 2512 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 00:28:54.183332 kubelet[2512]: I0702 00:28:54.183213 2512 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jul 2 00:28:54.183332 kubelet[2512]: I0702 00:28:54.183240 2512 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 00:28:54.183541 kubelet[2512]: I0702 00:28:54.183456 2512 server.go:895] "Client rotation is on, will bootstrap in background" Jul 2 00:28:54.184821 kubelet[2512]: I0702 00:28:54.184804 2512 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 00:28:54.185688 kubelet[2512]: I0702 00:28:54.185664 2512 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:28:54.194220 kubelet[2512]: I0702 00:28:54.194199 2512 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 00:28:54.194438 kubelet[2512]: I0702 00:28:54.194416 2512 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 00:28:54.194624 kubelet[2512]: I0702 00:28:54.194587 2512 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 00:28:54.194698 kubelet[2512]: I0702 00:28:54.194641 2512 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 00:28:54.194698 kubelet[2512]: I0702 00:28:54.194652 2512 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 00:28:54.194698 kubelet[2512]: I0702 
00:28:54.194692 2512 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:28:54.194802 kubelet[2512]: I0702 00:28:54.194783 2512 kubelet.go:393] "Attempting to sync node with API server" Jul 2 00:28:54.194802 kubelet[2512]: I0702 00:28:54.194799 2512 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 00:28:54.194868 kubelet[2512]: I0702 00:28:54.194823 2512 kubelet.go:309] "Adding apiserver pod source" Jul 2 00:28:54.194868 kubelet[2512]: I0702 00:28:54.194839 2512 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 00:28:54.195525 kubelet[2512]: I0702 00:28:54.195446 2512 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 00:28:54.197908 kubelet[2512]: I0702 00:28:54.197880 2512 server.go:1232] "Started kubelet" Jul 2 00:28:54.199927 kubelet[2512]: I0702 00:28:54.199789 2512 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 00:28:54.203844 kubelet[2512]: E0702 00:28:54.200589 2512 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jul 2 00:28:54.203844 kubelet[2512]: E0702 00:28:54.200632 2512 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 00:28:54.203844 kubelet[2512]: I0702 00:28:54.201811 2512 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 00:28:54.203844 kubelet[2512]: I0702 00:28:54.201923 2512 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 00:28:54.203844 kubelet[2512]: I0702 00:28:54.202089 2512 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 00:28:54.203844 kubelet[2512]: I0702 00:28:54.202723 2512 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 00:28:54.204381 kubelet[2512]: I0702 00:28:54.204365 2512 server.go:462] "Adding debug handlers to kubelet server" Jul 2 00:28:54.205822 kubelet[2512]: I0702 00:28:54.205805 2512 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jul 2 00:28:54.206119 kubelet[2512]: I0702 00:28:54.206104 2512 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 00:28:54.217743 kubelet[2512]: I0702 00:28:54.217699 2512 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 00:28:54.220269 kubelet[2512]: I0702 00:28:54.220028 2512 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 00:28:54.220269 kubelet[2512]: I0702 00:28:54.220052 2512 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 00:28:54.220269 kubelet[2512]: I0702 00:28:54.220074 2512 kubelet.go:2303] "Starting kubelet main sync loop" Jul 2 00:28:54.220269 kubelet[2512]: E0702 00:28:54.220128 2512 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 00:28:54.258596 kubelet[2512]: I0702 00:28:54.258565 2512 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 00:28:54.258734 kubelet[2512]: I0702 00:28:54.258628 2512 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 00:28:54.258734 kubelet[2512]: I0702 00:28:54.258646 2512 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:28:54.258826 kubelet[2512]: I0702 00:28:54.258784 2512 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 00:28:54.258826 kubelet[2512]: I0702 00:28:54.258809 2512 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 00:28:54.258826 kubelet[2512]: I0702 00:28:54.258816 2512 policy_none.go:49] "None policy: Start" Jul 2 00:28:54.259516 kubelet[2512]: I0702 00:28:54.259493 2512 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 2 00:28:54.259516 kubelet[2512]: I0702 00:28:54.259515 2512 state_mem.go:35] "Initializing new in-memory state store" Jul 2 00:28:54.259653 kubelet[2512]: I0702 00:28:54.259636 2512 state_mem.go:75] "Updated machine memory state" Jul 2 00:28:54.263609 kubelet[2512]: I0702 00:28:54.263541 2512 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 00:28:54.264119 kubelet[2512]: I0702 00:28:54.263944 2512 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 00:28:54.321295 kubelet[2512]: I0702 00:28:54.321255 2512 topology_manager.go:215] "Topology Admit Handler" 
podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 00:28:54.321492 kubelet[2512]: I0702 00:28:54.321358 2512 topology_manager.go:215] "Topology Admit Handler" podUID="304739c463dc1324160e952bda20ca91" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 00:28:54.321492 kubelet[2512]: I0702 00:28:54.321396 2512 topology_manager.go:215] "Topology Admit Handler" podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 00:28:54.347752 kubelet[2512]: E0702 00:28:54.347710 2512 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 2 00:28:54.370078 kubelet[2512]: I0702 00:28:54.370054 2512 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 00:28:54.390503 kubelet[2512]: I0702 00:28:54.381205 2512 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Jul 2 00:28:54.390503 kubelet[2512]: I0702 00:28:54.381437 2512 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Jul 2 00:28:54.395925 sudo[2547]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 2 00:28:54.396559 sudo[2547]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 2 00:28:54.402872 kubelet[2512]: I0702 00:28:54.402838 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost" Jul 2 00:28:54.402949 kubelet[2512]: I0702 00:28:54.402927 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:28:54.402949 kubelet[2512]: I0702 00:28:54.402948 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:28:54.404547 kubelet[2512]: I0702 00:28:54.404525 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:28:54.404589 kubelet[2512]: I0702 00:28:54.404554 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/304739c463dc1324160e952bda20ca91-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"304739c463dc1324160e952bda20ca91\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:28:54.404626 kubelet[2512]: I0702 00:28:54.404605 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/304739c463dc1324160e952bda20ca91-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"304739c463dc1324160e952bda20ca91\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:28:54.404651 kubelet[2512]: I0702 00:28:54.404631 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/304739c463dc1324160e952bda20ca91-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"304739c463dc1324160e952bda20ca91\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:28:54.404682 kubelet[2512]: I0702 00:28:54.404673 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:28:54.404707 kubelet[2512]: I0702 00:28:54.404690 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:28:54.626192 kubelet[2512]: E0702 00:28:54.626158 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:28:54.632450 kubelet[2512]: E0702 00:28:54.632400 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:28:54.649200 kubelet[2512]: E0702 00:28:54.649155 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:28:54.892560 sudo[2547]: pam_unix(sudo:session): session closed for user root Jul 2 00:28:55.196073 kubelet[2512]: I0702 00:28:55.196023 2512 apiserver.go:52] "Watching apiserver" Jul 2 00:28:55.202821 kubelet[2512]: I0702 
00:28:55.202789 2512 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 00:28:55.232148 kubelet[2512]: E0702 00:28:55.232107 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:28:55.232148 kubelet[2512]: E0702 00:28:55.232126 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:28:55.237437 kubelet[2512]: E0702 00:28:55.237420 2512 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 2 00:28:55.237838 kubelet[2512]: E0702 00:28:55.237809 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:28:55.248148 kubelet[2512]: I0702 00:28:55.248114 2512 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.248064122 podCreationTimestamp="2024-07-02 00:28:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:28:55.247906072 +0000 UTC m=+1.112375639" watchObservedRunningTime="2024-07-02 00:28:55.248064122 +0000 UTC m=+1.112533679" Jul 2 00:28:55.258328 kubelet[2512]: I0702 00:28:55.258301 2512 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.258272173 podCreationTimestamp="2024-07-02 00:28:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:28:55.25352583 +0000 UTC 
m=+1.117995407" watchObservedRunningTime="2024-07-02 00:28:55.258272173 +0000 UTC m=+1.122741730" Jul 2 00:28:55.263923 kubelet[2512]: I0702 00:28:55.263885 2512 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.263853068 podCreationTimestamp="2024-07-02 00:28:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:28:55.258430463 +0000 UTC m=+1.122900020" watchObservedRunningTime="2024-07-02 00:28:55.263853068 +0000 UTC m=+1.128322625" Jul 2 00:28:55.533646 update_engine[1438]: I0702 00:28:55.533504 1438 update_attempter.cc:509] Updating boot flags... Jul 2 00:28:55.560519 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2565) Jul 2 00:28:55.601509 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2566) Jul 2 00:28:55.637551 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2566) Jul 2 00:28:56.000077 sudo[1635]: pam_unix(sudo:session): session closed for user root Jul 2 00:28:56.001913 sshd[1632]: pam_unix(sshd:session): session closed for user core Jul 2 00:28:56.005694 systemd[1]: sshd@6-10.0.0.153:22-10.0.0.1:47778.service: Deactivated successfully. Jul 2 00:28:56.007639 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 00:28:56.007849 systemd[1]: session-7.scope: Consumed 4.557s CPU time, 142.8M memory peak, 0B memory swap peak. Jul 2 00:28:56.008228 systemd-logind[1436]: Session 7 logged out. Waiting for processes to exit. Jul 2 00:28:56.008993 systemd-logind[1436]: Removed session 7. 
Jul 2 00:28:56.233105 kubelet[2512]: E0702 00:28:56.233069 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:28:56.233555 kubelet[2512]: E0702 00:28:56.233138 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:28:59.028850 kubelet[2512]: E0702 00:28:59.028814 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:28:59.237053 kubelet[2512]: E0702 00:28:59.237017 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:03.174006 kubelet[2512]: E0702 00:29:03.173963 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:03.242007 kubelet[2512]: E0702 00:29:03.241967 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:05.785223 kubelet[2512]: E0702 00:29:05.785193 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:06.242680 kubelet[2512]: I0702 00:29:06.242644 2512 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 00:29:06.243100 containerd[1454]: time="2024-07-02T00:29:06.243060087Z" level=info msg="No cni config template is specified, wait for other system components to drop the 
config." Jul 2 00:29:06.243514 kubelet[2512]: I0702 00:29:06.243322 2512 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 00:29:07.072660 kubelet[2512]: I0702 00:29:07.071849 2512 topology_manager.go:215] "Topology Admit Handler" podUID="043efc9c-4261-4284-a5d9-f9419c8dc109" podNamespace="kube-system" podName="kube-proxy-s2tb7" Jul 2 00:29:07.088184 systemd[1]: Created slice kubepods-besteffort-pod043efc9c_4261_4284_a5d9_f9419c8dc109.slice - libcontainer container kubepods-besteffort-pod043efc9c_4261_4284_a5d9_f9419c8dc109.slice. Jul 2 00:29:07.089507 kubelet[2512]: I0702 00:29:07.088927 2512 topology_manager.go:215] "Topology Admit Handler" podUID="306175a9-b679-4494-a474-96d766f9c018" podNamespace="kube-system" podName="cilium-d7qzq" Jul 2 00:29:07.107720 systemd[1]: Created slice kubepods-burstable-pod306175a9_b679_4494_a474_96d766f9c018.slice - libcontainer container kubepods-burstable-pod306175a9_b679_4494_a474_96d766f9c018.slice. Jul 2 00:29:07.176388 kubelet[2512]: I0702 00:29:07.176354 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/043efc9c-4261-4284-a5d9-f9419c8dc109-xtables-lock\") pod \"kube-proxy-s2tb7\" (UID: \"043efc9c-4261-4284-a5d9-f9419c8dc109\") " pod="kube-system/kube-proxy-s2tb7" Jul 2 00:29:07.176388 kubelet[2512]: I0702 00:29:07.176396 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mv5f\" (UniqueName: \"kubernetes.io/projected/043efc9c-4261-4284-a5d9-f9419c8dc109-kube-api-access-6mv5f\") pod \"kube-proxy-s2tb7\" (UID: \"043efc9c-4261-4284-a5d9-f9419c8dc109\") " pod="kube-system/kube-proxy-s2tb7" Jul 2 00:29:07.176617 kubelet[2512]: I0702 00:29:07.176423 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/043efc9c-4261-4284-a5d9-f9419c8dc109-lib-modules\") pod \"kube-proxy-s2tb7\" (UID: \"043efc9c-4261-4284-a5d9-f9419c8dc109\") " pod="kube-system/kube-proxy-s2tb7" Jul 2 00:29:07.176617 kubelet[2512]: I0702 00:29:07.176445 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/043efc9c-4261-4284-a5d9-f9419c8dc109-kube-proxy\") pod \"kube-proxy-s2tb7\" (UID: \"043efc9c-4261-4284-a5d9-f9419c8dc109\") " pod="kube-system/kube-proxy-s2tb7" Jul 2 00:29:07.264298 kubelet[2512]: I0702 00:29:07.263872 2512 topology_manager.go:215] "Topology Admit Handler" podUID="ce9a8c83-d186-4579-b3f7-034bbcbbe538" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-rwlf2" Jul 2 00:29:07.274459 systemd[1]: Created slice kubepods-besteffort-podce9a8c83_d186_4579_b3f7_034bbcbbe538.slice - libcontainer container kubepods-besteffort-podce9a8c83_d186_4579_b3f7_034bbcbbe538.slice. Jul 2 00:29:07.277321 kubelet[2512]: I0702 00:29:07.277295 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/306175a9-b679-4494-a474-96d766f9c018-cilium-run\") pod \"cilium-d7qzq\" (UID: \"306175a9-b679-4494-a474-96d766f9c018\") " pod="kube-system/cilium-d7qzq" Jul 2 00:29:07.278620 kubelet[2512]: I0702 00:29:07.278578 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/306175a9-b679-4494-a474-96d766f9c018-host-proc-sys-net\") pod \"cilium-d7qzq\" (UID: \"306175a9-b679-4494-a474-96d766f9c018\") " pod="kube-system/cilium-d7qzq" Jul 2 00:29:07.279063 kubelet[2512]: I0702 00:29:07.278805 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/306175a9-b679-4494-a474-96d766f9c018-lib-modules\") pod \"cilium-d7qzq\" (UID: \"306175a9-b679-4494-a474-96d766f9c018\") " pod="kube-system/cilium-d7qzq" Jul 2 00:29:07.279063 kubelet[2512]: I0702 00:29:07.278841 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ce9a8c83-d186-4579-b3f7-034bbcbbe538-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-rwlf2\" (UID: \"ce9a8c83-d186-4579-b3f7-034bbcbbe538\") " pod="kube-system/cilium-operator-6bc8ccdb58-rwlf2" Jul 2 00:29:07.279063 kubelet[2512]: I0702 00:29:07.278867 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/306175a9-b679-4494-a474-96d766f9c018-cilium-cgroup\") pod \"cilium-d7qzq\" (UID: \"306175a9-b679-4494-a474-96d766f9c018\") " pod="kube-system/cilium-d7qzq" Jul 2 00:29:07.279063 kubelet[2512]: I0702 00:29:07.278891 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/306175a9-b679-4494-a474-96d766f9c018-clustermesh-secrets\") pod \"cilium-d7qzq\" (UID: \"306175a9-b679-4494-a474-96d766f9c018\") " pod="kube-system/cilium-d7qzq" Jul 2 00:29:07.279063 kubelet[2512]: I0702 00:29:07.278917 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/306175a9-b679-4494-a474-96d766f9c018-hubble-tls\") pod \"cilium-d7qzq\" (UID: \"306175a9-b679-4494-a474-96d766f9c018\") " pod="kube-system/cilium-d7qzq" Jul 2 00:29:07.279256 kubelet[2512]: I0702 00:29:07.278957 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/306175a9-b679-4494-a474-96d766f9c018-cni-path\") pod 
\"cilium-d7qzq\" (UID: \"306175a9-b679-4494-a474-96d766f9c018\") " pod="kube-system/cilium-d7qzq" Jul 2 00:29:07.279256 kubelet[2512]: I0702 00:29:07.278980 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/306175a9-b679-4494-a474-96d766f9c018-etc-cni-netd\") pod \"cilium-d7qzq\" (UID: \"306175a9-b679-4494-a474-96d766f9c018\") " pod="kube-system/cilium-d7qzq" Jul 2 00:29:07.280564 kubelet[2512]: I0702 00:29:07.280309 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/306175a9-b679-4494-a474-96d766f9c018-xtables-lock\") pod \"cilium-d7qzq\" (UID: \"306175a9-b679-4494-a474-96d766f9c018\") " pod="kube-system/cilium-d7qzq" Jul 2 00:29:07.280564 kubelet[2512]: I0702 00:29:07.280352 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/306175a9-b679-4494-a474-96d766f9c018-cilium-config-path\") pod \"cilium-d7qzq\" (UID: \"306175a9-b679-4494-a474-96d766f9c018\") " pod="kube-system/cilium-d7qzq" Jul 2 00:29:07.280564 kubelet[2512]: I0702 00:29:07.280378 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn64s\" (UniqueName: \"kubernetes.io/projected/306175a9-b679-4494-a474-96d766f9c018-kube-api-access-hn64s\") pod \"cilium-d7qzq\" (UID: \"306175a9-b679-4494-a474-96d766f9c018\") " pod="kube-system/cilium-d7qzq" Jul 2 00:29:07.280564 kubelet[2512]: I0702 00:29:07.280447 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/306175a9-b679-4494-a474-96d766f9c018-hostproc\") pod \"cilium-d7qzq\" (UID: \"306175a9-b679-4494-a474-96d766f9c018\") " pod="kube-system/cilium-d7qzq" Jul 2 00:29:07.280564 
kubelet[2512]: I0702 00:29:07.280501 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/306175a9-b679-4494-a474-96d766f9c018-host-proc-sys-kernel\") pod \"cilium-d7qzq\" (UID: \"306175a9-b679-4494-a474-96d766f9c018\") " pod="kube-system/cilium-d7qzq" Jul 2 00:29:07.281085 kubelet[2512]: I0702 00:29:07.280962 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/306175a9-b679-4494-a474-96d766f9c018-bpf-maps\") pod \"cilium-d7qzq\" (UID: \"306175a9-b679-4494-a474-96d766f9c018\") " pod="kube-system/cilium-d7qzq" Jul 2 00:29:07.281085 kubelet[2512]: I0702 00:29:07.280992 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lm2f\" (UniqueName: \"kubernetes.io/projected/ce9a8c83-d186-4579-b3f7-034bbcbbe538-kube-api-access-9lm2f\") pod \"cilium-operator-6bc8ccdb58-rwlf2\" (UID: \"ce9a8c83-d186-4579-b3f7-034bbcbbe538\") " pod="kube-system/cilium-operator-6bc8ccdb58-rwlf2" Jul 2 00:29:07.397855 kubelet[2512]: E0702 00:29:07.397193 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:07.401282 containerd[1454]: time="2024-07-02T00:29:07.400647284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s2tb7,Uid:043efc9c-4261-4284-a5d9-f9419c8dc109,Namespace:kube-system,Attempt:0,}" Jul 2 00:29:07.412576 kubelet[2512]: E0702 00:29:07.412542 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:07.413104 containerd[1454]: time="2024-07-02T00:29:07.413059587Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-d7qzq,Uid:306175a9-b679-4494-a474-96d766f9c018,Namespace:kube-system,Attempt:0,}" Jul 2 00:29:07.433429 containerd[1454]: time="2024-07-02T00:29:07.432433561Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:29:07.433429 containerd[1454]: time="2024-07-02T00:29:07.432562875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:29:07.433429 containerd[1454]: time="2024-07-02T00:29:07.432600946Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:29:07.433429 containerd[1454]: time="2024-07-02T00:29:07.432620965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:29:07.439419 containerd[1454]: time="2024-07-02T00:29:07.439305472Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:29:07.439419 containerd[1454]: time="2024-07-02T00:29:07.439382087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:29:07.439585 containerd[1454]: time="2024-07-02T00:29:07.439405962Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:29:07.439585 containerd[1454]: time="2024-07-02T00:29:07.439433493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:29:07.456682 systemd[1]: Started cri-containerd-ba4c5a5f24b60ffbe5fc0203dec944fb82f0f7cd5fa14b141acce32b77e303fa.scope - libcontainer container ba4c5a5f24b60ffbe5fc0203dec944fb82f0f7cd5fa14b141acce32b77e303fa. 
Jul 2 00:29:07.460362 systemd[1]: Started cri-containerd-0df9d06b1f029f1213216ef3c46d9fcbf14c71adc85ec434cc3782d5a6380513.scope - libcontainer container 0df9d06b1f029f1213216ef3c46d9fcbf14c71adc85ec434cc3782d5a6380513. Jul 2 00:29:07.482837 containerd[1454]: time="2024-07-02T00:29:07.482672796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s2tb7,Uid:043efc9c-4261-4284-a5d9-f9419c8dc109,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba4c5a5f24b60ffbe5fc0203dec944fb82f0f7cd5fa14b141acce32b77e303fa\"" Jul 2 00:29:07.484059 containerd[1454]: time="2024-07-02T00:29:07.484016227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d7qzq,Uid:306175a9-b679-4494-a474-96d766f9c018,Namespace:kube-system,Attempt:0,} returns sandbox id \"0df9d06b1f029f1213216ef3c46d9fcbf14c71adc85ec434cc3782d5a6380513\"" Jul 2 00:29:07.484240 kubelet[2512]: E0702 00:29:07.484212 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:07.484977 kubelet[2512]: E0702 00:29:07.484915 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:07.486116 containerd[1454]: time="2024-07-02T00:29:07.486073715Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 2 00:29:07.487230 containerd[1454]: time="2024-07-02T00:29:07.487140475Z" level=info msg="CreateContainer within sandbox \"ba4c5a5f24b60ffbe5fc0203dec944fb82f0f7cd5fa14b141acce32b77e303fa\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 00:29:07.529007 containerd[1454]: time="2024-07-02T00:29:07.528931878Z" level=info msg="CreateContainer within sandbox \"ba4c5a5f24b60ffbe5fc0203dec944fb82f0f7cd5fa14b141acce32b77e303fa\" for 
&ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"455b6863b81ed0f6d3f0a4f286267a6ffb5e54718604a8ac55092da3b8aea59a\"" Jul 2 00:29:07.529652 containerd[1454]: time="2024-07-02T00:29:07.529615416Z" level=info msg="StartContainer for \"455b6863b81ed0f6d3f0a4f286267a6ffb5e54718604a8ac55092da3b8aea59a\"" Jul 2 00:29:07.564748 systemd[1]: Started cri-containerd-455b6863b81ed0f6d3f0a4f286267a6ffb5e54718604a8ac55092da3b8aea59a.scope - libcontainer container 455b6863b81ed0f6d3f0a4f286267a6ffb5e54718604a8ac55092da3b8aea59a. Jul 2 00:29:07.581848 kubelet[2512]: E0702 00:29:07.581678 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:07.582994 containerd[1454]: time="2024-07-02T00:29:07.582689805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-rwlf2,Uid:ce9a8c83-d186-4579-b3f7-034bbcbbe538,Namespace:kube-system,Attempt:0,}" Jul 2 00:29:07.596452 containerd[1454]: time="2024-07-02T00:29:07.596388803Z" level=info msg="StartContainer for \"455b6863b81ed0f6d3f0a4f286267a6ffb5e54718604a8ac55092da3b8aea59a\" returns successfully" Jul 2 00:29:07.618765 containerd[1454]: time="2024-07-02T00:29:07.615251422Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:29:07.618765 containerd[1454]: time="2024-07-02T00:29:07.615536210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:29:07.618765 containerd[1454]: time="2024-07-02T00:29:07.615573470Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:29:07.618765 containerd[1454]: time="2024-07-02T00:29:07.615587836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:29:07.637047 systemd[1]: Started cri-containerd-2dad5f2899b9c7c8dbdc4560d7cf0dc4871d58245b2b27fb60fd267c47d3262b.scope - libcontainer container 2dad5f2899b9c7c8dbdc4560d7cf0dc4871d58245b2b27fb60fd267c47d3262b. Jul 2 00:29:07.678270 containerd[1454]: time="2024-07-02T00:29:07.678139085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-rwlf2,Uid:ce9a8c83-d186-4579-b3f7-034bbcbbe538,Namespace:kube-system,Attempt:0,} returns sandbox id \"2dad5f2899b9c7c8dbdc4560d7cf0dc4871d58245b2b27fb60fd267c47d3262b\"" Jul 2 00:29:07.679179 kubelet[2512]: E0702 00:29:07.679033 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:08.252330 kubelet[2512]: E0702 00:29:08.252264 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:14.440292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2575661043.mount: Deactivated successfully. 
Jul 2 00:29:16.812041 containerd[1454]: time="2024-07-02T00:29:16.811984272Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:29:16.812948 containerd[1454]: time="2024-07-02T00:29:16.812905115Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735303" Jul 2 00:29:16.814324 containerd[1454]: time="2024-07-02T00:29:16.814280962Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:29:16.815929 containerd[1454]: time="2024-07-02T00:29:16.815897483Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.329778954s" Jul 2 00:29:16.815966 containerd[1454]: time="2024-07-02T00:29:16.815929012Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 2 00:29:16.816753 containerd[1454]: time="2024-07-02T00:29:16.816510105Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 2 00:29:16.817785 containerd[1454]: time="2024-07-02T00:29:16.817748184Z" level=info msg="CreateContainer within sandbox \"0df9d06b1f029f1213216ef3c46d9fcbf14c71adc85ec434cc3782d5a6380513\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 00:29:16.830966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3908025384.mount: Deactivated successfully. Jul 2 00:29:16.832713 containerd[1454]: time="2024-07-02T00:29:16.832647608Z" level=info msg="CreateContainer within sandbox \"0df9d06b1f029f1213216ef3c46d9fcbf14c71adc85ec434cc3782d5a6380513\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6ed2ba8c0d35b98a9f8166c2bf00e98fc1e91b9809e3db491ec9b53cd90527a0\"" Jul 2 00:29:16.833140 containerd[1454]: time="2024-07-02T00:29:16.833092765Z" level=info msg="StartContainer for \"6ed2ba8c0d35b98a9f8166c2bf00e98fc1e91b9809e3db491ec9b53cd90527a0\"" Jul 2 00:29:16.868715 systemd[1]: Started cri-containerd-6ed2ba8c0d35b98a9f8166c2bf00e98fc1e91b9809e3db491ec9b53cd90527a0.scope - libcontainer container 6ed2ba8c0d35b98a9f8166c2bf00e98fc1e91b9809e3db491ec9b53cd90527a0. Jul 2 00:29:16.896179 containerd[1454]: time="2024-07-02T00:29:16.896132365Z" level=info msg="StartContainer for \"6ed2ba8c0d35b98a9f8166c2bf00e98fc1e91b9809e3db491ec9b53cd90527a0\" returns successfully" Jul 2 00:29:16.907572 systemd[1]: cri-containerd-6ed2ba8c0d35b98a9f8166c2bf00e98fc1e91b9809e3db491ec9b53cd90527a0.scope: Deactivated successfully. 
Jul 2 00:29:17.523040 kubelet[2512]: E0702 00:29:17.523002 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:17.575819 kubelet[2512]: I0702 00:29:17.575762 2512 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-s2tb7" podStartSLOduration=10.57572675 podCreationTimestamp="2024-07-02 00:29:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:29:08.304086628 +0000 UTC m=+14.168556185" watchObservedRunningTime="2024-07-02 00:29:17.57572675 +0000 UTC m=+23.440196307" Jul 2 00:29:17.601034 containerd[1454]: time="2024-07-02T00:29:17.600973118Z" level=info msg="shim disconnected" id=6ed2ba8c0d35b98a9f8166c2bf00e98fc1e91b9809e3db491ec9b53cd90527a0 namespace=k8s.io Jul 2 00:29:17.601034 containerd[1454]: time="2024-07-02T00:29:17.601028842Z" level=warning msg="cleaning up after shim disconnected" id=6ed2ba8c0d35b98a9f8166c2bf00e98fc1e91b9809e3db491ec9b53cd90527a0 namespace=k8s.io Jul 2 00:29:17.601034 containerd[1454]: time="2024-07-02T00:29:17.601040364Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:29:17.828943 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ed2ba8c0d35b98a9f8166c2bf00e98fc1e91b9809e3db491ec9b53cd90527a0-rootfs.mount: Deactivated successfully. Jul 2 00:29:18.446441 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount987198002.mount: Deactivated successfully. 
Jul 2 00:29:18.526193 kubelet[2512]: E0702 00:29:18.526151 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:18.528501 containerd[1454]: time="2024-07-02T00:29:18.528294543Z" level=info msg="CreateContainer within sandbox \"0df9d06b1f029f1213216ef3c46d9fcbf14c71adc85ec434cc3782d5a6380513\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 00:29:18.544244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount177288647.mount: Deactivated successfully. Jul 2 00:29:18.544511 containerd[1454]: time="2024-07-02T00:29:18.544320136Z" level=info msg="CreateContainer within sandbox \"0df9d06b1f029f1213216ef3c46d9fcbf14c71adc85ec434cc3782d5a6380513\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4d3edf7885fda8294220276dac2cdcae820a4eedfc3b3b03a849523b5468c80f\"" Jul 2 00:29:18.546211 containerd[1454]: time="2024-07-02T00:29:18.545001537Z" level=info msg="StartContainer for \"4d3edf7885fda8294220276dac2cdcae820a4eedfc3b3b03a849523b5468c80f\"" Jul 2 00:29:18.576632 systemd[1]: Started cri-containerd-4d3edf7885fda8294220276dac2cdcae820a4eedfc3b3b03a849523b5468c80f.scope - libcontainer container 4d3edf7885fda8294220276dac2cdcae820a4eedfc3b3b03a849523b5468c80f. Jul 2 00:29:18.625234 containerd[1454]: time="2024-07-02T00:29:18.625196287Z" level=info msg="StartContainer for \"4d3edf7885fda8294220276dac2cdcae820a4eedfc3b3b03a849523b5468c80f\" returns successfully" Jul 2 00:29:18.632340 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 00:29:18.632742 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 2 00:29:18.632809 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 2 00:29:18.642116 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jul 2 00:29:18.642370 systemd[1]: cri-containerd-4d3edf7885fda8294220276dac2cdcae820a4eedfc3b3b03a849523b5468c80f.scope: Deactivated successfully. Jul 2 00:29:18.702570 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 00:29:18.706549 containerd[1454]: time="2024-07-02T00:29:18.706229023Z" level=info msg="shim disconnected" id=4d3edf7885fda8294220276dac2cdcae820a4eedfc3b3b03a849523b5468c80f namespace=k8s.io Jul 2 00:29:18.706549 containerd[1454]: time="2024-07-02T00:29:18.706394244Z" level=warning msg="cleaning up after shim disconnected" id=4d3edf7885fda8294220276dac2cdcae820a4eedfc3b3b03a849523b5468c80f namespace=k8s.io Jul 2 00:29:18.706549 containerd[1454]: time="2024-07-02T00:29:18.706405345Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:29:18.828386 containerd[1454]: time="2024-07-02T00:29:18.828331533Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:29:18.829374 containerd[1454]: time="2024-07-02T00:29:18.829338246Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907225" Jul 2 00:29:18.830366 containerd[1454]: time="2024-07-02T00:29:18.830325351Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:29:18.831612 containerd[1454]: time="2024-07-02T00:29:18.831584940Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest 
\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.015038787s" Jul 2 00:29:18.831665 containerd[1454]: time="2024-07-02T00:29:18.831612883Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 2 00:29:18.833152 containerd[1454]: time="2024-07-02T00:29:18.833130396Z" level=info msg="CreateContainer within sandbox \"2dad5f2899b9c7c8dbdc4560d7cf0dc4871d58245b2b27fb60fd267c47d3262b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 2 00:29:18.843432 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount921564436.mount: Deactivated successfully. Jul 2 00:29:18.844782 containerd[1454]: time="2024-07-02T00:29:18.844748722Z" level=info msg="CreateContainer within sandbox \"2dad5f2899b9c7c8dbdc4560d7cf0dc4871d58245b2b27fb60fd267c47d3262b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1c995632b04eda5a395bba379032fd941361936159149bde77286510359c745c\"" Jul 2 00:29:18.845135 containerd[1454]: time="2024-07-02T00:29:18.845116634Z" level=info msg="StartContainer for \"1c995632b04eda5a395bba379032fd941361936159149bde77286510359c745c\"" Jul 2 00:29:18.873693 systemd[1]: Started cri-containerd-1c995632b04eda5a395bba379032fd941361936159149bde77286510359c745c.scope - libcontainer container 1c995632b04eda5a395bba379032fd941361936159149bde77286510359c745c. 
Jul 2 00:29:18.896917 containerd[1454]: time="2024-07-02T00:29:18.896872881Z" level=info msg="StartContainer for \"1c995632b04eda5a395bba379032fd941361936159149bde77286510359c745c\" returns successfully" Jul 2 00:29:19.532581 kubelet[2512]: E0702 00:29:19.532540 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:19.534581 kubelet[2512]: E0702 00:29:19.534555 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:19.536148 containerd[1454]: time="2024-07-02T00:29:19.536108605Z" level=info msg="CreateContainer within sandbox \"0df9d06b1f029f1213216ef3c46d9fcbf14c71adc85ec434cc3782d5a6380513\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 00:29:19.664627 kubelet[2512]: I0702 00:29:19.664128 2512 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-rwlf2" podStartSLOduration=1.511956123 podCreationTimestamp="2024-07-02 00:29:07 +0000 UTC" firstStartedPulling="2024-07-02 00:29:07.679720025 +0000 UTC m=+13.544189572" lastFinishedPulling="2024-07-02 00:29:18.831857262 +0000 UTC m=+24.696326819" observedRunningTime="2024-07-02 00:29:19.663680554 +0000 UTC m=+25.528150111" watchObservedRunningTime="2024-07-02 00:29:19.66409337 +0000 UTC m=+25.528562928" Jul 2 00:29:19.682864 containerd[1454]: time="2024-07-02T00:29:19.682810389Z" level=info msg="CreateContainer within sandbox \"0df9d06b1f029f1213216ef3c46d9fcbf14c71adc85ec434cc3782d5a6380513\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9856683e6bdfa3230b1a0e8a0c4113f28ccad99e849f65df2da9555e64619d6c\"" Jul 2 00:29:19.683272 containerd[1454]: time="2024-07-02T00:29:19.683232142Z" level=info msg="StartContainer for 
\"9856683e6bdfa3230b1a0e8a0c4113f28ccad99e849f65df2da9555e64619d6c\"" Jul 2 00:29:19.708602 systemd[1]: Started cri-containerd-9856683e6bdfa3230b1a0e8a0c4113f28ccad99e849f65df2da9555e64619d6c.scope - libcontainer container 9856683e6bdfa3230b1a0e8a0c4113f28ccad99e849f65df2da9555e64619d6c. Jul 2 00:29:19.736510 containerd[1454]: time="2024-07-02T00:29:19.736436969Z" level=info msg="StartContainer for \"9856683e6bdfa3230b1a0e8a0c4113f28ccad99e849f65df2da9555e64619d6c\" returns successfully" Jul 2 00:29:19.736604 systemd[1]: cri-containerd-9856683e6bdfa3230b1a0e8a0c4113f28ccad99e849f65df2da9555e64619d6c.scope: Deactivated successfully. Jul 2 00:29:19.761621 containerd[1454]: time="2024-07-02T00:29:19.761563668Z" level=info msg="shim disconnected" id=9856683e6bdfa3230b1a0e8a0c4113f28ccad99e849f65df2da9555e64619d6c namespace=k8s.io Jul 2 00:29:19.761621 containerd[1454]: time="2024-07-02T00:29:19.761612680Z" level=warning msg="cleaning up after shim disconnected" id=9856683e6bdfa3230b1a0e8a0c4113f28ccad99e849f65df2da9555e64619d6c namespace=k8s.io Jul 2 00:29:19.761621 containerd[1454]: time="2024-07-02T00:29:19.761621647Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:29:19.829570 systemd[1]: run-containerd-runc-k8s.io-1c995632b04eda5a395bba379032fd941361936159149bde77286510359c745c-runc.2rK7gY.mount: Deactivated successfully. 
Jul 2 00:29:20.537378 kubelet[2512]: E0702 00:29:20.537325 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:20.537378 kubelet[2512]: E0702 00:29:20.537338 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:20.539667 containerd[1454]: time="2024-07-02T00:29:20.539624065Z" level=info msg="CreateContainer within sandbox \"0df9d06b1f029f1213216ef3c46d9fcbf14c71adc85ec434cc3782d5a6380513\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 00:29:20.560784 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount923046137.mount: Deactivated successfully. Jul 2 00:29:20.565305 containerd[1454]: time="2024-07-02T00:29:20.565253253Z" level=info msg="CreateContainer within sandbox \"0df9d06b1f029f1213216ef3c46d9fcbf14c71adc85ec434cc3782d5a6380513\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"04867993a6dc9ee1281b5e3777a88f6bdd7556213130a0349a8e5655cb17381a\"" Jul 2 00:29:20.566144 containerd[1454]: time="2024-07-02T00:29:20.565825038Z" level=info msg="StartContainer for \"04867993a6dc9ee1281b5e3777a88f6bdd7556213130a0349a8e5655cb17381a\"" Jul 2 00:29:20.601612 systemd[1]: Started cri-containerd-04867993a6dc9ee1281b5e3777a88f6bdd7556213130a0349a8e5655cb17381a.scope - libcontainer container 04867993a6dc9ee1281b5e3777a88f6bdd7556213130a0349a8e5655cb17381a. Jul 2 00:29:20.624274 systemd[1]: cri-containerd-04867993a6dc9ee1281b5e3777a88f6bdd7556213130a0349a8e5655cb17381a.scope: Deactivated successfully. 
Jul 2 00:29:20.633876 containerd[1454]: time="2024-07-02T00:29:20.633835233Z" level=info msg="StartContainer for \"04867993a6dc9ee1281b5e3777a88f6bdd7556213130a0349a8e5655cb17381a\" returns successfully" Jul 2 00:29:20.656551 containerd[1454]: time="2024-07-02T00:29:20.656485570Z" level=info msg="shim disconnected" id=04867993a6dc9ee1281b5e3777a88f6bdd7556213130a0349a8e5655cb17381a namespace=k8s.io Jul 2 00:29:20.656551 containerd[1454]: time="2024-07-02T00:29:20.656541967Z" level=warning msg="cleaning up after shim disconnected" id=04867993a6dc9ee1281b5e3777a88f6bdd7556213130a0349a8e5655cb17381a namespace=k8s.io Jul 2 00:29:20.656551 containerd[1454]: time="2024-07-02T00:29:20.656550363Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:29:20.829559 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04867993a6dc9ee1281b5e3777a88f6bdd7556213130a0349a8e5655cb17381a-rootfs.mount: Deactivated successfully. Jul 2 00:29:21.539881 kubelet[2512]: E0702 00:29:21.539852 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:21.542822 containerd[1454]: time="2024-07-02T00:29:21.542786282Z" level=info msg="CreateContainer within sandbox \"0df9d06b1f029f1213216ef3c46d9fcbf14c71adc85ec434cc3782d5a6380513\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 00:29:21.836309 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1851826095.mount: Deactivated successfully. 
Jul 2 00:29:21.846402 containerd[1454]: time="2024-07-02T00:29:21.846364775Z" level=info msg="CreateContainer within sandbox \"0df9d06b1f029f1213216ef3c46d9fcbf14c71adc85ec434cc3782d5a6380513\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7d287fbbce2b4281522404430c77c1fd67d1357119ca247153f0d846e44a9a4f\"" Jul 2 00:29:21.847187 containerd[1454]: time="2024-07-02T00:29:21.846906343Z" level=info msg="StartContainer for \"7d287fbbce2b4281522404430c77c1fd67d1357119ca247153f0d846e44a9a4f\"" Jul 2 00:29:21.878618 systemd[1]: Started cri-containerd-7d287fbbce2b4281522404430c77c1fd67d1357119ca247153f0d846e44a9a4f.scope - libcontainer container 7d287fbbce2b4281522404430c77c1fd67d1357119ca247153f0d846e44a9a4f. Jul 2 00:29:21.906209 containerd[1454]: time="2024-07-02T00:29:21.906151799Z" level=info msg="StartContainer for \"7d287fbbce2b4281522404430c77c1fd67d1357119ca247153f0d846e44a9a4f\" returns successfully" Jul 2 00:29:22.052045 kubelet[2512]: I0702 00:29:22.052008 2512 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jul 2 00:29:22.087703 kubelet[2512]: I0702 00:29:22.087577 2512 topology_manager.go:215] "Topology Admit Handler" podUID="b942848b-83d6-4387-b0ca-708cb226e77d" podNamespace="kube-system" podName="coredns-5dd5756b68-h8bk6" Jul 2 00:29:22.090958 kubelet[2512]: I0702 00:29:22.089860 2512 topology_manager.go:215] "Topology Admit Handler" podUID="abd2b5bb-51aa-4337-bc72-8227b596c2f4" podNamespace="kube-system" podName="coredns-5dd5756b68-hxgvg" Jul 2 00:29:22.100238 systemd[1]: Created slice kubepods-burstable-podb942848b_83d6_4387_b0ca_708cb226e77d.slice - libcontainer container kubepods-burstable-podb942848b_83d6_4387_b0ca_708cb226e77d.slice. Jul 2 00:29:22.107662 systemd[1]: Created slice kubepods-burstable-podabd2b5bb_51aa_4337_bc72_8227b596c2f4.slice - libcontainer container kubepods-burstable-podabd2b5bb_51aa_4337_bc72_8227b596c2f4.slice. 
Jul 2 00:29:22.189317 kubelet[2512]: I0702 00:29:22.189286 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhgns\" (UniqueName: \"kubernetes.io/projected/abd2b5bb-51aa-4337-bc72-8227b596c2f4-kube-api-access-lhgns\") pod \"coredns-5dd5756b68-hxgvg\" (UID: \"abd2b5bb-51aa-4337-bc72-8227b596c2f4\") " pod="kube-system/coredns-5dd5756b68-hxgvg" Jul 2 00:29:22.189463 kubelet[2512]: I0702 00:29:22.189342 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b942848b-83d6-4387-b0ca-708cb226e77d-config-volume\") pod \"coredns-5dd5756b68-h8bk6\" (UID: \"b942848b-83d6-4387-b0ca-708cb226e77d\") " pod="kube-system/coredns-5dd5756b68-h8bk6" Jul 2 00:29:22.189463 kubelet[2512]: I0702 00:29:22.189372 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/abd2b5bb-51aa-4337-bc72-8227b596c2f4-config-volume\") pod \"coredns-5dd5756b68-hxgvg\" (UID: \"abd2b5bb-51aa-4337-bc72-8227b596c2f4\") " pod="kube-system/coredns-5dd5756b68-hxgvg" Jul 2 00:29:22.189463 kubelet[2512]: I0702 00:29:22.189397 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvrzz\" (UniqueName: \"kubernetes.io/projected/b942848b-83d6-4387-b0ca-708cb226e77d-kube-api-access-jvrzz\") pod \"coredns-5dd5756b68-h8bk6\" (UID: \"b942848b-83d6-4387-b0ca-708cb226e77d\") " pod="kube-system/coredns-5dd5756b68-h8bk6" Jul 2 00:29:22.405695 kubelet[2512]: E0702 00:29:22.405354 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:22.410319 kubelet[2512]: E0702 00:29:22.410300 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:22.420316 containerd[1454]: time="2024-07-02T00:29:22.420277826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-h8bk6,Uid:b942848b-83d6-4387-b0ca-708cb226e77d,Namespace:kube-system,Attempt:0,}" Jul 2 00:29:22.420458 containerd[1454]: time="2024-07-02T00:29:22.420278036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-hxgvg,Uid:abd2b5bb-51aa-4337-bc72-8227b596c2f4,Namespace:kube-system,Attempt:0,}" Jul 2 00:29:22.544013 kubelet[2512]: E0702 00:29:22.543976 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:22.554386 kubelet[2512]: I0702 00:29:22.554314 2512 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-d7qzq" podStartSLOduration=6.223668429 podCreationTimestamp="2024-07-02 00:29:07 +0000 UTC" firstStartedPulling="2024-07-02 00:29:07.485685944 +0000 UTC m=+13.350155501" lastFinishedPulling="2024-07-02 00:29:16.8162948 +0000 UTC m=+22.680764357" observedRunningTime="2024-07-02 00:29:22.554088 +0000 UTC m=+28.418557577" watchObservedRunningTime="2024-07-02 00:29:22.554277285 +0000 UTC m=+28.418746842" Jul 2 00:29:22.837925 systemd[1]: run-containerd-runc-k8s.io-7d287fbbce2b4281522404430c77c1fd67d1357119ca247153f0d846e44a9a4f-runc.SnejsL.mount: Deactivated successfully. Jul 2 00:29:23.545943 kubelet[2512]: E0702 00:29:23.545913 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:23.760561 systemd[1]: Started sshd@7-10.0.0.153:22-10.0.0.1:41590.service - OpenSSH per-connection server daemon (10.0.0.1:41590). 
Jul 2 00:29:23.804095 sshd[3352]: Accepted publickey for core from 10.0.0.1 port 41590 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:29:23.805581 sshd[3352]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:29:23.809729 systemd-logind[1436]: New session 8 of user core. Jul 2 00:29:23.829708 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 2 00:29:23.989701 sshd[3352]: pam_unix(sshd:session): session closed for user core Jul 2 00:29:23.993901 systemd[1]: sshd@7-10.0.0.153:22-10.0.0.1:41590.service: Deactivated successfully. Jul 2 00:29:23.995841 systemd[1]: session-8.scope: Deactivated successfully. Jul 2 00:29:23.996435 systemd-logind[1436]: Session 8 logged out. Waiting for processes to exit. Jul 2 00:29:23.997329 systemd-logind[1436]: Removed session 8. Jul 2 00:29:24.069141 systemd-networkd[1370]: cilium_host: Link UP Jul 2 00:29:24.069331 systemd-networkd[1370]: cilium_net: Link UP Jul 2 00:29:24.069335 systemd-networkd[1370]: cilium_net: Gained carrier Jul 2 00:29:24.069534 systemd-networkd[1370]: cilium_host: Gained carrier Jul 2 00:29:24.069773 systemd-networkd[1370]: cilium_host: Gained IPv6LL Jul 2 00:29:24.115664 systemd-networkd[1370]: cilium_net: Gained IPv6LL Jul 2 00:29:24.181307 systemd-networkd[1370]: cilium_vxlan: Link UP Jul 2 00:29:24.181512 systemd-networkd[1370]: cilium_vxlan: Gained carrier Jul 2 00:29:24.397510 kernel: NET: Registered PF_ALG protocol family Jul 2 00:29:24.547746 kubelet[2512]: E0702 00:29:24.547713 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:25.042493 systemd-networkd[1370]: lxc_health: Link UP Jul 2 00:29:25.048859 systemd-networkd[1370]: lxc_health: Gained carrier Jul 2 00:29:25.415580 systemd-networkd[1370]: cilium_vxlan: Gained IPv6LL Jul 2 00:29:25.472605 systemd-networkd[1370]: lxc755dc8d099c9: Link UP 
Jul 2 00:29:25.480540 kernel: eth0: renamed from tmpbbc9b Jul 2 00:29:25.489276 systemd-networkd[1370]: lxcc3248cf5127e: Link UP Jul 2 00:29:25.498504 kernel: eth0: renamed from tmp06afa Jul 2 00:29:25.502483 systemd-networkd[1370]: lxc755dc8d099c9: Gained carrier Jul 2 00:29:25.503645 systemd-networkd[1370]: lxcc3248cf5127e: Gained carrier Jul 2 00:29:25.549652 kubelet[2512]: E0702 00:29:25.549615 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:26.179652 systemd-networkd[1370]: lxc_health: Gained IPv6LL Jul 2 00:29:26.551372 kubelet[2512]: E0702 00:29:26.551345 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:26.947578 systemd-networkd[1370]: lxcc3248cf5127e: Gained IPv6LL Jul 2 00:29:27.011575 systemd-networkd[1370]: lxc755dc8d099c9: Gained IPv6LL Jul 2 00:29:27.553215 kubelet[2512]: E0702 00:29:27.553185 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:28.854306 containerd[1454]: time="2024-07-02T00:29:28.854220154Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:29:28.854306 containerd[1454]: time="2024-07-02T00:29:28.854266301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:29:28.854768 containerd[1454]: time="2024-07-02T00:29:28.854278514Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:29:28.854768 containerd[1454]: time="2024-07-02T00:29:28.854287581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:29:28.857551 containerd[1454]: time="2024-07-02T00:29:28.855689905Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:29:28.857551 containerd[1454]: time="2024-07-02T00:29:28.855748765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:29:28.857551 containerd[1454]: time="2024-07-02T00:29:28.855760988Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:29:28.857551 containerd[1454]: time="2024-07-02T00:29:28.855771958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:29:28.885699 systemd[1]: Started cri-containerd-06afa53aeb6ec8c8ed261be624526152a9d9498a0ec27be6b7f4bb30ba073e4a.scope - libcontainer container 06afa53aeb6ec8c8ed261be624526152a9d9498a0ec27be6b7f4bb30ba073e4a. Jul 2 00:29:28.887288 systemd[1]: Started cri-containerd-bbc9b482e43e430e9ae3a40b0f7955ed622d3887c573c67f2da29d02b880765a.scope - libcontainer container bbc9b482e43e430e9ae3a40b0f7955ed622d3887c573c67f2da29d02b880765a. 
Jul 2 00:29:28.897664 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 00:29:28.899202 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 00:29:28.922902 containerd[1454]: time="2024-07-02T00:29:28.922861091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-hxgvg,Uid:abd2b5bb-51aa-4337-bc72-8227b596c2f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"06afa53aeb6ec8c8ed261be624526152a9d9498a0ec27be6b7f4bb30ba073e4a\"" Jul 2 00:29:28.923830 kubelet[2512]: E0702 00:29:28.923801 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:28.929722 containerd[1454]: time="2024-07-02T00:29:28.929669259Z" level=info msg="CreateContainer within sandbox \"06afa53aeb6ec8c8ed261be624526152a9d9498a0ec27be6b7f4bb30ba073e4a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 00:29:28.930547 containerd[1454]: time="2024-07-02T00:29:28.930515709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-h8bk6,Uid:b942848b-83d6-4387-b0ca-708cb226e77d,Namespace:kube-system,Attempt:0,} returns sandbox id \"bbc9b482e43e430e9ae3a40b0f7955ed622d3887c573c67f2da29d02b880765a\"" Jul 2 00:29:28.931660 kubelet[2512]: E0702 00:29:28.931635 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:28.934224 containerd[1454]: time="2024-07-02T00:29:28.933969597Z" level=info msg="CreateContainer within sandbox \"bbc9b482e43e430e9ae3a40b0f7955ed622d3887c573c67f2da29d02b880765a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 00:29:28.950356 containerd[1454]: time="2024-07-02T00:29:28.950303616Z" 
level=info msg="CreateContainer within sandbox \"bbc9b482e43e430e9ae3a40b0f7955ed622d3887c573c67f2da29d02b880765a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c2dd77f3e6102575e88128e458900a17cec3a7a4d1239211f8f6d174c1ab590a\"" Jul 2 00:29:28.950865 containerd[1454]: time="2024-07-02T00:29:28.950827360Z" level=info msg="StartContainer for \"c2dd77f3e6102575e88128e458900a17cec3a7a4d1239211f8f6d174c1ab590a\"" Jul 2 00:29:28.960565 containerd[1454]: time="2024-07-02T00:29:28.960515948Z" level=info msg="CreateContainer within sandbox \"06afa53aeb6ec8c8ed261be624526152a9d9498a0ec27be6b7f4bb30ba073e4a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a414c52f762317f0f6018f84ae93ca3b303c4a84d6fd79a9e9d5f0bc139e2223\"" Jul 2 00:29:28.960935 containerd[1454]: time="2024-07-02T00:29:28.960909157Z" level=info msg="StartContainer for \"a414c52f762317f0f6018f84ae93ca3b303c4a84d6fd79a9e9d5f0bc139e2223\"" Jul 2 00:29:28.976606 systemd[1]: Started cri-containerd-c2dd77f3e6102575e88128e458900a17cec3a7a4d1239211f8f6d174c1ab590a.scope - libcontainer container c2dd77f3e6102575e88128e458900a17cec3a7a4d1239211f8f6d174c1ab590a. Jul 2 00:29:28.991608 systemd[1]: Started cri-containerd-a414c52f762317f0f6018f84ae93ca3b303c4a84d6fd79a9e9d5f0bc139e2223.scope - libcontainer container a414c52f762317f0f6018f84ae93ca3b303c4a84d6fd79a9e9d5f0bc139e2223. Jul 2 00:29:28.999846 systemd[1]: Started sshd@8-10.0.0.153:22-10.0.0.1:60412.service - OpenSSH per-connection server daemon (10.0.0.1:60412). 
Jul 2 00:29:29.018245 containerd[1454]: time="2024-07-02T00:29:29.018067794Z" level=info msg="StartContainer for \"c2dd77f3e6102575e88128e458900a17cec3a7a4d1239211f8f6d174c1ab590a\" returns successfully" Jul 2 00:29:29.030896 containerd[1454]: time="2024-07-02T00:29:29.030851905Z" level=info msg="StartContainer for \"a414c52f762317f0f6018f84ae93ca3b303c4a84d6fd79a9e9d5f0bc139e2223\" returns successfully" Jul 2 00:29:29.046851 sshd[3883]: Accepted publickey for core from 10.0.0.1 port 60412 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:29:29.048648 sshd[3883]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:29:29.056428 systemd-logind[1436]: New session 9 of user core. Jul 2 00:29:29.064694 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 2 00:29:29.259430 sshd[3883]: pam_unix(sshd:session): session closed for user core Jul 2 00:29:29.263414 systemd[1]: sshd@8-10.0.0.153:22-10.0.0.1:60412.service: Deactivated successfully. Jul 2 00:29:29.265524 systemd[1]: session-9.scope: Deactivated successfully. Jul 2 00:29:29.266261 systemd-logind[1436]: Session 9 logged out. Waiting for processes to exit. Jul 2 00:29:29.267435 systemd-logind[1436]: Removed session 9. 
Jul 2 00:29:29.558432 kubelet[2512]: E0702 00:29:29.558262 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:29.560767 kubelet[2512]: E0702 00:29:29.560710 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:29.567674 kubelet[2512]: I0702 00:29:29.567635 2512 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-hxgvg" podStartSLOduration=22.567583037 podCreationTimestamp="2024-07-02 00:29:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:29:29.566501296 +0000 UTC m=+35.430970873" watchObservedRunningTime="2024-07-02 00:29:29.567583037 +0000 UTC m=+35.432052594" Jul 2 00:29:29.576131 kubelet[2512]: I0702 00:29:29.575379 2512 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-h8bk6" podStartSLOduration=22.575330828 podCreationTimestamp="2024-07-02 00:29:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:29:29.575309899 +0000 UTC m=+35.439779456" watchObservedRunningTime="2024-07-02 00:29:29.575330828 +0000 UTC m=+35.439800396" Jul 2 00:29:30.562750 kubelet[2512]: E0702 00:29:30.562709 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:30.562750 kubelet[2512]: E0702 00:29:30.562756 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 
2 00:29:31.564255 kubelet[2512]: E0702 00:29:31.564188 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:31.564668 kubelet[2512]: E0702 00:29:31.564294 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:34.272552 systemd[1]: Started sshd@9-10.0.0.153:22-10.0.0.1:60422.service - OpenSSH per-connection server daemon (10.0.0.1:60422). Jul 2 00:29:34.309964 sshd[3934]: Accepted publickey for core from 10.0.0.1 port 60422 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:29:34.311328 sshd[3934]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:29:34.315040 systemd-logind[1436]: New session 10 of user core. Jul 2 00:29:34.324587 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 2 00:29:34.429950 sshd[3934]: pam_unix(sshd:session): session closed for user core Jul 2 00:29:34.433679 systemd[1]: sshd@9-10.0.0.153:22-10.0.0.1:60422.service: Deactivated successfully. Jul 2 00:29:34.435670 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 00:29:34.436281 systemd-logind[1436]: Session 10 logged out. Waiting for processes to exit. Jul 2 00:29:34.437091 systemd-logind[1436]: Removed session 10. Jul 2 00:29:39.444083 systemd[1]: Started sshd@10-10.0.0.153:22-10.0.0.1:59452.service - OpenSSH per-connection server daemon (10.0.0.1:59452). Jul 2 00:29:39.483639 sshd[3955]: Accepted publickey for core from 10.0.0.1 port 59452 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:29:39.485174 sshd[3955]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:29:39.489106 systemd-logind[1436]: New session 11 of user core. 
Jul 2 00:29:39.494655 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 2 00:29:39.597987 sshd[3955]: pam_unix(sshd:session): session closed for user core Jul 2 00:29:39.608377 systemd[1]: sshd@10-10.0.0.153:22-10.0.0.1:59452.service: Deactivated successfully. Jul 2 00:29:39.610213 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 00:29:39.611770 systemd-logind[1436]: Session 11 logged out. Waiting for processes to exit. Jul 2 00:29:39.613067 systemd[1]: Started sshd@11-10.0.0.153:22-10.0.0.1:59468.service - OpenSSH per-connection server daemon (10.0.0.1:59468). Jul 2 00:29:39.613976 systemd-logind[1436]: Removed session 11. Jul 2 00:29:39.649231 sshd[3970]: Accepted publickey for core from 10.0.0.1 port 59468 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:29:39.650613 sshd[3970]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:29:39.654221 systemd-logind[1436]: New session 12 of user core. Jul 2 00:29:39.665591 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 2 00:29:40.327575 sshd[3970]: pam_unix(sshd:session): session closed for user core Jul 2 00:29:40.341284 systemd[1]: sshd@11-10.0.0.153:22-10.0.0.1:59468.service: Deactivated successfully. Jul 2 00:29:40.344156 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 00:29:40.345743 systemd-logind[1436]: Session 12 logged out. Waiting for processes to exit. Jul 2 00:29:40.358778 systemd[1]: Started sshd@12-10.0.0.153:22-10.0.0.1:59478.service - OpenSSH per-connection server daemon (10.0.0.1:59478). Jul 2 00:29:40.359751 systemd-logind[1436]: Removed session 12. Jul 2 00:29:40.394065 sshd[3983]: Accepted publickey for core from 10.0.0.1 port 59478 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:29:40.395427 sshd[3983]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:29:40.399266 systemd-logind[1436]: New session 13 of user core. 
Jul 2 00:29:40.411593 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 2 00:29:40.522270 sshd[3983]: pam_unix(sshd:session): session closed for user core Jul 2 00:29:40.526003 systemd[1]: sshd@12-10.0.0.153:22-10.0.0.1:59478.service: Deactivated successfully. Jul 2 00:29:40.528047 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 00:29:40.528782 systemd-logind[1436]: Session 13 logged out. Waiting for processes to exit. Jul 2 00:29:40.529691 systemd-logind[1436]: Removed session 13. Jul 2 00:29:45.535624 systemd[1]: Started sshd@13-10.0.0.153:22-10.0.0.1:59490.service - OpenSSH per-connection server daemon (10.0.0.1:59490). Jul 2 00:29:45.572439 sshd[3999]: Accepted publickey for core from 10.0.0.1 port 59490 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:29:45.573881 sshd[3999]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:29:45.577486 systemd-logind[1436]: New session 14 of user core. Jul 2 00:29:45.583589 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 2 00:29:45.682716 sshd[3999]: pam_unix(sshd:session): session closed for user core Jul 2 00:29:45.686160 systemd[1]: sshd@13-10.0.0.153:22-10.0.0.1:59490.service: Deactivated successfully. Jul 2 00:29:45.688140 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 00:29:45.688753 systemd-logind[1436]: Session 14 logged out. Waiting for processes to exit. Jul 2 00:29:45.689671 systemd-logind[1436]: Removed session 14. Jul 2 00:29:50.695427 systemd[1]: Started sshd@14-10.0.0.153:22-10.0.0.1:60694.service - OpenSSH per-connection server daemon (10.0.0.1:60694). Jul 2 00:29:50.731421 sshd[4014]: Accepted publickey for core from 10.0.0.1 port 60694 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:29:50.732712 sshd[4014]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:29:50.736344 systemd-logind[1436]: New session 15 of user core. 
Jul 2 00:29:50.746604 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 2 00:29:50.856204 sshd[4014]: pam_unix(sshd:session): session closed for user core
Jul 2 00:29:50.867439 systemd[1]: sshd@14-10.0.0.153:22-10.0.0.1:60694.service: Deactivated successfully.
Jul 2 00:29:50.869465 systemd[1]: session-15.scope: Deactivated successfully.
Jul 2 00:29:50.871326 systemd-logind[1436]: Session 15 logged out. Waiting for processes to exit.
Jul 2 00:29:50.880913 systemd[1]: Started sshd@15-10.0.0.153:22-10.0.0.1:60696.service - OpenSSH per-connection server daemon (10.0.0.1:60696).
Jul 2 00:29:50.882035 systemd-logind[1436]: Removed session 15.
Jul 2 00:29:50.915088 sshd[4028]: Accepted publickey for core from 10.0.0.1 port 60696 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok
Jul 2 00:29:50.916713 sshd[4028]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:29:50.921184 systemd-logind[1436]: New session 16 of user core.
Jul 2 00:29:50.930619 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 2 00:29:51.106697 sshd[4028]: pam_unix(sshd:session): session closed for user core
Jul 2 00:29:51.117255 systemd[1]: sshd@15-10.0.0.153:22-10.0.0.1:60696.service: Deactivated successfully.
Jul 2 00:29:51.119291 systemd[1]: session-16.scope: Deactivated successfully.
Jul 2 00:29:51.120813 systemd-logind[1436]: Session 16 logged out. Waiting for processes to exit.
Jul 2 00:29:51.122055 systemd[1]: Started sshd@16-10.0.0.153:22-10.0.0.1:60712.service - OpenSSH per-connection server daemon (10.0.0.1:60712).
Jul 2 00:29:51.122871 systemd-logind[1436]: Removed session 16.
Jul 2 00:29:51.171837 sshd[4040]: Accepted publickey for core from 10.0.0.1 port 60712 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok
Jul 2 00:29:51.173194 sshd[4040]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:29:51.177023 systemd-logind[1436]: New session 17 of user core.
Jul 2 00:29:51.192593 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 2 00:29:51.993804 sshd[4040]: pam_unix(sshd:session): session closed for user core
Jul 2 00:29:52.005677 systemd[1]: sshd@16-10.0.0.153:22-10.0.0.1:60712.service: Deactivated successfully.
Jul 2 00:29:52.007821 systemd[1]: session-17.scope: Deactivated successfully.
Jul 2 00:29:52.009658 systemd-logind[1436]: Session 17 logged out. Waiting for processes to exit.
Jul 2 00:29:52.019958 systemd[1]: Started sshd@17-10.0.0.153:22-10.0.0.1:60716.service - OpenSSH per-connection server daemon (10.0.0.1:60716).
Jul 2 00:29:52.020875 systemd-logind[1436]: Removed session 17.
Jul 2 00:29:52.052500 sshd[4060]: Accepted publickey for core from 10.0.0.1 port 60716 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok
Jul 2 00:29:52.053888 sshd[4060]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:29:52.057801 systemd-logind[1436]: New session 18 of user core.
Jul 2 00:29:52.067633 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 2 00:29:52.501999 sshd[4060]: pam_unix(sshd:session): session closed for user core
Jul 2 00:29:52.513405 systemd[1]: sshd@17-10.0.0.153:22-10.0.0.1:60716.service: Deactivated successfully.
Jul 2 00:29:52.515188 systemd[1]: session-18.scope: Deactivated successfully.
Jul 2 00:29:52.516908 systemd-logind[1436]: Session 18 logged out. Waiting for processes to exit.
Jul 2 00:29:52.518209 systemd[1]: Started sshd@18-10.0.0.153:22-10.0.0.1:60730.service - OpenSSH per-connection server daemon (10.0.0.1:60730).
Jul 2 00:29:52.519011 systemd-logind[1436]: Removed session 18.
Jul 2 00:29:52.559515 sshd[4073]: Accepted publickey for core from 10.0.0.1 port 60730 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok
Jul 2 00:29:52.560983 sshd[4073]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:29:52.564974 systemd-logind[1436]: New session 19 of user core.
Jul 2 00:29:52.575589 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 2 00:29:52.683689 sshd[4073]: pam_unix(sshd:session): session closed for user core
Jul 2 00:29:52.687984 systemd[1]: sshd@18-10.0.0.153:22-10.0.0.1:60730.service: Deactivated successfully.
Jul 2 00:29:52.690123 systemd[1]: session-19.scope: Deactivated successfully.
Jul 2 00:29:52.690844 systemd-logind[1436]: Session 19 logged out. Waiting for processes to exit.
Jul 2 00:29:52.691658 systemd-logind[1436]: Removed session 19.
Jul 2 00:29:57.695416 systemd[1]: Started sshd@19-10.0.0.153:22-10.0.0.1:60742.service - OpenSSH per-connection server daemon (10.0.0.1:60742).
Jul 2 00:29:57.731902 sshd[4090]: Accepted publickey for core from 10.0.0.1 port 60742 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok
Jul 2 00:29:57.733266 sshd[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:29:57.736884 systemd-logind[1436]: New session 20 of user core.
Jul 2 00:29:57.742675 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 2 00:29:57.845148 sshd[4090]: pam_unix(sshd:session): session closed for user core
Jul 2 00:29:57.849328 systemd[1]: sshd@19-10.0.0.153:22-10.0.0.1:60742.service: Deactivated successfully.
Jul 2 00:29:57.851775 systemd[1]: session-20.scope: Deactivated successfully.
Jul 2 00:29:57.852483 systemd-logind[1436]: Session 20 logged out. Waiting for processes to exit.
Jul 2 00:29:57.853303 systemd-logind[1436]: Removed session 20.
Jul 2 00:30:02.860447 systemd[1]: Started sshd@20-10.0.0.153:22-10.0.0.1:55196.service - OpenSSH per-connection server daemon (10.0.0.1:55196).
Jul 2 00:30:02.897661 sshd[4107]: Accepted publickey for core from 10.0.0.1 port 55196 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok
Jul 2 00:30:02.899187 sshd[4107]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:30:02.902925 systemd-logind[1436]: New session 21 of user core.
Jul 2 00:30:02.910593 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 2 00:30:03.011731 sshd[4107]: pam_unix(sshd:session): session closed for user core
Jul 2 00:30:03.015715 systemd[1]: sshd@20-10.0.0.153:22-10.0.0.1:55196.service: Deactivated successfully.
Jul 2 00:30:03.017724 systemd[1]: session-21.scope: Deactivated successfully.
Jul 2 00:30:03.018340 systemd-logind[1436]: Session 21 logged out. Waiting for processes to exit.
Jul 2 00:30:03.019130 systemd-logind[1436]: Removed session 21.
Jul 2 00:30:04.221941 kubelet[2512]: E0702 00:30:04.221890 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:30:08.031338 systemd[1]: Started sshd@21-10.0.0.153:22-10.0.0.1:54382.service - OpenSSH per-connection server daemon (10.0.0.1:54382).
Jul 2 00:30:08.067623 sshd[4124]: Accepted publickey for core from 10.0.0.1 port 54382 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok
Jul 2 00:30:08.068912 sshd[4124]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:30:08.072464 systemd-logind[1436]: New session 22 of user core.
Jul 2 00:30:08.088588 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 2 00:30:08.189847 sshd[4124]: pam_unix(sshd:session): session closed for user core
Jul 2 00:30:08.194062 systemd[1]: sshd@21-10.0.0.153:22-10.0.0.1:54382.service: Deactivated successfully.
Jul 2 00:30:08.196171 systemd[1]: session-22.scope: Deactivated successfully.
Jul 2 00:30:08.196868 systemd-logind[1436]: Session 22 logged out. Waiting for processes to exit.
Jul 2 00:30:08.197724 systemd-logind[1436]: Removed session 22.
Jul 2 00:30:10.221869 kubelet[2512]: E0702 00:30:10.221821 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:30:13.216845 systemd[1]: Started sshd@22-10.0.0.153:22-10.0.0.1:54390.service - OpenSSH per-connection server daemon (10.0.0.1:54390).
Jul 2 00:30:13.253020 sshd[4138]: Accepted publickey for core from 10.0.0.1 port 54390 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok
Jul 2 00:30:13.254505 sshd[4138]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:30:13.258328 systemd-logind[1436]: New session 23 of user core.
Jul 2 00:30:13.273659 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 2 00:30:13.373142 sshd[4138]: pam_unix(sshd:session): session closed for user core
Jul 2 00:30:13.383009 systemd[1]: sshd@22-10.0.0.153:22-10.0.0.1:54390.service: Deactivated successfully.
Jul 2 00:30:13.384595 systemd[1]: session-23.scope: Deactivated successfully.
Jul 2 00:30:13.386063 systemd-logind[1436]: Session 23 logged out. Waiting for processes to exit.
Jul 2 00:30:13.390980 systemd[1]: Started sshd@23-10.0.0.153:22-10.0.0.1:54402.service - OpenSSH per-connection server daemon (10.0.0.1:54402).
Jul 2 00:30:13.391693 systemd-logind[1436]: Removed session 23.
Jul 2 00:30:13.424222 sshd[4153]: Accepted publickey for core from 10.0.0.1 port 54402 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok
Jul 2 00:30:13.425493 sshd[4153]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:30:13.429216 systemd-logind[1436]: New session 24 of user core.
Jul 2 00:30:13.435585 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 2 00:30:14.819973 containerd[1454]: time="2024-07-02T00:30:14.819882767Z" level=info msg="StopContainer for \"1c995632b04eda5a395bba379032fd941361936159149bde77286510359c745c\" with timeout 30 (s)"
Jul 2 00:30:14.820541 containerd[1454]: time="2024-07-02T00:30:14.820508536Z" level=info msg="Stop container \"1c995632b04eda5a395bba379032fd941361936159149bde77286510359c745c\" with signal terminated"
Jul 2 00:30:14.832959 systemd[1]: cri-containerd-1c995632b04eda5a395bba379032fd941361936159149bde77286510359c745c.scope: Deactivated successfully.
Jul 2 00:30:14.841786 containerd[1454]: time="2024-07-02T00:30:14.841733459Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 00:30:14.849198 containerd[1454]: time="2024-07-02T00:30:14.849166197Z" level=info msg="StopContainer for \"7d287fbbce2b4281522404430c77c1fd67d1357119ca247153f0d846e44a9a4f\" with timeout 2 (s)"
Jul 2 00:30:14.849822 containerd[1454]: time="2024-07-02T00:30:14.849804329Z" level=info msg="Stop container \"7d287fbbce2b4281522404430c77c1fd67d1357119ca247153f0d846e44a9a4f\" with signal terminated"
Jul 2 00:30:14.855609 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c995632b04eda5a395bba379032fd941361936159149bde77286510359c745c-rootfs.mount: Deactivated successfully.
Jul 2 00:30:14.856289 systemd-networkd[1370]: lxc_health: Link DOWN
Jul 2 00:30:14.856294 systemd-networkd[1370]: lxc_health: Lost carrier
Jul 2 00:30:14.862168 containerd[1454]: time="2024-07-02T00:30:14.861980853Z" level=info msg="shim disconnected" id=1c995632b04eda5a395bba379032fd941361936159149bde77286510359c745c namespace=k8s.io
Jul 2 00:30:14.862168 containerd[1454]: time="2024-07-02T00:30:14.862029906Z" level=warning msg="cleaning up after shim disconnected" id=1c995632b04eda5a395bba379032fd941361936159149bde77286510359c745c namespace=k8s.io
Jul 2 00:30:14.862168 containerd[1454]: time="2024-07-02T00:30:14.862039735Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:30:14.878908 containerd[1454]: time="2024-07-02T00:30:14.878852461Z" level=info msg="StopContainer for \"1c995632b04eda5a395bba379032fd941361936159149bde77286510359c745c\" returns successfully"
Jul 2 00:30:14.879894 systemd[1]: cri-containerd-7d287fbbce2b4281522404430c77c1fd67d1357119ca247153f0d846e44a9a4f.scope: Deactivated successfully.
Jul 2 00:30:14.880323 systemd[1]: cri-containerd-7d287fbbce2b4281522404430c77c1fd67d1357119ca247153f0d846e44a9a4f.scope: Consumed 6.673s CPU time.
Jul 2 00:30:14.882431 containerd[1454]: time="2024-07-02T00:30:14.882358425Z" level=info msg="StopPodSandbox for \"2dad5f2899b9c7c8dbdc4560d7cf0dc4871d58245b2b27fb60fd267c47d3262b\""
Jul 2 00:30:14.897199 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d287fbbce2b4281522404430c77c1fd67d1357119ca247153f0d846e44a9a4f-rootfs.mount: Deactivated successfully.
Jul 2 00:30:14.898017 containerd[1454]: time="2024-07-02T00:30:14.882399173Z" level=info msg="Container to stop \"1c995632b04eda5a395bba379032fd941361936159149bde77286510359c745c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:30:14.899945 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2dad5f2899b9c7c8dbdc4560d7cf0dc4871d58245b2b27fb60fd267c47d3262b-shm.mount: Deactivated successfully.
Jul 2 00:30:14.902746 containerd[1454]: time="2024-07-02T00:30:14.902665583Z" level=info msg="shim disconnected" id=7d287fbbce2b4281522404430c77c1fd67d1357119ca247153f0d846e44a9a4f namespace=k8s.io
Jul 2 00:30:14.902746 containerd[1454]: time="2024-07-02T00:30:14.902716870Z" level=warning msg="cleaning up after shim disconnected" id=7d287fbbce2b4281522404430c77c1fd67d1357119ca247153f0d846e44a9a4f namespace=k8s.io
Jul 2 00:30:14.902746 containerd[1454]: time="2024-07-02T00:30:14.902725467Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:30:14.905843 systemd[1]: cri-containerd-2dad5f2899b9c7c8dbdc4560d7cf0dc4871d58245b2b27fb60fd267c47d3262b.scope: Deactivated successfully.
Jul 2 00:30:14.916666 containerd[1454]: time="2024-07-02T00:30:14.916607663Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:30:14Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jul 2 00:30:14.920706 containerd[1454]: time="2024-07-02T00:30:14.920670766Z" level=info msg="StopContainer for \"7d287fbbce2b4281522404430c77c1fd67d1357119ca247153f0d846e44a9a4f\" returns successfully"
Jul 2 00:30:14.921595 containerd[1454]: time="2024-07-02T00:30:14.921494251Z" level=info msg="StopPodSandbox for \"0df9d06b1f029f1213216ef3c46d9fcbf14c71adc85ec434cc3782d5a6380513\""
Jul 2 00:30:14.921595 containerd[1454]: time="2024-07-02T00:30:14.921528426Z" level=info msg="Container to stop \"9856683e6bdfa3230b1a0e8a0c4113f28ccad99e849f65df2da9555e64619d6c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:30:14.921684 containerd[1454]: time="2024-07-02T00:30:14.921570266Z" level=info msg="Container to stop \"7d287fbbce2b4281522404430c77c1fd67d1357119ca247153f0d846e44a9a4f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:30:14.921684 containerd[1454]: time="2024-07-02T00:30:14.921632534Z" level=info msg="Container to stop \"4d3edf7885fda8294220276dac2cdcae820a4eedfc3b3b03a849523b5468c80f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:30:14.921684 containerd[1454]: time="2024-07-02T00:30:14.921646300Z" level=info msg="Container to stop \"04867993a6dc9ee1281b5e3777a88f6bdd7556213130a0349a8e5655cb17381a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:30:14.921684 containerd[1454]: time="2024-07-02T00:30:14.921658283Z" level=info msg="Container to stop \"6ed2ba8c0d35b98a9f8166c2bf00e98fc1e91b9809e3db491ec9b53cd90527a0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:30:14.924496 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0df9d06b1f029f1213216ef3c46d9fcbf14c71adc85ec434cc3782d5a6380513-shm.mount: Deactivated successfully.
Jul 2 00:30:14.930446 systemd[1]: cri-containerd-0df9d06b1f029f1213216ef3c46d9fcbf14c71adc85ec434cc3782d5a6380513.scope: Deactivated successfully.
Jul 2 00:30:14.934102 containerd[1454]: time="2024-07-02T00:30:14.934032202Z" level=info msg="shim disconnected" id=2dad5f2899b9c7c8dbdc4560d7cf0dc4871d58245b2b27fb60fd267c47d3262b namespace=k8s.io
Jul 2 00:30:14.934102 containerd[1454]: time="2024-07-02T00:30:14.934081967Z" level=warning msg="cleaning up after shim disconnected" id=2dad5f2899b9c7c8dbdc4560d7cf0dc4871d58245b2b27fb60fd267c47d3262b namespace=k8s.io
Jul 2 00:30:14.934102 containerd[1454]: time="2024-07-02T00:30:14.934098638Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:30:14.950538 containerd[1454]: time="2024-07-02T00:30:14.950464214Z" level=info msg="TearDown network for sandbox \"2dad5f2899b9c7c8dbdc4560d7cf0dc4871d58245b2b27fb60fd267c47d3262b\" successfully"
Jul 2 00:30:14.950538 containerd[1454]: time="2024-07-02T00:30:14.950526984Z" level=info msg="StopPodSandbox for \"2dad5f2899b9c7c8dbdc4560d7cf0dc4871d58245b2b27fb60fd267c47d3262b\" returns successfully"
Jul 2 00:30:14.962201 containerd[1454]: time="2024-07-02T00:30:14.962120219Z" level=info msg="shim disconnected" id=0df9d06b1f029f1213216ef3c46d9fcbf14c71adc85ec434cc3782d5a6380513 namespace=k8s.io
Jul 2 00:30:14.962201 containerd[1454]: time="2024-07-02T00:30:14.962174683Z" level=warning msg="cleaning up after shim disconnected" id=0df9d06b1f029f1213216ef3c46d9fcbf14c71adc85ec434cc3782d5a6380513 namespace=k8s.io
Jul 2 00:30:14.962201 containerd[1454]: time="2024-07-02T00:30:14.962183780Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:30:14.975365 containerd[1454]: time="2024-07-02T00:30:14.975315649Z" level=info msg="TearDown network for sandbox \"0df9d06b1f029f1213216ef3c46d9fcbf14c71adc85ec434cc3782d5a6380513\" successfully"
Jul 2 00:30:14.975365 containerd[1454]: time="2024-07-02T00:30:14.975355706Z" level=info msg="StopPodSandbox for \"0df9d06b1f029f1213216ef3c46d9fcbf14c71adc85ec434cc3782d5a6380513\" returns successfully"
Jul 2 00:30:15.065446 kubelet[2512]: I0702 00:30:15.065385 2512 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/306175a9-b679-4494-a474-96d766f9c018-cilium-cgroup\") pod \"306175a9-b679-4494-a474-96d766f9c018\" (UID: \"306175a9-b679-4494-a474-96d766f9c018\") "
Jul 2 00:30:15.065446 kubelet[2512]: I0702 00:30:15.065426 2512 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/306175a9-b679-4494-a474-96d766f9c018-host-proc-sys-kernel\") pod \"306175a9-b679-4494-a474-96d766f9c018\" (UID: \"306175a9-b679-4494-a474-96d766f9c018\") "
Jul 2 00:30:15.065446 kubelet[2512]: I0702 00:30:15.065451 2512 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/306175a9-b679-4494-a474-96d766f9c018-bpf-maps\") pod \"306175a9-b679-4494-a474-96d766f9c018\" (UID: \"306175a9-b679-4494-a474-96d766f9c018\") "
Jul 2 00:30:15.065446 kubelet[2512]: I0702 00:30:15.065491 2512 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/306175a9-b679-4494-a474-96d766f9c018-host-proc-sys-net\") pod \"306175a9-b679-4494-a474-96d766f9c018\" (UID: \"306175a9-b679-4494-a474-96d766f9c018\") "
Jul 2 00:30:15.066082 kubelet[2512]: I0702 00:30:15.065521 2512 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ce9a8c83-d186-4579-b3f7-034bbcbbe538-cilium-config-path\") pod \"ce9a8c83-d186-4579-b3f7-034bbcbbe538\" (UID: \"ce9a8c83-d186-4579-b3f7-034bbcbbe538\") "
Jul 2 00:30:15.066082 kubelet[2512]: I0702 00:30:15.065530 2512 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/306175a9-b679-4494-a474-96d766f9c018-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "306175a9-b679-4494-a474-96d766f9c018" (UID: "306175a9-b679-4494-a474-96d766f9c018"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:30:15.066082 kubelet[2512]: I0702 00:30:15.065549 2512 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/306175a9-b679-4494-a474-96d766f9c018-clustermesh-secrets\") pod \"306175a9-b679-4494-a474-96d766f9c018\" (UID: \"306175a9-b679-4494-a474-96d766f9c018\") "
Jul 2 00:30:15.066082 kubelet[2512]: I0702 00:30:15.065576 2512 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/306175a9-b679-4494-a474-96d766f9c018-hubble-tls\") pod \"306175a9-b679-4494-a474-96d766f9c018\" (UID: \"306175a9-b679-4494-a474-96d766f9c018\") "
Jul 2 00:30:15.066082 kubelet[2512]: I0702 00:30:15.065584 2512 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/306175a9-b679-4494-a474-96d766f9c018-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "306175a9-b679-4494-a474-96d766f9c018" (UID: "306175a9-b679-4494-a474-96d766f9c018"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:30:15.066271 kubelet[2512]: I0702 00:30:15.065601 2512 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/306175a9-b679-4494-a474-96d766f9c018-cni-path\") pod \"306175a9-b679-4494-a474-96d766f9c018\" (UID: \"306175a9-b679-4494-a474-96d766f9c018\") "
Jul 2 00:30:15.066271 kubelet[2512]: I0702 00:30:15.065581 2512 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/306175a9-b679-4494-a474-96d766f9c018-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "306175a9-b679-4494-a474-96d766f9c018" (UID: "306175a9-b679-4494-a474-96d766f9c018"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:30:15.066271 kubelet[2512]: I0702 00:30:15.065608 2512 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/306175a9-b679-4494-a474-96d766f9c018-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "306175a9-b679-4494-a474-96d766f9c018" (UID: "306175a9-b679-4494-a474-96d766f9c018"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:30:15.066271 kubelet[2512]: I0702 00:30:15.065623 2512 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/306175a9-b679-4494-a474-96d766f9c018-hostproc\") pod \"306175a9-b679-4494-a474-96d766f9c018\" (UID: \"306175a9-b679-4494-a474-96d766f9c018\") "
Jul 2 00:30:15.066271 kubelet[2512]: I0702 00:30:15.065652 2512 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/306175a9-b679-4494-a474-96d766f9c018-cilium-config-path\") pod \"306175a9-b679-4494-a474-96d766f9c018\" (UID: \"306175a9-b679-4494-a474-96d766f9c018\") "
Jul 2 00:30:15.066434 kubelet[2512]: I0702 00:30:15.065677 2512 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hn64s\" (UniqueName: \"kubernetes.io/projected/306175a9-b679-4494-a474-96d766f9c018-kube-api-access-hn64s\") pod \"306175a9-b679-4494-a474-96d766f9c018\" (UID: \"306175a9-b679-4494-a474-96d766f9c018\") "
Jul 2 00:30:15.066434 kubelet[2512]: I0702 00:30:15.065698 2512 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/306175a9-b679-4494-a474-96d766f9c018-xtables-lock\") pod \"306175a9-b679-4494-a474-96d766f9c018\" (UID: \"306175a9-b679-4494-a474-96d766f9c018\") "
Jul 2 00:30:15.066434 kubelet[2512]: I0702 00:30:15.065718 2512 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/306175a9-b679-4494-a474-96d766f9c018-cilium-run\") pod \"306175a9-b679-4494-a474-96d766f9c018\" (UID: \"306175a9-b679-4494-a474-96d766f9c018\") "
Jul 2 00:30:15.066434 kubelet[2512]: I0702 00:30:15.065740 2512 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/306175a9-b679-4494-a474-96d766f9c018-etc-cni-netd\") pod \"306175a9-b679-4494-a474-96d766f9c018\" (UID: \"306175a9-b679-4494-a474-96d766f9c018\") "
Jul 2 00:30:15.066434 kubelet[2512]: I0702 00:30:15.065764 2512 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/306175a9-b679-4494-a474-96d766f9c018-lib-modules\") pod \"306175a9-b679-4494-a474-96d766f9c018\" (UID: \"306175a9-b679-4494-a474-96d766f9c018\") "
Jul 2 00:30:15.066434 kubelet[2512]: I0702 00:30:15.065790 2512 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lm2f\" (UniqueName: \"kubernetes.io/projected/ce9a8c83-d186-4579-b3f7-034bbcbbe538-kube-api-access-9lm2f\") pod \"ce9a8c83-d186-4579-b3f7-034bbcbbe538\" (UID: \"ce9a8c83-d186-4579-b3f7-034bbcbbe538\") "
Jul 2 00:30:15.066663 kubelet[2512]: I0702 00:30:15.065830 2512 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/306175a9-b679-4494-a474-96d766f9c018-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jul 2 00:30:15.066663 kubelet[2512]: I0702 00:30:15.065845 2512 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/306175a9-b679-4494-a474-96d766f9c018-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jul 2 00:30:15.066663 kubelet[2512]: I0702 00:30:15.065858 2512 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/306175a9-b679-4494-a474-96d766f9c018-bpf-maps\") on node \"localhost\" DevicePath \"\""
Jul 2 00:30:15.066663 kubelet[2512]: I0702 00:30:15.065872 2512 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/306175a9-b679-4494-a474-96d766f9c018-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Jul 2 00:30:15.067462 kubelet[2512]: I0702 00:30:15.067428 2512 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/306175a9-b679-4494-a474-96d766f9c018-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "306175a9-b679-4494-a474-96d766f9c018" (UID: "306175a9-b679-4494-a474-96d766f9c018"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:30:15.067822 kubelet[2512]: I0702 00:30:15.067562 2512 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/306175a9-b679-4494-a474-96d766f9c018-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "306175a9-b679-4494-a474-96d766f9c018" (UID: "306175a9-b679-4494-a474-96d766f9c018"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:30:15.067822 kubelet[2512]: I0702 00:30:15.067603 2512 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/306175a9-b679-4494-a474-96d766f9c018-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "306175a9-b679-4494-a474-96d766f9c018" (UID: "306175a9-b679-4494-a474-96d766f9c018"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:30:15.067822 kubelet[2512]: I0702 00:30:15.067752 2512 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/306175a9-b679-4494-a474-96d766f9c018-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "306175a9-b679-4494-a474-96d766f9c018" (UID: "306175a9-b679-4494-a474-96d766f9c018"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:30:15.068003 kubelet[2512]: I0702 00:30:15.067978 2512 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/306175a9-b679-4494-a474-96d766f9c018-cni-path" (OuterVolumeSpecName: "cni-path") pod "306175a9-b679-4494-a474-96d766f9c018" (UID: "306175a9-b679-4494-a474-96d766f9c018"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:30:15.068109 kubelet[2512]: I0702 00:30:15.068081 2512 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/306175a9-b679-4494-a474-96d766f9c018-hostproc" (OuterVolumeSpecName: "hostproc") pod "306175a9-b679-4494-a474-96d766f9c018" (UID: "306175a9-b679-4494-a474-96d766f9c018"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:30:15.072013 kubelet[2512]: I0702 00:30:15.070164 2512 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/306175a9-b679-4494-a474-96d766f9c018-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "306175a9-b679-4494-a474-96d766f9c018" (UID: "306175a9-b679-4494-a474-96d766f9c018"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 2 00:30:15.072013 kubelet[2512]: I0702 00:30:15.070756 2512 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/306175a9-b679-4494-a474-96d766f9c018-kube-api-access-hn64s" (OuterVolumeSpecName: "kube-api-access-hn64s") pod "306175a9-b679-4494-a474-96d766f9c018" (UID: "306175a9-b679-4494-a474-96d766f9c018"). InnerVolumeSpecName "kube-api-access-hn64s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 00:30:15.072013 kubelet[2512]: I0702 00:30:15.070914 2512 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce9a8c83-d186-4579-b3f7-034bbcbbe538-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ce9a8c83-d186-4579-b3f7-034bbcbbe538" (UID: "ce9a8c83-d186-4579-b3f7-034bbcbbe538"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 2 00:30:15.073353 kubelet[2512]: I0702 00:30:15.073315 2512 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/306175a9-b679-4494-a474-96d766f9c018-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "306175a9-b679-4494-a474-96d766f9c018" (UID: "306175a9-b679-4494-a474-96d766f9c018"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 00:30:15.073626 kubelet[2512]: I0702 00:30:15.073597 2512 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce9a8c83-d186-4579-b3f7-034bbcbbe538-kube-api-access-9lm2f" (OuterVolumeSpecName: "kube-api-access-9lm2f") pod "ce9a8c83-d186-4579-b3f7-034bbcbbe538" (UID: "ce9a8c83-d186-4579-b3f7-034bbcbbe538"). InnerVolumeSpecName "kube-api-access-9lm2f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 00:30:15.073712 kubelet[2512]: I0702 00:30:15.073689 2512 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/306175a9-b679-4494-a474-96d766f9c018-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "306175a9-b679-4494-a474-96d766f9c018" (UID: "306175a9-b679-4494-a474-96d766f9c018"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 2 00:30:15.166574 kubelet[2512]: I0702 00:30:15.166531 2512 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/306175a9-b679-4494-a474-96d766f9c018-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jul 2 00:30:15.166574 kubelet[2512]: I0702 00:30:15.166559 2512 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/306175a9-b679-4494-a474-96d766f9c018-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 2 00:30:15.166574 kubelet[2512]: I0702 00:30:15.166572 2512 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-hn64s\" (UniqueName: \"kubernetes.io/projected/306175a9-b679-4494-a474-96d766f9c018-kube-api-access-hn64s\") on node \"localhost\" DevicePath \"\""
Jul 2 00:30:15.166574 kubelet[2512]: I0702 00:30:15.166584 2512 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/306175a9-b679-4494-a474-96d766f9c018-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jul 2 00:30:15.166574 kubelet[2512]: I0702 00:30:15.166593 2512 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/306175a9-b679-4494-a474-96d766f9c018-cilium-run\") on node \"localhost\" DevicePath \"\""
Jul 2 00:30:15.166844 kubelet[2512]: I0702 00:30:15.166605 2512 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-9lm2f\" (UniqueName: \"kubernetes.io/projected/ce9a8c83-d186-4579-b3f7-034bbcbbe538-kube-api-access-9lm2f\") on node \"localhost\" DevicePath \"\""
Jul 2 00:30:15.166844 kubelet[2512]: I0702 00:30:15.166614 2512 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/306175a9-b679-4494-a474-96d766f9c018-lib-modules\") on node \"localhost\" DevicePath \"\""
Jul 2 00:30:15.166844 kubelet[2512]: I0702 00:30:15.166624 2512 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ce9a8c83-d186-4579-b3f7-034bbcbbe538-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 2 00:30:15.166844 kubelet[2512]: I0702 00:30:15.166633 2512 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/306175a9-b679-4494-a474-96d766f9c018-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Jul 2 00:30:15.166844 kubelet[2512]: I0702 00:30:15.166644 2512 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/306175a9-b679-4494-a474-96d766f9c018-hubble-tls\") on node \"localhost\" DevicePath \"\""
Jul 2 00:30:15.166844 kubelet[2512]: I0702 00:30:15.166652 2512 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/306175a9-b679-4494-a474-96d766f9c018-cni-path\") on node \"localhost\" DevicePath \"\""
Jul 2 00:30:15.166844 kubelet[2512]: I0702 00:30:15.166661 2512 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/306175a9-b679-4494-a474-96d766f9c018-hostproc\") on node \"localhost\" DevicePath \"\""
Jul 2 00:30:15.635652 kubelet[2512]: I0702 00:30:15.635612 2512 scope.go:117] "RemoveContainer" containerID="1c995632b04eda5a395bba379032fd941361936159149bde77286510359c745c"
Jul 2 00:30:15.637811 containerd[1454]: time="2024-07-02T00:30:15.637608449Z" level=info msg="RemoveContainer for \"1c995632b04eda5a395bba379032fd941361936159149bde77286510359c745c\""
Jul 2 00:30:15.642861 systemd[1]: Removed slice kubepods-besteffort-podce9a8c83_d186_4579_b3f7_034bbcbbe538.slice - libcontainer container kubepods-besteffort-podce9a8c83_d186_4579_b3f7_034bbcbbe538.slice.
Jul 2 00:30:15.645198 containerd[1454]: time="2024-07-02T00:30:15.645105243Z" level=info msg="RemoveContainer for \"1c995632b04eda5a395bba379032fd941361936159149bde77286510359c745c\" returns successfully"
Jul 2 00:30:15.645551 kubelet[2512]: I0702 00:30:15.645433 2512 scope.go:117] "RemoveContainer" containerID="1c995632b04eda5a395bba379032fd941361936159149bde77286510359c745c"
Jul 2 00:30:15.645965 containerd[1454]: time="2024-07-02T00:30:15.645923077Z" level=error msg="ContainerStatus for \"1c995632b04eda5a395bba379032fd941361936159149bde77286510359c745c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1c995632b04eda5a395bba379032fd941361936159149bde77286510359c745c\": not found"
Jul 2 00:30:15.647018 systemd[1]: Removed slice kubepods-burstable-pod306175a9_b679_4494_a474_96d766f9c018.slice - libcontainer container kubepods-burstable-pod306175a9_b679_4494_a474_96d766f9c018.slice.
Jul 2 00:30:15.647546 systemd[1]: kubepods-burstable-pod306175a9_b679_4494_a474_96d766f9c018.slice: Consumed 6.769s CPU time.
Jul 2 00:30:15.654221 kubelet[2512]: E0702 00:30:15.654180 2512 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1c995632b04eda5a395bba379032fd941361936159149bde77286510359c745c\": not found" containerID="1c995632b04eda5a395bba379032fd941361936159149bde77286510359c745c"
Jul 2 00:30:15.654335 kubelet[2512]: I0702 00:30:15.654287 2512 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1c995632b04eda5a395bba379032fd941361936159149bde77286510359c745c"} err="failed to get container status \"1c995632b04eda5a395bba379032fd941361936159149bde77286510359c745c\": rpc error: code = NotFound desc = an error occurred when try to find container \"1c995632b04eda5a395bba379032fd941361936159149bde77286510359c745c\": not found"
Jul 2 00:30:15.654335 kubelet[2512]: I0702 00:30:15.654303 2512 scope.go:117] "RemoveContainer" containerID="7d287fbbce2b4281522404430c77c1fd67d1357119ca247153f0d846e44a9a4f"
Jul 2 00:30:15.655602 containerd[1454]: time="2024-07-02T00:30:15.655548527Z" level=info msg="RemoveContainer for \"7d287fbbce2b4281522404430c77c1fd67d1357119ca247153f0d846e44a9a4f\""
Jul 2 00:30:15.660023 containerd[1454]: time="2024-07-02T00:30:15.659982472Z" level=info msg="RemoveContainer for \"7d287fbbce2b4281522404430c77c1fd67d1357119ca247153f0d846e44a9a4f\" returns successfully"
Jul 2 00:30:15.660205 kubelet[2512]: I0702 00:30:15.660178 2512 scope.go:117] "RemoveContainer" containerID="04867993a6dc9ee1281b5e3777a88f6bdd7556213130a0349a8e5655cb17381a"
Jul 2 00:30:15.661392 containerd[1454]: time="2024-07-02T00:30:15.661350862Z" level=info msg="RemoveContainer for \"04867993a6dc9ee1281b5e3777a88f6bdd7556213130a0349a8e5655cb17381a\""
Jul 2 00:30:15.664698 containerd[1454]: time="2024-07-02T00:30:15.664659468Z" level=info msg="RemoveContainer for \"04867993a6dc9ee1281b5e3777a88f6bdd7556213130a0349a8e5655cb17381a\" returns successfully"
Jul 2 00:30:15.664841 kubelet[2512]: I0702 00:30:15.664811 2512 scope.go:117] "RemoveContainer" containerID="9856683e6bdfa3230b1a0e8a0c4113f28ccad99e849f65df2da9555e64619d6c"
Jul 2 00:30:15.665993 containerd[1454]: time="2024-07-02T00:30:15.665947826Z" level=info msg="RemoveContainer for \"9856683e6bdfa3230b1a0e8a0c4113f28ccad99e849f65df2da9555e64619d6c\""
Jul 2 00:30:15.669369 containerd[1454]: time="2024-07-02T00:30:15.669343489Z" level=info msg="RemoveContainer for \"9856683e6bdfa3230b1a0e8a0c4113f28ccad99e849f65df2da9555e64619d6c\" returns successfully"
Jul 2 00:30:15.669531 kubelet[2512]: I0702 00:30:15.669512 2512 scope.go:117] "RemoveContainer" containerID="4d3edf7885fda8294220276dac2cdcae820a4eedfc3b3b03a849523b5468c80f"
Jul 2 00:30:15.670521 containerd[1454]: time="2024-07-02T00:30:15.670467054Z" level=info msg="RemoveContainer for \"4d3edf7885fda8294220276dac2cdcae820a4eedfc3b3b03a849523b5468c80f\""
Jul 2 00:30:15.673867 containerd[1454]: time="2024-07-02T00:30:15.673833871Z" level=info msg="RemoveContainer for \"4d3edf7885fda8294220276dac2cdcae820a4eedfc3b3b03a849523b5468c80f\" returns successfully"
Jul 2 00:30:15.674019 kubelet[2512]: I0702 00:30:15.673994 2512 scope.go:117] "RemoveContainer" containerID="6ed2ba8c0d35b98a9f8166c2bf00e98fc1e91b9809e3db491ec9b53cd90527a0"
Jul 2 00:30:15.674952 containerd[1454]: time="2024-07-02T00:30:15.674918623Z" level=info msg="RemoveContainer for \"6ed2ba8c0d35b98a9f8166c2bf00e98fc1e91b9809e3db491ec9b53cd90527a0\""
Jul 2 00:30:15.678086 containerd[1454]: time="2024-07-02T00:30:15.678047388Z" level=info msg="RemoveContainer for \"6ed2ba8c0d35b98a9f8166c2bf00e98fc1e91b9809e3db491ec9b53cd90527a0\" returns successfully"
Jul 2 00:30:15.678295 kubelet[2512]: I0702 00:30:15.678218 2512 scope.go:117] "RemoveContainer" containerID="7d287fbbce2b4281522404430c77c1fd67d1357119ca247153f0d846e44a9a4f"
Jul 2 00:30:15.678419 containerd[1454]: time="2024-07-02T00:30:15.678390008Z" level=error msg="ContainerStatus for
\"7d287fbbce2b4281522404430c77c1fd67d1357119ca247153f0d846e44a9a4f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7d287fbbce2b4281522404430c77c1fd67d1357119ca247153f0d846e44a9a4f\": not found" Jul 2 00:30:15.678554 kubelet[2512]: E0702 00:30:15.678536 2512 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7d287fbbce2b4281522404430c77c1fd67d1357119ca247153f0d846e44a9a4f\": not found" containerID="7d287fbbce2b4281522404430c77c1fd67d1357119ca247153f0d846e44a9a4f" Jul 2 00:30:15.678666 kubelet[2512]: I0702 00:30:15.678574 2512 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7d287fbbce2b4281522404430c77c1fd67d1357119ca247153f0d846e44a9a4f"} err="failed to get container status \"7d287fbbce2b4281522404430c77c1fd67d1357119ca247153f0d846e44a9a4f\": rpc error: code = NotFound desc = an error occurred when try to find container \"7d287fbbce2b4281522404430c77c1fd67d1357119ca247153f0d846e44a9a4f\": not found" Jul 2 00:30:15.678666 kubelet[2512]: I0702 00:30:15.678584 2512 scope.go:117] "RemoveContainer" containerID="04867993a6dc9ee1281b5e3777a88f6bdd7556213130a0349a8e5655cb17381a" Jul 2 00:30:15.678775 containerd[1454]: time="2024-07-02T00:30:15.678743500Z" level=error msg="ContainerStatus for \"04867993a6dc9ee1281b5e3777a88f6bdd7556213130a0349a8e5655cb17381a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"04867993a6dc9ee1281b5e3777a88f6bdd7556213130a0349a8e5655cb17381a\": not found" Jul 2 00:30:15.678913 kubelet[2512]: E0702 00:30:15.678892 2512 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"04867993a6dc9ee1281b5e3777a88f6bdd7556213130a0349a8e5655cb17381a\": not found" 
containerID="04867993a6dc9ee1281b5e3777a88f6bdd7556213130a0349a8e5655cb17381a" Jul 2 00:30:15.678942 kubelet[2512]: I0702 00:30:15.678929 2512 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"04867993a6dc9ee1281b5e3777a88f6bdd7556213130a0349a8e5655cb17381a"} err="failed to get container status \"04867993a6dc9ee1281b5e3777a88f6bdd7556213130a0349a8e5655cb17381a\": rpc error: code = NotFound desc = an error occurred when try to find container \"04867993a6dc9ee1281b5e3777a88f6bdd7556213130a0349a8e5655cb17381a\": not found" Jul 2 00:30:15.678942 kubelet[2512]: I0702 00:30:15.678940 2512 scope.go:117] "RemoveContainer" containerID="9856683e6bdfa3230b1a0e8a0c4113f28ccad99e849f65df2da9555e64619d6c" Jul 2 00:30:15.679117 containerd[1454]: time="2024-07-02T00:30:15.679066805Z" level=error msg="ContainerStatus for \"9856683e6bdfa3230b1a0e8a0c4113f28ccad99e849f65df2da9555e64619d6c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9856683e6bdfa3230b1a0e8a0c4113f28ccad99e849f65df2da9555e64619d6c\": not found" Jul 2 00:30:15.679272 kubelet[2512]: E0702 00:30:15.679237 2512 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9856683e6bdfa3230b1a0e8a0c4113f28ccad99e849f65df2da9555e64619d6c\": not found" containerID="9856683e6bdfa3230b1a0e8a0c4113f28ccad99e849f65df2da9555e64619d6c" Jul 2 00:30:15.679324 kubelet[2512]: I0702 00:30:15.679293 2512 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9856683e6bdfa3230b1a0e8a0c4113f28ccad99e849f65df2da9555e64619d6c"} err="failed to get container status \"9856683e6bdfa3230b1a0e8a0c4113f28ccad99e849f65df2da9555e64619d6c\": rpc error: code = NotFound desc = an error occurred when try to find container \"9856683e6bdfa3230b1a0e8a0c4113f28ccad99e849f65df2da9555e64619d6c\": not found" Jul 2 
00:30:15.679324 kubelet[2512]: I0702 00:30:15.679312 2512 scope.go:117] "RemoveContainer" containerID="4d3edf7885fda8294220276dac2cdcae820a4eedfc3b3b03a849523b5468c80f" Jul 2 00:30:15.679539 containerd[1454]: time="2024-07-02T00:30:15.679504837Z" level=error msg="ContainerStatus for \"4d3edf7885fda8294220276dac2cdcae820a4eedfc3b3b03a849523b5468c80f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4d3edf7885fda8294220276dac2cdcae820a4eedfc3b3b03a849523b5468c80f\": not found" Jul 2 00:30:15.679654 kubelet[2512]: E0702 00:30:15.679637 2512 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4d3edf7885fda8294220276dac2cdcae820a4eedfc3b3b03a849523b5468c80f\": not found" containerID="4d3edf7885fda8294220276dac2cdcae820a4eedfc3b3b03a849523b5468c80f" Jul 2 00:30:15.679699 kubelet[2512]: I0702 00:30:15.679672 2512 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4d3edf7885fda8294220276dac2cdcae820a4eedfc3b3b03a849523b5468c80f"} err="failed to get container status \"4d3edf7885fda8294220276dac2cdcae820a4eedfc3b3b03a849523b5468c80f\": rpc error: code = NotFound desc = an error occurred when try to find container \"4d3edf7885fda8294220276dac2cdcae820a4eedfc3b3b03a849523b5468c80f\": not found" Jul 2 00:30:15.679699 kubelet[2512]: I0702 00:30:15.679683 2512 scope.go:117] "RemoveContainer" containerID="6ed2ba8c0d35b98a9f8166c2bf00e98fc1e91b9809e3db491ec9b53cd90527a0" Jul 2 00:30:15.679890 containerd[1454]: time="2024-07-02T00:30:15.679855584Z" level=error msg="ContainerStatus for \"6ed2ba8c0d35b98a9f8166c2bf00e98fc1e91b9809e3db491ec9b53cd90527a0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6ed2ba8c0d35b98a9f8166c2bf00e98fc1e91b9809e3db491ec9b53cd90527a0\": not found" Jul 2 00:30:15.680007 kubelet[2512]: E0702 00:30:15.679979 
2512 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6ed2ba8c0d35b98a9f8166c2bf00e98fc1e91b9809e3db491ec9b53cd90527a0\": not found" containerID="6ed2ba8c0d35b98a9f8166c2bf00e98fc1e91b9809e3db491ec9b53cd90527a0" Jul 2 00:30:15.680007 kubelet[2512]: I0702 00:30:15.680007 2512 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6ed2ba8c0d35b98a9f8166c2bf00e98fc1e91b9809e3db491ec9b53cd90527a0"} err="failed to get container status \"6ed2ba8c0d35b98a9f8166c2bf00e98fc1e91b9809e3db491ec9b53cd90527a0\": rpc error: code = NotFound desc = an error occurred when try to find container \"6ed2ba8c0d35b98a9f8166c2bf00e98fc1e91b9809e3db491ec9b53cd90527a0\": not found" Jul 2 00:30:15.825222 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2dad5f2899b9c7c8dbdc4560d7cf0dc4871d58245b2b27fb60fd267c47d3262b-rootfs.mount: Deactivated successfully. Jul 2 00:30:15.825335 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0df9d06b1f029f1213216ef3c46d9fcbf14c71adc85ec434cc3782d5a6380513-rootfs.mount: Deactivated successfully. Jul 2 00:30:15.825412 systemd[1]: var-lib-kubelet-pods-306175a9\x2db679\x2d4494\x2da474\x2d96d766f9c018-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhn64s.mount: Deactivated successfully. Jul 2 00:30:15.825502 systemd[1]: var-lib-kubelet-pods-ce9a8c83\x2dd186\x2d4579\x2db3f7\x2d034bbcbbe538-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9lm2f.mount: Deactivated successfully. Jul 2 00:30:15.825583 systemd[1]: var-lib-kubelet-pods-306175a9\x2db679\x2d4494\x2da474\x2d96d766f9c018-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 00:30:15.825657 systemd[1]: var-lib-kubelet-pods-306175a9\x2db679\x2d4494\x2da474\x2d96d766f9c018-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jul 2 00:30:16.223329 kubelet[2512]: I0702 00:30:16.223283 2512 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="306175a9-b679-4494-a474-96d766f9c018" path="/var/lib/kubelet/pods/306175a9-b679-4494-a474-96d766f9c018/volumes" Jul 2 00:30:16.224152 kubelet[2512]: I0702 00:30:16.224126 2512 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ce9a8c83-d186-4579-b3f7-034bbcbbe538" path="/var/lib/kubelet/pods/ce9a8c83-d186-4579-b3f7-034bbcbbe538/volumes" Jul 2 00:30:16.793130 sshd[4153]: pam_unix(sshd:session): session closed for user core Jul 2 00:30:16.803386 systemd[1]: sshd@23-10.0.0.153:22-10.0.0.1:54402.service: Deactivated successfully. Jul 2 00:30:16.805355 systemd[1]: session-24.scope: Deactivated successfully. Jul 2 00:30:16.806760 systemd-logind[1436]: Session 24 logged out. Waiting for processes to exit. Jul 2 00:30:16.819790 systemd[1]: Started sshd@24-10.0.0.153:22-10.0.0.1:54410.service - OpenSSH per-connection server daemon (10.0.0.1:54410). Jul 2 00:30:16.820857 systemd-logind[1436]: Removed session 24. Jul 2 00:30:16.855232 sshd[4319]: Accepted publickey for core from 10.0.0.1 port 54410 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:30:16.856867 sshd[4319]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:30:16.861258 systemd-logind[1436]: New session 25 of user core. Jul 2 00:30:16.871629 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 2 00:30:17.400845 sshd[4319]: pam_unix(sshd:session): session closed for user core Jul 2 00:30:17.411437 systemd[1]: sshd@24-10.0.0.153:22-10.0.0.1:54410.service: Deactivated successfully. 
Jul 2 00:30:17.416412 kubelet[2512]: I0702 00:30:17.412992 2512 topology_manager.go:215] "Topology Admit Handler" podUID="3b2aea2b-5255-45e7-bf18-39052fbd95b6" podNamespace="kube-system" podName="cilium-4bf2q" Jul 2 00:30:17.416412 kubelet[2512]: E0702 00:30:17.413843 2512 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="306175a9-b679-4494-a474-96d766f9c018" containerName="apply-sysctl-overwrites" Jul 2 00:30:17.416412 kubelet[2512]: E0702 00:30:17.413901 2512 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ce9a8c83-d186-4579-b3f7-034bbcbbe538" containerName="cilium-operator" Jul 2 00:30:17.416412 kubelet[2512]: E0702 00:30:17.413913 2512 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="306175a9-b679-4494-a474-96d766f9c018" containerName="clean-cilium-state" Jul 2 00:30:17.416412 kubelet[2512]: E0702 00:30:17.413922 2512 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="306175a9-b679-4494-a474-96d766f9c018" containerName="cilium-agent" Jul 2 00:30:17.416412 kubelet[2512]: E0702 00:30:17.413933 2512 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="306175a9-b679-4494-a474-96d766f9c018" containerName="mount-cgroup" Jul 2 00:30:17.416412 kubelet[2512]: E0702 00:30:17.413941 2512 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="306175a9-b679-4494-a474-96d766f9c018" containerName="mount-bpf-fs" Jul 2 00:30:17.416412 kubelet[2512]: I0702 00:30:17.413997 2512 memory_manager.go:346] "RemoveStaleState removing state" podUID="ce9a8c83-d186-4579-b3f7-034bbcbbe538" containerName="cilium-operator" Jul 2 00:30:17.416412 kubelet[2512]: I0702 00:30:17.414006 2512 memory_manager.go:346] "RemoveStaleState removing state" podUID="306175a9-b679-4494-a474-96d766f9c018" containerName="cilium-agent" Jul 2 00:30:17.416217 systemd[1]: session-25.scope: Deactivated successfully. Jul 2 00:30:17.420161 systemd-logind[1436]: Session 25 logged out. Waiting for processes to exit. 
Jul 2 00:30:17.434241 systemd[1]: Started sshd@25-10.0.0.153:22-10.0.0.1:54412.service - OpenSSH per-connection server daemon (10.0.0.1:54412). Jul 2 00:30:17.437239 systemd-logind[1436]: Removed session 25. Jul 2 00:30:17.441893 systemd[1]: Created slice kubepods-burstable-pod3b2aea2b_5255_45e7_bf18_39052fbd95b6.slice - libcontainer container kubepods-burstable-pod3b2aea2b_5255_45e7_bf18_39052fbd95b6.slice. Jul 2 00:30:17.470915 sshd[4332]: Accepted publickey for core from 10.0.0.1 port 54412 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:30:17.472372 sshd[4332]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:30:17.476269 systemd-logind[1436]: New session 26 of user core. Jul 2 00:30:17.479179 kubelet[2512]: I0702 00:30:17.479150 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3b2aea2b-5255-45e7-bf18-39052fbd95b6-cilium-run\") pod \"cilium-4bf2q\" (UID: \"3b2aea2b-5255-45e7-bf18-39052fbd95b6\") " pod="kube-system/cilium-4bf2q" Jul 2 00:30:17.479261 kubelet[2512]: I0702 00:30:17.479194 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3b2aea2b-5255-45e7-bf18-39052fbd95b6-hostproc\") pod \"cilium-4bf2q\" (UID: \"3b2aea2b-5255-45e7-bf18-39052fbd95b6\") " pod="kube-system/cilium-4bf2q" Jul 2 00:30:17.479261 kubelet[2512]: I0702 00:30:17.479215 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3b2aea2b-5255-45e7-bf18-39052fbd95b6-cilium-cgroup\") pod \"cilium-4bf2q\" (UID: \"3b2aea2b-5255-45e7-bf18-39052fbd95b6\") " pod="kube-system/cilium-4bf2q" Jul 2 00:30:17.479261 kubelet[2512]: I0702 00:30:17.479244 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-path\" (UniqueName: \"kubernetes.io/host-path/3b2aea2b-5255-45e7-bf18-39052fbd95b6-cni-path\") pod \"cilium-4bf2q\" (UID: \"3b2aea2b-5255-45e7-bf18-39052fbd95b6\") " pod="kube-system/cilium-4bf2q" Jul 2 00:30:17.479364 kubelet[2512]: I0702 00:30:17.479328 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3b2aea2b-5255-45e7-bf18-39052fbd95b6-cilium-config-path\") pod \"cilium-4bf2q\" (UID: \"3b2aea2b-5255-45e7-bf18-39052fbd95b6\") " pod="kube-system/cilium-4bf2q" Jul 2 00:30:17.479396 kubelet[2512]: I0702 00:30:17.479387 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3b2aea2b-5255-45e7-bf18-39052fbd95b6-hubble-tls\") pod \"cilium-4bf2q\" (UID: \"3b2aea2b-5255-45e7-bf18-39052fbd95b6\") " pod="kube-system/cilium-4bf2q" Jul 2 00:30:17.479427 kubelet[2512]: I0702 00:30:17.479418 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p59qv\" (UniqueName: \"kubernetes.io/projected/3b2aea2b-5255-45e7-bf18-39052fbd95b6-kube-api-access-p59qv\") pod \"cilium-4bf2q\" (UID: \"3b2aea2b-5255-45e7-bf18-39052fbd95b6\") " pod="kube-system/cilium-4bf2q" Jul 2 00:30:17.479486 kubelet[2512]: I0702 00:30:17.479450 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3b2aea2b-5255-45e7-bf18-39052fbd95b6-clustermesh-secrets\") pod \"cilium-4bf2q\" (UID: \"3b2aea2b-5255-45e7-bf18-39052fbd95b6\") " pod="kube-system/cilium-4bf2q" Jul 2 00:30:17.479529 kubelet[2512]: I0702 00:30:17.479517 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3b2aea2b-5255-45e7-bf18-39052fbd95b6-xtables-lock\") pod 
\"cilium-4bf2q\" (UID: \"3b2aea2b-5255-45e7-bf18-39052fbd95b6\") " pod="kube-system/cilium-4bf2q" Jul 2 00:30:17.479561 kubelet[2512]: I0702 00:30:17.479549 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3b2aea2b-5255-45e7-bf18-39052fbd95b6-cilium-ipsec-secrets\") pod \"cilium-4bf2q\" (UID: \"3b2aea2b-5255-45e7-bf18-39052fbd95b6\") " pod="kube-system/cilium-4bf2q" Jul 2 00:30:17.479638 kubelet[2512]: I0702 00:30:17.479611 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3b2aea2b-5255-45e7-bf18-39052fbd95b6-host-proc-sys-kernel\") pod \"cilium-4bf2q\" (UID: \"3b2aea2b-5255-45e7-bf18-39052fbd95b6\") " pod="kube-system/cilium-4bf2q" Jul 2 00:30:17.479674 kubelet[2512]: I0702 00:30:17.479655 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3b2aea2b-5255-45e7-bf18-39052fbd95b6-etc-cni-netd\") pod \"cilium-4bf2q\" (UID: \"3b2aea2b-5255-45e7-bf18-39052fbd95b6\") " pod="kube-system/cilium-4bf2q" Jul 2 00:30:17.479674 kubelet[2512]: I0702 00:30:17.479673 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b2aea2b-5255-45e7-bf18-39052fbd95b6-lib-modules\") pod \"cilium-4bf2q\" (UID: \"3b2aea2b-5255-45e7-bf18-39052fbd95b6\") " pod="kube-system/cilium-4bf2q" Jul 2 00:30:17.479752 kubelet[2512]: I0702 00:30:17.479704 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3b2aea2b-5255-45e7-bf18-39052fbd95b6-bpf-maps\") pod \"cilium-4bf2q\" (UID: \"3b2aea2b-5255-45e7-bf18-39052fbd95b6\") " pod="kube-system/cilium-4bf2q" Jul 2 00:30:17.479752 
kubelet[2512]: I0702 00:30:17.479728 2512 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3b2aea2b-5255-45e7-bf18-39052fbd95b6-host-proc-sys-net\") pod \"cilium-4bf2q\" (UID: \"3b2aea2b-5255-45e7-bf18-39052fbd95b6\") " pod="kube-system/cilium-4bf2q" Jul 2 00:30:17.480612 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 2 00:30:17.532005 sshd[4332]: pam_unix(sshd:session): session closed for user core Jul 2 00:30:17.539222 systemd[1]: sshd@25-10.0.0.153:22-10.0.0.1:54412.service: Deactivated successfully. Jul 2 00:30:17.541354 systemd[1]: session-26.scope: Deactivated successfully. Jul 2 00:30:17.543119 systemd-logind[1436]: Session 26 logged out. Waiting for processes to exit. Jul 2 00:30:17.550998 systemd[1]: Started sshd@26-10.0.0.153:22-10.0.0.1:54418.service - OpenSSH per-connection server daemon (10.0.0.1:54418). Jul 2 00:30:17.552020 systemd-logind[1436]: Removed session 26. Jul 2 00:30:17.584080 sshd[4340]: Accepted publickey for core from 10.0.0.1 port 54418 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:30:17.585678 sshd[4340]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:30:17.598851 systemd-logind[1436]: New session 27 of user core. Jul 2 00:30:17.611595 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jul 2 00:30:17.746174 kubelet[2512]: E0702 00:30:17.746141 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:30:17.746773 containerd[1454]: time="2024-07-02T00:30:17.746710879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4bf2q,Uid:3b2aea2b-5255-45e7-bf18-39052fbd95b6,Namespace:kube-system,Attempt:0,}" Jul 2 00:30:17.767625 containerd[1454]: time="2024-07-02T00:30:17.767532409Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:30:17.767625 containerd[1454]: time="2024-07-02T00:30:17.767582926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:30:17.767625 containerd[1454]: time="2024-07-02T00:30:17.767597583Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:30:17.767625 containerd[1454]: time="2024-07-02T00:30:17.767608043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:30:17.788616 systemd[1]: Started cri-containerd-e7a2a2507971e5f87b12e04c4684e90ae32e8a327e4fb4c8d73fda9274e79841.scope - libcontainer container e7a2a2507971e5f87b12e04c4684e90ae32e8a327e4fb4c8d73fda9274e79841. 
Jul 2 00:30:17.813410 containerd[1454]: time="2024-07-02T00:30:17.813363683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4bf2q,Uid:3b2aea2b-5255-45e7-bf18-39052fbd95b6,Namespace:kube-system,Attempt:0,} returns sandbox id \"e7a2a2507971e5f87b12e04c4684e90ae32e8a327e4fb4c8d73fda9274e79841\"" Jul 2 00:30:17.814235 kubelet[2512]: E0702 00:30:17.814211 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:30:17.816017 containerd[1454]: time="2024-07-02T00:30:17.815975112Z" level=info msg="CreateContainer within sandbox \"e7a2a2507971e5f87b12e04c4684e90ae32e8a327e4fb4c8d73fda9274e79841\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 00:30:17.841571 containerd[1454]: time="2024-07-02T00:30:17.841509421Z" level=info msg="CreateContainer within sandbox \"e7a2a2507971e5f87b12e04c4684e90ae32e8a327e4fb4c8d73fda9274e79841\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7cb8c1727c7fa76eee9ee0065d8895ae1e5dd20ec7dd3085c75a88669bcc4e9e\"" Jul 2 00:30:17.842008 containerd[1454]: time="2024-07-02T00:30:17.841982509Z" level=info msg="StartContainer for \"7cb8c1727c7fa76eee9ee0065d8895ae1e5dd20ec7dd3085c75a88669bcc4e9e\"" Jul 2 00:30:17.869608 systemd[1]: Started cri-containerd-7cb8c1727c7fa76eee9ee0065d8895ae1e5dd20ec7dd3085c75a88669bcc4e9e.scope - libcontainer container 7cb8c1727c7fa76eee9ee0065d8895ae1e5dd20ec7dd3085c75a88669bcc4e9e. Jul 2 00:30:17.893708 containerd[1454]: time="2024-07-02T00:30:17.893665615Z" level=info msg="StartContainer for \"7cb8c1727c7fa76eee9ee0065d8895ae1e5dd20ec7dd3085c75a88669bcc4e9e\" returns successfully" Jul 2 00:30:17.904330 systemd[1]: cri-containerd-7cb8c1727c7fa76eee9ee0065d8895ae1e5dd20ec7dd3085c75a88669bcc4e9e.scope: Deactivated successfully. 
Jul 2 00:30:17.944326 containerd[1454]: time="2024-07-02T00:30:17.944242811Z" level=info msg="shim disconnected" id=7cb8c1727c7fa76eee9ee0065d8895ae1e5dd20ec7dd3085c75a88669bcc4e9e namespace=k8s.io Jul 2 00:30:17.944326 containerd[1454]: time="2024-07-02T00:30:17.944311501Z" level=warning msg="cleaning up after shim disconnected" id=7cb8c1727c7fa76eee9ee0065d8895ae1e5dd20ec7dd3085c75a88669bcc4e9e namespace=k8s.io Jul 2 00:30:17.944326 containerd[1454]: time="2024-07-02T00:30:17.944322512Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:30:18.646778 kubelet[2512]: E0702 00:30:18.646752 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:30:18.649130 containerd[1454]: time="2024-07-02T00:30:18.649071577Z" level=info msg="CreateContainer within sandbox \"e7a2a2507971e5f87b12e04c4684e90ae32e8a327e4fb4c8d73fda9274e79841\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 00:30:18.750952 containerd[1454]: time="2024-07-02T00:30:18.750906166Z" level=info msg="CreateContainer within sandbox \"e7a2a2507971e5f87b12e04c4684e90ae32e8a327e4fb4c8d73fda9274e79841\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b0db6bb5c43b78e1c4aa19b622f8dded427e4afd4315c4aac078f73eed06a6d7\"" Jul 2 00:30:18.751448 containerd[1454]: time="2024-07-02T00:30:18.751404131Z" level=info msg="StartContainer for \"b0db6bb5c43b78e1c4aa19b622f8dded427e4afd4315c4aac078f73eed06a6d7\"" Jul 2 00:30:18.786605 systemd[1]: Started cri-containerd-b0db6bb5c43b78e1c4aa19b622f8dded427e4afd4315c4aac078f73eed06a6d7.scope - libcontainer container b0db6bb5c43b78e1c4aa19b622f8dded427e4afd4315c4aac078f73eed06a6d7. Jul 2 00:30:18.816015 systemd[1]: cri-containerd-b0db6bb5c43b78e1c4aa19b622f8dded427e4afd4315c4aac078f73eed06a6d7.scope: Deactivated successfully. 
Jul 2 00:30:18.839616 containerd[1454]: time="2024-07-02T00:30:18.839571432Z" level=info msg="StartContainer for \"b0db6bb5c43b78e1c4aa19b622f8dded427e4afd4315c4aac078f73eed06a6d7\" returns successfully" Jul 2 00:30:18.923932 containerd[1454]: time="2024-07-02T00:30:18.923815226Z" level=info msg="shim disconnected" id=b0db6bb5c43b78e1c4aa19b622f8dded427e4afd4315c4aac078f73eed06a6d7 namespace=k8s.io Jul 2 00:30:18.923932 containerd[1454]: time="2024-07-02T00:30:18.923877113Z" level=warning msg="cleaning up after shim disconnected" id=b0db6bb5c43b78e1c4aa19b622f8dded427e4afd4315c4aac078f73eed06a6d7 namespace=k8s.io Jul 2 00:30:18.923932 containerd[1454]: time="2024-07-02T00:30:18.923889356Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:30:19.280178 kubelet[2512]: E0702 00:30:19.280146 2512 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 00:30:19.585062 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0db6bb5c43b78e1c4aa19b622f8dded427e4afd4315c4aac078f73eed06a6d7-rootfs.mount: Deactivated successfully. 
Jul 2 00:30:19.649525 kubelet[2512]: E0702 00:30:19.649498 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:30:19.651039 containerd[1454]: time="2024-07-02T00:30:19.650994295Z" level=info msg="CreateContainer within sandbox \"e7a2a2507971e5f87b12e04c4684e90ae32e8a327e4fb4c8d73fda9274e79841\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 2 00:30:19.829421 containerd[1454]: time="2024-07-02T00:30:19.829350263Z" level=info msg="CreateContainer within sandbox \"e7a2a2507971e5f87b12e04c4684e90ae32e8a327e4fb4c8d73fda9274e79841\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"130ebd27f3ec5dff30eab3754798f11b8a236ba9f94a5f10e3b9e48b208743c6\""
Jul 2 00:30:19.829879 containerd[1454]: time="2024-07-02T00:30:19.829849900Z" level=info msg="StartContainer for \"130ebd27f3ec5dff30eab3754798f11b8a236ba9f94a5f10e3b9e48b208743c6\""
Jul 2 00:30:19.859606 systemd[1]: Started cri-containerd-130ebd27f3ec5dff30eab3754798f11b8a236ba9f94a5f10e3b9e48b208743c6.scope - libcontainer container 130ebd27f3ec5dff30eab3754798f11b8a236ba9f94a5f10e3b9e48b208743c6.
Jul 2 00:30:19.887979 systemd[1]: cri-containerd-130ebd27f3ec5dff30eab3754798f11b8a236ba9f94a5f10e3b9e48b208743c6.scope: Deactivated successfully.
Jul 2 00:30:19.895485 containerd[1454]: time="2024-07-02T00:30:19.895422784Z" level=info msg="StartContainer for \"130ebd27f3ec5dff30eab3754798f11b8a236ba9f94a5f10e3b9e48b208743c6\" returns successfully"
Jul 2 00:30:19.945923 containerd[1454]: time="2024-07-02T00:30:19.945855462Z" level=info msg="shim disconnected" id=130ebd27f3ec5dff30eab3754798f11b8a236ba9f94a5f10e3b9e48b208743c6 namespace=k8s.io
Jul 2 00:30:19.945923 containerd[1454]: time="2024-07-02T00:30:19.945917860Z" level=warning msg="cleaning up after shim disconnected" id=130ebd27f3ec5dff30eab3754798f11b8a236ba9f94a5f10e3b9e48b208743c6 namespace=k8s.io
Jul 2 00:30:19.946146 containerd[1454]: time="2024-07-02T00:30:19.945929322Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:30:20.585387 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-130ebd27f3ec5dff30eab3754798f11b8a236ba9f94a5f10e3b9e48b208743c6-rootfs.mount: Deactivated successfully.
Jul 2 00:30:20.653624 kubelet[2512]: E0702 00:30:20.653580 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:30:20.655370 containerd[1454]: time="2024-07-02T00:30:20.655305572Z" level=info msg="CreateContainer within sandbox \"e7a2a2507971e5f87b12e04c4684e90ae32e8a327e4fb4c8d73fda9274e79841\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 2 00:30:20.967416 containerd[1454]: time="2024-07-02T00:30:20.967364152Z" level=info msg="CreateContainer within sandbox \"e7a2a2507971e5f87b12e04c4684e90ae32e8a327e4fb4c8d73fda9274e79841\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9101ecc9fa3a789fecb7d4685b3607aa719080c0f8b2b19dd45419b20e1d7b83\""
Jul 2 00:30:20.968102 containerd[1454]: time="2024-07-02T00:30:20.967987764Z" level=info msg="StartContainer for \"9101ecc9fa3a789fecb7d4685b3607aa719080c0f8b2b19dd45419b20e1d7b83\""
Jul 2 00:30:20.998681 systemd[1]: Started cri-containerd-9101ecc9fa3a789fecb7d4685b3607aa719080c0f8b2b19dd45419b20e1d7b83.scope - libcontainer container 9101ecc9fa3a789fecb7d4685b3607aa719080c0f8b2b19dd45419b20e1d7b83.
Jul 2 00:30:21.020920 systemd[1]: cri-containerd-9101ecc9fa3a789fecb7d4685b3607aa719080c0f8b2b19dd45419b20e1d7b83.scope: Deactivated successfully.
Jul 2 00:30:21.049758 containerd[1454]: time="2024-07-02T00:30:21.049715343Z" level=info msg="StartContainer for \"9101ecc9fa3a789fecb7d4685b3607aa719080c0f8b2b19dd45419b20e1d7b83\" returns successfully"
Jul 2 00:30:21.104571 containerd[1454]: time="2024-07-02T00:30:21.104502401Z" level=info msg="shim disconnected" id=9101ecc9fa3a789fecb7d4685b3607aa719080c0f8b2b19dd45419b20e1d7b83 namespace=k8s.io
Jul 2 00:30:21.104571 containerd[1454]: time="2024-07-02T00:30:21.104563387Z" level=warning msg="cleaning up after shim disconnected" id=9101ecc9fa3a789fecb7d4685b3607aa719080c0f8b2b19dd45419b20e1d7b83 namespace=k8s.io
Jul 2 00:30:21.104571 containerd[1454]: time="2024-07-02T00:30:21.104573666Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:30:21.585152 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9101ecc9fa3a789fecb7d4685b3607aa719080c0f8b2b19dd45419b20e1d7b83-rootfs.mount: Deactivated successfully.
Jul 2 00:30:21.657656 kubelet[2512]: E0702 00:30:21.657433 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:30:21.662382 containerd[1454]: time="2024-07-02T00:30:21.662332142Z" level=info msg="CreateContainer within sandbox \"e7a2a2507971e5f87b12e04c4684e90ae32e8a327e4fb4c8d73fda9274e79841\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 2 00:30:21.676867 containerd[1454]: time="2024-07-02T00:30:21.676823322Z" level=info msg="CreateContainer within sandbox \"e7a2a2507971e5f87b12e04c4684e90ae32e8a327e4fb4c8d73fda9274e79841\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"000cdeafa623b77946bae590ed314b23567f59535b25e6766b08018a9ff4664d\""
Jul 2 00:30:21.677394 containerd[1454]: time="2024-07-02T00:30:21.677334811Z" level=info msg="StartContainer for \"000cdeafa623b77946bae590ed314b23567f59535b25e6766b08018a9ff4664d\""
Jul 2 00:30:21.708607 systemd[1]: Started cri-containerd-000cdeafa623b77946bae590ed314b23567f59535b25e6766b08018a9ff4664d.scope - libcontainer container 000cdeafa623b77946bae590ed314b23567f59535b25e6766b08018a9ff4664d.
Jul 2 00:30:21.757976 containerd[1454]: time="2024-07-02T00:30:21.757907533Z" level=info msg="StartContainer for \"000cdeafa623b77946bae590ed314b23567f59535b25e6766b08018a9ff4664d\" returns successfully"
Jul 2 00:30:22.122536 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jul 2 00:30:22.153499 kernel: jitterentropy: Initialization failed with host not compliant with requirements: 9
Jul 2 00:30:22.178494 kernel: DRBG: Continuing without Jitter RNG
Jul 2 00:30:22.661588 kubelet[2512]: E0702 00:30:22.661568 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:30:23.747499 kubelet[2512]: E0702 00:30:23.747451 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:30:24.221648 kubelet[2512]: E0702 00:30:24.221625 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:30:25.039451 systemd-networkd[1370]: lxc_health: Link UP
Jul 2 00:30:25.047770 systemd-networkd[1370]: lxc_health: Gained carrier
Jul 2 00:30:25.750878 kubelet[2512]: E0702 00:30:25.749687 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:30:25.764645 kubelet[2512]: I0702 00:30:25.764286 2512 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-4bf2q" podStartSLOduration=8.764252057 podCreationTimestamp="2024-07-02 00:30:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:30:22.671804788 +0000 UTC m=+88.536274345" watchObservedRunningTime="2024-07-02 00:30:25.764252057 +0000 UTC m=+91.628721614"
Jul 2 00:30:25.964842 systemd[1]: run-containerd-runc-k8s.io-000cdeafa623b77946bae590ed314b23567f59535b25e6766b08018a9ff4664d-runc.JqTkNF.mount: Deactivated successfully.
Jul 2 00:30:26.659649 systemd-networkd[1370]: lxc_health: Gained IPv6LL
Jul 2 00:30:26.668682 kubelet[2512]: E0702 00:30:26.668665 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:30:27.670494 kubelet[2512]: E0702 00:30:27.670448 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:30:28.061246 systemd[1]: run-containerd-runc-k8s.io-000cdeafa623b77946bae590ed314b23567f59535b25e6766b08018a9ff4664d-runc.E67bpx.mount: Deactivated successfully.
Jul 2 00:30:30.205556 sshd[4340]: pam_unix(sshd:session): session closed for user core
Jul 2 00:30:30.210077 systemd[1]: sshd@26-10.0.0.153:22-10.0.0.1:54418.service: Deactivated successfully.
Jul 2 00:30:30.212176 systemd[1]: session-27.scope: Deactivated successfully.
Jul 2 00:30:30.212913 systemd-logind[1436]: Session 27 logged out. Waiting for processes to exit.
Jul 2 00:30:30.214038 systemd-logind[1436]: Removed session 27.