Jul 2 06:54:06.049043 kernel: Linux version 6.6.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Mon Jul 1 22:47:51 -00 2024
Jul 2 06:54:06.049095 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b
Jul 2 06:54:06.049111 kernel: BIOS-provided physical RAM map:
Jul 2 06:54:06.049127 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 2 06:54:06.049137 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 2 06:54:06.049147 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 2 06:54:06.049159 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Jul 2 06:54:06.049170 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Jul 2 06:54:06.049180 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 2 06:54:06.049190 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jul 2 06:54:06.049201 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 2 06:54:06.049211 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 2 06:54:06.049227 kernel: NX (Execute Disable) protection: active
Jul 2 06:54:06.049237 kernel: APIC: Static calls initialized
Jul 2 06:54:06.049250 kernel: SMBIOS 2.8 present.
Jul 2 06:54:06.049262 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Jul 2 06:54:06.049273 kernel: Hypervisor detected: KVM
Jul 2 06:54:06.049289 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 2 06:54:06.049301 kernel: kvm-clock: using sched offset of 5079862002 cycles
Jul 2 06:54:06.049313 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 2 06:54:06.049325 kernel: tsc: Detected 2499.998 MHz processor
Jul 2 06:54:06.049337 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 2 06:54:06.049348 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 2 06:54:06.049360 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Jul 2 06:54:06.049371 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 2 06:54:06.049383 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 2 06:54:06.049399 kernel: Using GB pages for direct mapping
Jul 2 06:54:06.049410 kernel: ACPI: Early table checksum verification disabled
Jul 2 06:54:06.049783 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jul 2 06:54:06.049803 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 06:54:06.049815 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 06:54:06.049826 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 06:54:06.049838 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Jul 2 06:54:06.049849 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 06:54:06.049861 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 06:54:06.049880 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 06:54:06.049891 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001)
Jul 2 06:54:06.049903 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Jul 2 06:54:06.049914 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Jul 2 06:54:06.049926 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Jul 2 06:54:06.049943 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Jul 2 06:54:06.049955 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Jul 2 06:54:06.049972 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Jul 2 06:54:06.049984 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Jul 2 06:54:06.049996 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jul 2 06:54:06.050008 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jul 2 06:54:06.050020 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Jul 2 06:54:06.050032 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Jul 2 06:54:06.050044 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Jul 2 06:54:06.050056 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Jul 2 06:54:06.050072 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Jul 2 06:54:06.050084 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Jul 2 06:54:06.050096 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Jul 2 06:54:06.050108 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Jul 2 06:54:06.050119 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Jul 2 06:54:06.050131 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Jul 2 06:54:06.050143 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Jul 2 06:54:06.050155 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Jul 2 06:54:06.050166 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Jul 2 06:54:06.050183 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Jul 2 06:54:06.050195 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jul 2 06:54:06.050207 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jul 2 06:54:06.050219 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Jul 2 06:54:06.050231 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Jul 2 06:54:06.050243 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Jul 2 06:54:06.050256 kernel: Zone ranges:
Jul 2 06:54:06.050268 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 2 06:54:06.050280 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Jul 2 06:54:06.050296 kernel: Normal empty
Jul 2 06:54:06.050309 kernel: Movable zone start for each node
Jul 2 06:54:06.050321 kernel: Early memory node ranges
Jul 2 06:54:06.050332 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 2 06:54:06.050344 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Jul 2 06:54:06.050356 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Jul 2 06:54:06.050368 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 2 06:54:06.050380 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 2 06:54:06.050392 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Jul 2 06:54:06.050404 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 2 06:54:06.050433 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 2 06:54:06.050448 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 2 06:54:06.050460 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 2 06:54:06.050472 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 2 06:54:06.050485 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 2 06:54:06.050497 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 2 06:54:06.050509 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 2 06:54:06.050521 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 2 06:54:06.050533 kernel: TSC deadline timer available
Jul 2 06:54:06.050564 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Jul 2 06:54:06.050576 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 2 06:54:06.050588 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jul 2 06:54:06.050600 kernel: Booting paravirtualized kernel on KVM
Jul 2 06:54:06.050612 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 2 06:54:06.050625 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jul 2 06:54:06.050637 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u262144
Jul 2 06:54:06.050649 kernel: pcpu-alloc: s196904 r8192 d32472 u262144 alloc=1*2097152
Jul 2 06:54:06.050661 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jul 2 06:54:06.050678 kernel: kvm-guest: PV spinlocks enabled
Jul 2 06:54:06.050690 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 2 06:54:06.050703 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b
Jul 2 06:54:06.050716 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 06:54:06.050728 kernel: random: crng init done
Jul 2 06:54:06.050740 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 2 06:54:06.050752 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 2 06:54:06.050764 kernel: Fallback order for Node 0: 0
Jul 2 06:54:06.050781 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Jul 2 06:54:06.050793 kernel: Policy zone: DMA32
Jul 2 06:54:06.050805 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 06:54:06.050817 kernel: software IO TLB: area num 16.
Jul 2 06:54:06.050829 kernel: Memory: 1895384K/2096616K available (12288K kernel code, 2303K rwdata, 22640K rodata, 49328K init, 2016K bss, 200972K reserved, 0K cma-reserved)
Jul 2 06:54:06.050841 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jul 2 06:54:06.050853 kernel: Kernel/User page tables isolation: enabled
Jul 2 06:54:06.050865 kernel: ftrace: allocating 37658 entries in 148 pages
Jul 2 06:54:06.050877 kernel: ftrace: allocated 148 pages with 3 groups
Jul 2 06:54:06.050894 kernel: Dynamic Preempt: voluntary
Jul 2 06:54:06.050906 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 2 06:54:06.050919 kernel: rcu: RCU event tracing is enabled.
Jul 2 06:54:06.050931 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jul 2 06:54:06.050943 kernel: Trampoline variant of Tasks RCU enabled.
Jul 2 06:54:06.050967 kernel: Rude variant of Tasks RCU enabled.
Jul 2 06:54:06.050984 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 06:54:06.050997 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 06:54:06.051009 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jul 2 06:54:06.051022 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Jul 2 06:54:06.051034 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 2 06:54:06.051051 kernel: Console: colour VGA+ 80x25
Jul 2 06:54:06.051064 kernel: printk: console [tty0] enabled
Jul 2 06:54:06.051077 kernel: printk: console [ttyS0] enabled
Jul 2 06:54:06.051089 kernel: ACPI: Core revision 20230628
Jul 2 06:54:06.051102 kernel: APIC: Switch to symmetric I/O mode setup
Jul 2 06:54:06.051115 kernel: x2apic enabled
Jul 2 06:54:06.053484 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 2 06:54:06.053500 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Jul 2 06:54:06.053513 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Jul 2 06:54:06.053526 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 2 06:54:06.053551 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jul 2 06:54:06.053566 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jul 2 06:54:06.053578 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 2 06:54:06.053591 kernel: Spectre V2 : Mitigation: Retpolines
Jul 2 06:54:06.053603 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jul 2 06:54:06.053623 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jul 2 06:54:06.053636 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jul 2 06:54:06.053648 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 2 06:54:06.053661 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 2 06:54:06.053673 kernel: MDS: Mitigation: Clear CPU buffers
Jul 2 06:54:06.053686 kernel: MMIO Stale Data: Unknown: No mitigations
Jul 2 06:54:06.053698 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jul 2 06:54:06.053711 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 2 06:54:06.053724 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 2 06:54:06.053736 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 2 06:54:06.053749 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 2 06:54:06.053767 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jul 2 06:54:06.053780 kernel: Freeing SMP alternatives memory: 32K
Jul 2 06:54:06.053792 kernel: pid_max: default: 32768 minimum: 301
Jul 2 06:54:06.053805 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Jul 2 06:54:06.053817 kernel: SELinux: Initializing.
Jul 2 06:54:06.053830 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 2 06:54:06.053843 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 2 06:54:06.053856 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Jul 2 06:54:06.053868 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 06:54:06.053881 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 06:54:06.053894 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 06:54:06.053912 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Jul 2 06:54:06.053925 kernel: signal: max sigframe size: 1776
Jul 2 06:54:06.053938 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 06:54:06.053951 kernel: rcu: Max phase no-delay instances is 400.
Jul 2 06:54:06.053964 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 2 06:54:06.053976 kernel: smp: Bringing up secondary CPUs ...
Jul 2 06:54:06.053989 kernel: smpboot: x86: Booting SMP configuration:
Jul 2 06:54:06.054001 kernel: .... node #0, CPUs: #1
Jul 2 06:54:06.054014 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Jul 2 06:54:06.054031 kernel: smp: Brought up 1 node, 2 CPUs
Jul 2 06:54:06.054044 kernel: smpboot: Max logical packages: 16
Jul 2 06:54:06.054057 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Jul 2 06:54:06.054069 kernel: devtmpfs: initialized
Jul 2 06:54:06.054082 kernel: x86/mm: Memory block size: 128MB
Jul 2 06:54:06.054095 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 06:54:06.054107 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jul 2 06:54:06.054120 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 06:54:06.054133 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 06:54:06.054150 kernel: audit: initializing netlink subsys (disabled)
Jul 2 06:54:06.054163 kernel: audit: type=2000 audit(1719903243.839:1): state=initialized audit_enabled=0 res=1
Jul 2 06:54:06.054176 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 06:54:06.054188 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 2 06:54:06.054201 kernel: cpuidle: using governor menu
Jul 2 06:54:06.054214 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 06:54:06.054227 kernel: dca service started, version 1.12.1
Jul 2 06:54:06.054239 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jul 2 06:54:06.054252 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jul 2 06:54:06.054269 kernel: PCI: Using configuration type 1 for base access
Jul 2 06:54:06.054282 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 2 06:54:06.054295 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 2 06:54:06.054308 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 2 06:54:06.054321 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 06:54:06.054333 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 2 06:54:06.054346 kernel: ACPI: Added _OSI(Module Device)
Jul 2 06:54:06.054358 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 06:54:06.054371 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 06:54:06.054388 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 06:54:06.054401 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 2 06:54:06.054413 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 2 06:54:06.055490 kernel: ACPI: Interpreter enabled
Jul 2 06:54:06.055507 kernel: ACPI: PM: (supports S0 S5)
Jul 2 06:54:06.055520 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 2 06:54:06.055534 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 2 06:54:06.055559 kernel: PCI: Using E820 reservations for host bridge windows
Jul 2 06:54:06.055572 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 2 06:54:06.055593 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 2 06:54:06.055869 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 2 06:54:06.056054 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 2 06:54:06.056224 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 2 06:54:06.056243 kernel: PCI host bridge to bus 0000:00
Jul 2 06:54:06.057443 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 2 06:54:06.057636 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 2 06:54:06.057790 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 2 06:54:06.057961 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jul 2 06:54:06.058111 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 2 06:54:06.058262 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Jul 2 06:54:06.058414 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 2 06:54:06.059675 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jul 2 06:54:06.059871 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Jul 2 06:54:06.060042 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Jul 2 06:54:06.060210 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Jul 2 06:54:06.060378 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Jul 2 06:54:06.061640 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 2 06:54:06.061837 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jul 2 06:54:06.062046 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Jul 2 06:54:06.062233 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jul 2 06:54:06.062400 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Jul 2 06:54:06.062693 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jul 2 06:54:06.062861 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Jul 2 06:54:06.063035 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jul 2 06:54:06.063199 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Jul 2 06:54:06.063381 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jul 2 06:54:06.063592 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Jul 2 06:54:06.063768 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jul 2 06:54:06.063932 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Jul 2 06:54:06.064108 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jul 2 06:54:06.064274 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Jul 2 06:54:06.066524 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jul 2 06:54:06.066726 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Jul 2 06:54:06.066912 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jul 2 06:54:06.067080 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Jul 2 06:54:06.067246 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Jul 2 06:54:06.067550 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Jul 2 06:54:06.067758 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Jul 2 06:54:06.067954 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jul 2 06:54:06.068121 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jul 2 06:54:06.068285 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Jul 2 06:54:06.068482 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Jul 2 06:54:06.068674 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jul 2 06:54:06.068886 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 2 06:54:06.069074 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jul 2 06:54:06.069238 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Jul 2 06:54:06.069402 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Jul 2 06:54:06.069630 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jul 2 06:54:06.069795 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jul 2 06:54:06.069974 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Jul 2 06:54:06.070153 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Jul 2 06:54:06.070329 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jul 2 06:54:06.072558 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jul 2 06:54:06.072741 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jul 2 06:54:06.072927 kernel: pci_bus 0000:02: extended config space not accessible
Jul 2 06:54:06.073123 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Jul 2 06:54:06.073306 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Jul 2 06:54:06.073512 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jul 2 06:54:06.073705 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jul 2 06:54:06.073901 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Jul 2 06:54:06.074078 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Jul 2 06:54:06.074250 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jul 2 06:54:06.074417 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jul 2 06:54:06.074632 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jul 2 06:54:06.074836 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Jul 2 06:54:06.075012 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Jul 2 06:54:06.075183 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jul 2 06:54:06.075350 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jul 2 06:54:06.077613 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jul 2 06:54:06.077798 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jul 2 06:54:06.077971 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jul 2 06:54:06.078139 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jul 2 06:54:06.078316 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jul 2 06:54:06.080548 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jul 2 06:54:06.080730 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jul 2 06:54:06.080903 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jul 2 06:54:06.081068 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jul 2 06:54:06.081231 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jul 2 06:54:06.081398 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jul 2 06:54:06.081603 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jul 2 06:54:06.081780 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jul 2 06:54:06.081946 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jul 2 06:54:06.082109 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jul 2 06:54:06.082273 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jul 2 06:54:06.082293 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 2 06:54:06.082307 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 2 06:54:06.082320 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 2 06:54:06.082333 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 2 06:54:06.082353 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 2 06:54:06.082373 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 2 06:54:06.082386 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 2 06:54:06.082399 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 2 06:54:06.082412 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 2 06:54:06.084585 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 2 06:54:06.084604 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 2 06:54:06.084617 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 2 06:54:06.084630 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 2 06:54:06.084651 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 2 06:54:06.084664 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 2 06:54:06.084677 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 2 06:54:06.084690 kernel: iommu: Default domain type: Translated
Jul 2 06:54:06.084704 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 2 06:54:06.084716 kernel: PCI: Using ACPI for IRQ routing
Jul 2 06:54:06.084729 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 2 06:54:06.084742 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 2 06:54:06.084754 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Jul 2 06:54:06.084938 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 2 06:54:06.085110 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 2 06:54:06.085275 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 2 06:54:06.085295 kernel: vgaarb: loaded
Jul 2 06:54:06.085308 kernel: clocksource: Switched to clocksource kvm-clock
Jul 2 06:54:06.085321 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 06:54:06.085335 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 06:54:06.085348 kernel: pnp: PnP ACPI init
Jul 2 06:54:06.087577 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 2 06:54:06.087601 kernel: pnp: PnP ACPI: found 5 devices
Jul 2 06:54:06.087614 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 2 06:54:06.087627 kernel: NET: Registered PF_INET protocol family
Jul 2 06:54:06.087640 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 2 06:54:06.087653 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jul 2 06:54:06.087666 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 06:54:06.087679 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 2 06:54:06.087700 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jul 2 06:54:06.087713 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jul 2 06:54:06.087726 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 2 06:54:06.087739 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 2 06:54:06.087752 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 06:54:06.087765 kernel: NET: Registered PF_XDP protocol family
Jul 2 06:54:06.087929 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Jul 2 06:54:06.088097 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jul 2 06:54:06.088271 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jul 2 06:54:06.088461 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jul 2 06:54:06.088644 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jul 2 06:54:06.088810 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jul 2 06:54:06.088974 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jul 2 06:54:06.089138 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jul 2 06:54:06.089313 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jul 2 06:54:06.091521 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jul 2 06:54:06.091707 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jul 2 06:54:06.091873 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jul 2 06:54:06.092042 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jul 2 06:54:06.092213 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jul 2 06:54:06.092385 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jul 2 06:54:06.092588 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jul 2 06:54:06.092775 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jul 2 06:54:06.092978 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jul 2 06:54:06.093147 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jul 2 06:54:06.093315 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jul 2 06:54:06.093520 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jul 2 06:54:06.093701 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jul 2 06:54:06.093865 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jul 2 06:54:06.094030 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jul 2 06:54:06.094195 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jul 2 06:54:06.094371 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jul 2 06:54:06.096598 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jul 2 06:54:06.096768 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jul 2 06:54:06.096942 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jul 2 06:54:06.097112 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jul 2 06:54:06.097286 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jul 2 06:54:06.098490 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jul 2 06:54:06.098674 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jul 2 06:54:06.098840 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jul 2 06:54:06.099004 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jul 2 06:54:06.099168 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jul 2 06:54:06.099334 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jul 2 06:54:06.100556 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jul 2 06:54:06.100728 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jul 2 06:54:06.100891 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jul 2 06:54:06.101064 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jul 2 06:54:06.101231 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jul 2 06:54:06.101397 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jul 2 06:54:06.103616 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jul 2 06:54:06.103793 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jul 2 06:54:06.103958 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jul 2 06:54:06.104151 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jul 2 06:54:06.104315 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jul 2 06:54:06.104519 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jul 2 06:54:06.104699 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jul 2 06:54:06.104855 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 2 06:54:06.105010 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 2 06:54:06.105159 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 2 06:54:06.105334 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jul 2 06:54:06.105505 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 2 06:54:06.105669 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Jul 2 06:54:06.105839 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jul 2 06:54:06.105997 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Jul 2 06:54:06.106154 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jul 2 06:54:06.106323 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Jul 2 06:54:06.108549 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Jul 2 06:54:06.108715 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Jul 2 06:54:06.108877 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jul 2
06:54:06.109046 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Jul 2 06:54:06.109205 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Jul 2 06:54:06.109361 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Jul 2 06:54:06.109570 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Jul 2 06:54:06.109741 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Jul 2 06:54:06.109909 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Jul 2 06:54:06.110093 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Jul 2 06:54:06.110254 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Jul 2 06:54:06.110414 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Jul 2 06:54:06.112659 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Jul 2 06:54:06.112831 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Jul 2 06:54:06.112990 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Jul 2 06:54:06.113170 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Jul 2 06:54:06.113331 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Jul 2 06:54:06.115550 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Jul 2 06:54:06.115722 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Jul 2 06:54:06.115879 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Jul 2 06:54:06.116043 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Jul 2 06:54:06.116064 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jul 2 06:54:06.116079 kernel: PCI: CLS 0 bytes, default 64 Jul 2 06:54:06.116092 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jul 2 06:54:06.116106 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Jul 2 06:54:06.116120 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 2 
06:54:06.116134 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jul 2 06:54:06.116147 kernel: Initialise system trusted keyrings Jul 2 06:54:06.116161 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jul 2 06:54:06.116182 kernel: Key type asymmetric registered Jul 2 06:54:06.116196 kernel: Asymmetric key parser 'x509' registered Jul 2 06:54:06.116209 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jul 2 06:54:06.116223 kernel: io scheduler mq-deadline registered Jul 2 06:54:06.116236 kernel: io scheduler kyber registered Jul 2 06:54:06.116250 kernel: io scheduler bfq registered Jul 2 06:54:06.116418 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jul 2 06:54:06.116626 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jul 2 06:54:06.116793 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 2 06:54:06.116972 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jul 2 06:54:06.117139 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jul 2 06:54:06.117318 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 2 06:54:06.117520 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jul 2 06:54:06.117701 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jul 2 06:54:06.117868 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 2 06:54:06.118044 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jul 2 06:54:06.118209 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jul 2 06:54:06.118374 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ 
Jul 2 06:54:06.118589 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jul 2 06:54:06.118758 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jul 2 06:54:06.118927 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 2 06:54:06.119105 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jul 2 06:54:06.119272 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jul 2 06:54:06.119458 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 2 06:54:06.119642 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jul 2 06:54:06.119808 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jul 2 06:54:06.119976 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 2 06:54:06.120154 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jul 2 06:54:06.120322 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jul 2 06:54:06.120526 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 2 06:54:06.120560 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 2 06:54:06.120576 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jul 2 06:54:06.120589 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jul 2 06:54:06.120611 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 2 06:54:06.120625 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 2 06:54:06.120638 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 2 06:54:06.120652 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 2 06:54:06.120665 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 2 06:54:06.120852 kernel: rtc_cmos 00:03: 
RTC can wake from S4 Jul 2 06:54:06.120876 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 2 06:54:06.121030 kernel: rtc_cmos 00:03: registered as rtc0 Jul 2 06:54:06.121204 kernel: rtc_cmos 00:03: setting system clock to 2024-07-02T06:54:05 UTC (1719903245) Jul 2 06:54:06.121362 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jul 2 06:54:06.121382 kernel: intel_pstate: CPU model not supported Jul 2 06:54:06.121396 kernel: NET: Registered PF_INET6 protocol family Jul 2 06:54:06.121409 kernel: Segment Routing with IPv6 Jul 2 06:54:06.121472 kernel: In-situ OAM (IOAM) with IPv6 Jul 2 06:54:06.121488 kernel: NET: Registered PF_PACKET protocol family Jul 2 06:54:06.121502 kernel: Key type dns_resolver registered Jul 2 06:54:06.121515 kernel: IPI shorthand broadcast: enabled Jul 2 06:54:06.121547 kernel: sched_clock: Marking stable (1373004042, 236905660)->(1752896850, -142987148) Jul 2 06:54:06.121563 kernel: registered taskstats version 1 Jul 2 06:54:06.121577 kernel: Loading compiled-in X.509 certificates Jul 2 06:54:06.121590 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.36-flatcar: be1ede902d88b56c26cc000ff22391c78349d771' Jul 2 06:54:06.121603 kernel: Key type .fscrypt registered Jul 2 06:54:06.121616 kernel: Key type fscrypt-provisioning registered Jul 2 06:54:06.121630 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 2 06:54:06.121643 kernel: ima: Allocated hash algorithm: sha1 Jul 2 06:54:06.121662 kernel: ima: No architecture policies found Jul 2 06:54:06.121676 kernel: clk: Disabling unused clocks Jul 2 06:54:06.121690 kernel: Freeing unused kernel image (initmem) memory: 49328K Jul 2 06:54:06.121703 kernel: Write protecting the kernel read-only data: 36864k Jul 2 06:54:06.121717 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K Jul 2 06:54:06.121730 kernel: Run /init as init process Jul 2 06:54:06.121743 kernel: with arguments: Jul 2 06:54:06.121757 kernel: /init Jul 2 06:54:06.121770 kernel: with environment: Jul 2 06:54:06.121783 kernel: HOME=/ Jul 2 06:54:06.121801 kernel: TERM=linux Jul 2 06:54:06.121814 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 2 06:54:06.121838 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 2 06:54:06.121857 systemd[1]: Detected virtualization kvm. Jul 2 06:54:06.121872 systemd[1]: Detected architecture x86-64. Jul 2 06:54:06.121886 systemd[1]: Running in initrd. Jul 2 06:54:06.121900 systemd[1]: No hostname configured, using default hostname. Jul 2 06:54:06.121920 systemd[1]: Hostname set to . Jul 2 06:54:06.121935 systemd[1]: Initializing machine ID from VM UUID. Jul 2 06:54:06.121949 systemd[1]: Queued start job for default target initrd.target. Jul 2 06:54:06.121964 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 06:54:06.121979 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 06:54:06.121994 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Jul 2 06:54:06.122008 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 2 06:54:06.122023 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 2 06:54:06.122042 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 2 06:54:06.122058 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 2 06:54:06.122073 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 2 06:54:06.122087 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 06:54:06.122102 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 2 06:54:06.122116 systemd[1]: Reached target paths.target - Path Units. Jul 2 06:54:06.122130 systemd[1]: Reached target slices.target - Slice Units. Jul 2 06:54:06.122150 systemd[1]: Reached target swap.target - Swaps. Jul 2 06:54:06.122164 systemd[1]: Reached target timers.target - Timer Units. Jul 2 06:54:06.122179 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 2 06:54:06.122193 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 2 06:54:06.122208 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 2 06:54:06.122222 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 2 06:54:06.122237 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 2 06:54:06.122251 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 2 06:54:06.122265 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 06:54:06.122284 systemd[1]: Reached target sockets.target - Socket Units. Jul 2 06:54:06.122298 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Jul 2 06:54:06.122313 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 2 06:54:06.122327 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 2 06:54:06.122341 systemd[1]: Starting systemd-fsck-usr.service... Jul 2 06:54:06.122356 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 2 06:54:06.122375 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 2 06:54:06.122390 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 06:54:06.122408 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 2 06:54:06.122492 systemd-journald[201]: Collecting audit messages is disabled. Jul 2 06:54:06.122526 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 06:54:06.122554 systemd[1]: Finished systemd-fsck-usr.service. Jul 2 06:54:06.122577 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 2 06:54:06.122592 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 2 06:54:06.122606 kernel: Bridge firewalling registered Jul 2 06:54:06.122621 systemd-journald[201]: Journal started Jul 2 06:54:06.122651 systemd-journald[201]: Runtime Journal (/run/log/journal/7b1e6d62ec284f73bafd4a50b2241e95) is 4.7M, max 38.0M, 33.2M free. Jul 2 06:54:06.065904 systemd-modules-load[202]: Inserted module 'overlay' Jul 2 06:54:06.115055 systemd-modules-load[202]: Inserted module 'br_netfilter' Jul 2 06:54:06.170962 systemd[1]: Started systemd-journald.service - Journal Service. Jul 2 06:54:06.172307 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 2 06:54:06.173330 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jul 2 06:54:06.174893 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 2 06:54:06.183629 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 06:54:06.198159 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 06:54:06.203608 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 2 06:54:06.206140 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jul 2 06:54:06.219846 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 06:54:06.226865 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 06:54:06.237682 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 2 06:54:06.238875 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 06:54:06.242169 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 06:54:06.251684 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 06:54:06.255694 dracut-cmdline[233]: dracut-dracut-053 Jul 2 06:54:06.258509 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b Jul 2 06:54:06.298358 systemd-resolved[239]: Positive Trust Anchors: Jul 2 06:54:06.298389 systemd-resolved[239]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 06:54:06.298499 systemd-resolved[239]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jul 2 06:54:06.302764 systemd-resolved[239]: Defaulting to hostname 'linux'. Jul 2 06:54:06.304628 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 06:54:06.307363 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 06:54:06.385494 kernel: SCSI subsystem initialized Jul 2 06:54:06.398465 kernel: Loading iSCSI transport class v2.0-870. Jul 2 06:54:06.414460 kernel: iscsi: registered transport (tcp) Jul 2 06:54:06.444619 kernel: iscsi: registered transport (qla4xxx) Jul 2 06:54:06.444715 kernel: QLogic iSCSI HBA Driver Jul 2 06:54:06.500820 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 2 06:54:06.511874 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 2 06:54:06.547897 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jul 2 06:54:06.548121 kernel: device-mapper: uevent: version 1.0.3 Jul 2 06:54:06.550398 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 2 06:54:06.615508 kernel: raid6: sse2x4 gen() 7258 MB/s Jul 2 06:54:06.635612 kernel: raid6: sse2x2 gen() 5151 MB/s Jul 2 06:54:06.652209 kernel: raid6: sse2x1 gen() 5364 MB/s Jul 2 06:54:06.652330 kernel: raid6: using algorithm sse2x4 gen() 7258 MB/s Jul 2 06:54:06.671308 kernel: raid6: .... xor() 4888 MB/s, rmw enabled Jul 2 06:54:06.671461 kernel: raid6: using ssse3x2 recovery algorithm Jul 2 06:54:06.704473 kernel: xor: automatically using best checksumming function avx Jul 2 06:54:06.936474 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 2 06:54:06.956770 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 2 06:54:06.966827 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 06:54:06.995836 systemd-udevd[419]: Using default interface naming scheme 'v255'. Jul 2 06:54:07.003366 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 06:54:07.014718 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 2 06:54:07.037200 dracut-pre-trigger[427]: rd.md=0: removing MD RAID activation Jul 2 06:54:07.083934 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 2 06:54:07.089773 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 2 06:54:07.218700 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 06:54:07.229034 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 2 06:54:07.264897 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 2 06:54:07.267279 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jul 2 06:54:07.270155 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 06:54:07.271762 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 06:54:07.278686 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 2 06:54:07.310233 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 2 06:54:07.367912 kernel: cryptd: max_cpu_qlen set to 1000 Jul 2 06:54:07.385454 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Jul 2 06:54:07.432846 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jul 2 06:54:07.433062 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 2 06:54:07.433085 kernel: GPT:17805311 != 125829119 Jul 2 06:54:07.433118 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 2 06:54:07.433137 kernel: GPT:17805311 != 125829119 Jul 2 06:54:07.433154 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 2 06:54:07.433172 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 06:54:07.433190 kernel: AVX version of gcm_enc/dec engaged. Jul 2 06:54:07.433208 kernel: AES CTR mode by8 optimization enabled Jul 2 06:54:07.433226 kernel: ACPI: bus type USB registered Jul 2 06:54:07.433244 kernel: usbcore: registered new interface driver usbfs Jul 2 06:54:07.416074 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 06:54:07.446875 kernel: usbcore: registered new interface driver hub Jul 2 06:54:07.446919 kernel: usbcore: registered new device driver usb Jul 2 06:54:07.416277 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 06:54:07.444547 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 06:54:07.449535 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jul 2 06:54:07.449797 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 06:54:07.454249 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 06:54:07.472589 kernel: BTRFS: device fsid 2fd636b8-f582-46f8-bde2-15e56e3958c1 devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (463) Jul 2 06:54:07.469866 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 06:54:07.518949 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 2 06:54:07.649847 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jul 2 06:54:07.650269 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Jul 2 06:54:07.652575 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jul 2 06:54:07.652845 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jul 2 06:54:07.653072 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Jul 2 06:54:07.653276 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Jul 2 06:54:07.653502 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (476) Jul 2 06:54:07.653555 kernel: hub 1-0:1.0: USB hub found Jul 2 06:54:07.653812 kernel: hub 1-0:1.0: 4 ports detected Jul 2 06:54:07.654076 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jul 2 06:54:07.654381 kernel: hub 2-0:1.0: USB hub found Jul 2 06:54:07.654667 kernel: hub 2-0:1.0: 4 ports detected Jul 2 06:54:07.654888 kernel: libata version 3.00 loaded. 
Jul 2 06:54:07.654909 kernel: ahci 0000:00:1f.2: version 3.0 Jul 2 06:54:07.655125 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 2 06:54:07.655147 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jul 2 06:54:07.655389 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 2 06:54:07.655915 kernel: scsi host0: ahci Jul 2 06:54:07.656145 kernel: scsi host1: ahci Jul 2 06:54:07.656353 kernel: scsi host2: ahci Jul 2 06:54:07.656617 kernel: scsi host3: ahci Jul 2 06:54:07.656835 kernel: scsi host4: ahci Jul 2 06:54:07.657046 kernel: scsi host5: ahci Jul 2 06:54:07.657277 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 Jul 2 06:54:07.657300 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 Jul 2 06:54:07.657318 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 Jul 2 06:54:07.657336 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 Jul 2 06:54:07.657354 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 Jul 2 06:54:07.657371 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 Jul 2 06:54:07.657782 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 06:54:07.668487 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 2 06:54:07.675937 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 2 06:54:07.676887 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 2 06:54:07.697080 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 2 06:54:07.711767 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Jul 2 06:54:07.716355 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 06:54:07.723756 disk-uuid[562]: Primary Header is updated. Jul 2 06:54:07.723756 disk-uuid[562]: Secondary Entries is updated. Jul 2 06:54:07.723756 disk-uuid[562]: Secondary Header is updated. Jul 2 06:54:07.731450 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 06:54:07.738487 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 06:54:07.765831 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 06:54:07.789477 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jul 2 06:54:07.898452 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 2 06:54:07.898545 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 2 06:54:07.900480 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 2 06:54:07.902868 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jul 2 06:54:07.905464 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 2 06:54:07.905501 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 2 06:54:07.933456 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 2 06:54:07.941399 kernel: usbcore: registered new interface driver usbhid Jul 2 06:54:07.941522 kernel: usbhid: USB HID core driver Jul 2 06:54:07.950240 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jul 2 06:54:07.950335 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Jul 2 06:54:08.743488 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 06:54:08.744780 disk-uuid[563]: The operation has completed successfully. Jul 2 06:54:08.797313 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 06:54:08.797524 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Jul 2 06:54:08.822731 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 2 06:54:08.836984 sh[584]: Success Jul 2 06:54:08.856466 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Jul 2 06:54:08.931112 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 2 06:54:08.940776 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 2 06:54:08.945599 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 2 06:54:08.983594 kernel: BTRFS info (device dm-0): first mount of filesystem 2fd636b8-f582-46f8-bde2-15e56e3958c1 Jul 2 06:54:08.983713 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 2 06:54:08.986765 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 2 06:54:08.990544 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 2 06:54:08.990615 kernel: BTRFS info (device dm-0): using free space tree Jul 2 06:54:09.005367 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 2 06:54:09.007158 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 2 06:54:09.016766 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 2 06:54:09.020632 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 2 06:54:09.034913 kernel: BTRFS info (device vda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346 Jul 2 06:54:09.034978 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 06:54:09.036865 kernel: BTRFS info (device vda6): using free space tree Jul 2 06:54:09.043602 kernel: BTRFS info (device vda6): auto enabling async discard Jul 2 06:54:09.060223 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Jul 2 06:54:09.064638 kernel: BTRFS info (device vda6): last unmount of filesystem e2db191f-38b3-4d65-844a-7255916ec346 Jul 2 06:54:09.072179 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 2 06:54:09.081967 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 2 06:54:09.170721 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 2 06:54:09.179758 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 2 06:54:09.223250 systemd-networkd[767]: lo: Link UP Jul 2 06:54:09.224684 systemd-networkd[767]: lo: Gained carrier Jul 2 06:54:09.227263 systemd-networkd[767]: Enumeration completed Jul 2 06:54:09.228199 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 06:54:09.230384 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 06:54:09.230391 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 06:54:09.230916 systemd[1]: Reached target network.target - Network. Jul 2 06:54:09.234824 systemd-networkd[767]: eth0: Link UP Jul 2 06:54:09.234830 systemd-networkd[767]: eth0: Gained carrier Jul 2 06:54:09.234846 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 06:54:09.256878 ignition[680]: Ignition 2.18.0 Jul 2 06:54:09.256900 ignition[680]: Stage: fetch-offline Jul 2 06:54:09.257038 ignition[680]: no configs at "/usr/lib/ignition/base.d" Jul 2 06:54:09.262093 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jul 2 06:54:09.257059 ignition[680]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 2 06:54:09.257376 ignition[680]: parsed url from cmdline: "" Jul 2 06:54:09.265596 systemd-networkd[767]: eth0: DHCPv4 address 10.244.24.146/30, gateway 10.244.24.145 acquired from 10.244.24.145 Jul 2 06:54:09.257384 ignition[680]: no config URL provided Jul 2 06:54:09.257399 ignition[680]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 06:54:09.257416 ignition[680]: no config at "/usr/lib/ignition/user.ign" Jul 2 06:54:09.257447 ignition[680]: failed to fetch config: resource requires networking Jul 2 06:54:09.257930 ignition[680]: Ignition finished successfully Jul 2 06:54:09.273835 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jul 2 06:54:09.305112 ignition[776]: Ignition 2.18.0 Jul 2 06:54:09.305137 ignition[776]: Stage: fetch Jul 2 06:54:09.305643 ignition[776]: no configs at "/usr/lib/ignition/base.d" Jul 2 06:54:09.305687 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 2 06:54:09.305928 ignition[776]: parsed url from cmdline: "" Jul 2 06:54:09.305936 ignition[776]: no config URL provided Jul 2 06:54:09.305947 ignition[776]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 06:54:09.305964 ignition[776]: no config at "/usr/lib/ignition/user.ign" Jul 2 06:54:09.306222 ignition[776]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jul 2 06:54:09.306272 ignition[776]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
Jul 2 06:54:09.306474 ignition[776]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jul 2 06:54:09.330981 ignition[776]: GET result: OK
Jul 2 06:54:09.332214 ignition[776]: parsing config with SHA512: d2139a41db691ecca314bee44bae120ce179481e71f2e93f4cd0c11cd7c00746886dcc0e47d38bfae3bec5d3622cb929e609461e86b19f4eb8be08b181f7984a
Jul 2 06:54:09.340910 unknown[776]: fetched base config from "system"
Jul 2 06:54:09.342029 ignition[776]: fetch: fetch complete
Jul 2 06:54:09.340927 unknown[776]: fetched base config from "system"
Jul 2 06:54:09.342039 ignition[776]: fetch: fetch passed
Jul 2 06:54:09.340937 unknown[776]: fetched user config from "openstack"
Jul 2 06:54:09.342118 ignition[776]: Ignition finished successfully
Jul 2 06:54:09.344239 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 2 06:54:09.353183 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 2 06:54:09.397254 ignition[784]: Ignition 2.18.0
Jul 2 06:54:09.397275 ignition[784]: Stage: kargs
Jul 2 06:54:09.397652 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Jul 2 06:54:09.397674 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 2 06:54:09.401326 ignition[784]: kargs: kargs passed
Jul 2 06:54:09.403182 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 2 06:54:09.401469 ignition[784]: Ignition finished successfully
Jul 2 06:54:09.411800 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 2 06:54:09.435735 ignition[791]: Ignition 2.18.0
Jul 2 06:54:09.435761 ignition[791]: Stage: disks
Jul 2 06:54:09.436098 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Jul 2 06:54:09.436120 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 2 06:54:09.437687 ignition[791]: disks: disks passed
Jul 2 06:54:09.440362 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 2 06:54:09.437774 ignition[791]: Ignition finished successfully
Jul 2 06:54:09.442059 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 2 06:54:09.443604 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 2 06:54:09.447146 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 06:54:09.448208 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 06:54:09.449591 systemd[1]: Reached target basic.target - Basic System.
Jul 2 06:54:09.463670 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 2 06:54:09.482866 systemd-fsck[800]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jul 2 06:54:09.486365 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 2 06:54:09.495617 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 2 06:54:09.637448 kernel: EXT4-fs (vda9): mounted filesystem c5a17c06-b440-4aab-a0fa-5b60bb1d8586 r/w with ordered data mode. Quota mode: none.
Jul 2 06:54:09.638272 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 2 06:54:09.639681 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 2 06:54:09.647602 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 06:54:09.651036 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 2 06:54:09.652197 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 2 06:54:09.653665 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jul 2 06:54:09.658551 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 2 06:54:09.658604 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 06:54:09.671488 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (808)
Jul 2 06:54:09.675957 kernel: BTRFS info (device vda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 06:54:09.676003 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 06:54:09.675922 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 2 06:54:09.680465 kernel: BTRFS info (device vda6): using free space tree
Jul 2 06:54:09.689703 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 2 06:54:09.695450 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 06:54:09.698051 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 06:54:09.782597 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory
Jul 2 06:54:09.790610 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory
Jul 2 06:54:09.796857 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory
Jul 2 06:54:09.806087 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 2 06:54:09.920224 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 2 06:54:09.931991 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 2 06:54:09.935675 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 2 06:54:09.946502 kernel: BTRFS info (device vda6): last unmount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 06:54:09.979742 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 2 06:54:09.985103 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 2 06:54:09.990592 ignition[927]: INFO : Ignition 2.18.0
Jul 2 06:54:09.990592 ignition[927]: INFO : Stage: mount
Jul 2 06:54:09.993023 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 06:54:09.993023 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 2 06:54:09.993023 ignition[927]: INFO : mount: mount passed
Jul 2 06:54:09.993023 ignition[927]: INFO : Ignition finished successfully
Jul 2 06:54:09.993989 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 2 06:54:11.240314 systemd-networkd[767]: eth0: Gained IPv6LL
Jul 2 06:54:12.750080 systemd-networkd[767]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:624:24:19ff:fef4:1892/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:624:24:19ff:fef4:1892/64 assigned by NDisc.
Jul 2 06:54:12.750098 systemd-networkd[767]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Jul 2 06:54:16.857880 coreos-metadata[810]: Jul 02 06:54:16.857 WARN failed to locate config-drive, using the metadata service API instead
Jul 2 06:54:16.881826 coreos-metadata[810]: Jul 02 06:54:16.881 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jul 2 06:54:16.944696 coreos-metadata[810]: Jul 02 06:54:16.944 INFO Fetch successful
Jul 2 06:54:16.945774 coreos-metadata[810]: Jul 02 06:54:16.945 INFO wrote hostname srv-5ya4d.gb1.brightbox.com to /sysroot/etc/hostname
Jul 2 06:54:16.947559 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jul 2 06:54:16.947755 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jul 2 06:54:16.955563 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 2 06:54:16.973659 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 06:54:16.985466 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (944)
Jul 2 06:54:16.990481 kernel: BTRFS info (device vda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 06:54:16.990526 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 06:54:16.991770 kernel: BTRFS info (device vda6): using free space tree
Jul 2 06:54:16.998481 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 06:54:17.001839 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 06:54:17.030289 ignition[962]: INFO : Ignition 2.18.0
Jul 2 06:54:17.032059 ignition[962]: INFO : Stage: files
Jul 2 06:54:17.032059 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 06:54:17.032059 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 2 06:54:17.034576 ignition[962]: DEBUG : files: compiled without relabeling support, skipping
Jul 2 06:54:17.034576 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 2 06:54:17.034576 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 2 06:54:17.037951 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 2 06:54:17.039092 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 2 06:54:17.039092 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 2 06:54:17.038729 unknown[962]: wrote ssh authorized keys file for user: core
Jul 2 06:54:17.042011 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 06:54:17.042011 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 2 06:54:17.307713 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 2 06:54:17.586831 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 06:54:17.586831 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 06:54:17.589514 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 2 06:54:18.216163 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 2 06:54:18.642063 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 06:54:18.642063 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 2 06:54:18.650377 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 2 06:54:18.650377 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 06:54:18.650377 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 06:54:18.650377 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 06:54:18.650377 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 06:54:18.650377 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 06:54:18.650377 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 06:54:18.650377 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 06:54:18.650377 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 06:54:18.650377 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jul 2 06:54:18.650377 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jul 2 06:54:18.650377 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jul 2 06:54:18.650377 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1
Jul 2 06:54:19.162670 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 2 06:54:21.878765 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jul 2 06:54:21.878765 ignition[962]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 2 06:54:21.883321 ignition[962]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 06:54:21.883321 ignition[962]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 06:54:21.883321 ignition[962]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 2 06:54:21.883321 ignition[962]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jul 2 06:54:21.883321 ignition[962]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jul 2 06:54:21.883321 ignition[962]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 06:54:21.883321 ignition[962]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 06:54:21.883321 ignition[962]: INFO : files: files passed
Jul 2 06:54:21.883321 ignition[962]: INFO : Ignition finished successfully
Jul 2 06:54:21.885253 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 2 06:54:21.899854 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 2 06:54:21.912676 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 2 06:54:21.918739 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 2 06:54:21.920101 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 2 06:54:21.931752 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 06:54:21.931752 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 06:54:21.934819 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 06:54:21.937190 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 06:54:21.938846 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 2 06:54:21.950805 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 2 06:54:21.996590 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 2 06:54:21.996786 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 2 06:54:21.999012 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 2 06:54:22.000114 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 2 06:54:22.001808 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 2 06:54:22.007872 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 2 06:54:22.030222 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 06:54:22.036857 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 2 06:54:22.055649 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 2 06:54:22.057693 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 06:54:22.058732 systemd[1]: Stopped target timers.target - Timer Units.
Jul 2 06:54:22.060248 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 2 06:54:22.060486 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 06:54:22.062279 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 2 06:54:22.063311 systemd[1]: Stopped target basic.target - Basic System.
Jul 2 06:54:22.064837 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 2 06:54:22.066354 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 06:54:22.067968 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 2 06:54:22.069591 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 2 06:54:22.071224 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 06:54:22.072981 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 2 06:54:22.074519 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 2 06:54:22.076190 systemd[1]: Stopped target swap.target - Swaps.
Jul 2 06:54:22.077633 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 2 06:54:22.077866 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 06:54:22.079829 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 2 06:54:22.080878 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 06:54:22.082370 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 2 06:54:22.082783 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 06:54:22.084130 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 2 06:54:22.084407 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 2 06:54:22.086388 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 2 06:54:22.086595 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 06:54:22.088287 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 2 06:54:22.088473 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 2 06:54:22.101891 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 2 06:54:22.103482 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 2 06:54:22.103852 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 06:54:22.113897 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 2 06:54:22.114751 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 2 06:54:22.114979 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 06:54:22.129818 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 2 06:54:22.130033 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 06:54:22.145769 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 2 06:54:22.147972 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 2 06:54:22.148126 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 2 06:54:22.152512 ignition[1015]: INFO : Ignition 2.18.0
Jul 2 06:54:22.152512 ignition[1015]: INFO : Stage: umount
Jul 2 06:54:22.152512 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 06:54:22.152512 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 2 06:54:22.157963 ignition[1015]: INFO : umount: umount passed
Jul 2 06:54:22.157963 ignition[1015]: INFO : Ignition finished successfully
Jul 2 06:54:22.155705 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 2 06:54:22.155906 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 2 06:54:22.157762 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 2 06:54:22.157912 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 2 06:54:22.161997 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 2 06:54:22.162095 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 2 06:54:22.163799 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 2 06:54:22.163928 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 2 06:54:22.165337 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 2 06:54:22.165410 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 2 06:54:22.166872 systemd[1]: Stopped target network.target - Network.
Jul 2 06:54:22.168243 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 2 06:54:22.168353 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 06:54:22.169865 systemd[1]: Stopped target paths.target - Path Units.
Jul 2 06:54:22.170562 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 2 06:54:22.170736 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 06:54:22.172311 systemd[1]: Stopped target slices.target - Slice Units.
Jul 2 06:54:22.173754 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 2 06:54:22.175226 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 2 06:54:22.175374 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 06:54:22.176628 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 2 06:54:22.176693 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 06:54:22.178218 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 2 06:54:22.178329 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 2 06:54:22.180911 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 2 06:54:22.180990 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 2 06:54:22.182624 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 2 06:54:22.182759 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 2 06:54:22.184576 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 2 06:54:22.186693 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 2 06:54:22.189714 systemd-networkd[767]: eth0: DHCPv6 lease lost
Jul 2 06:54:22.193973 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 2 06:54:22.194519 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 2 06:54:22.198094 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 2 06:54:22.198575 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 2 06:54:22.205111 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 2 06:54:22.205449 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 06:54:22.211608 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 2 06:54:22.212341 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 2 06:54:22.212420 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 06:54:22.213346 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 06:54:22.213413 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 2 06:54:22.215295 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 2 06:54:22.215371 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 2 06:54:22.217504 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 2 06:54:22.217574 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 06:54:22.220984 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 06:54:22.229987 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 2 06:54:22.230226 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 06:54:22.235118 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 2 06:54:22.236120 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 2 06:54:22.237630 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 2 06:54:22.237692 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 06:54:22.238731 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 2 06:54:22.238802 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 06:54:22.239844 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 2 06:54:22.239911 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 2 06:54:22.242441 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 06:54:22.242542 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 06:54:22.249740 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 2 06:54:22.250648 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 2 06:54:22.250749 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 06:54:22.252988 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 06:54:22.253086 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 06:54:22.254557 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 2 06:54:22.254753 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 2 06:54:22.271795 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 2 06:54:22.271976 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 2 06:54:22.274665 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 2 06:54:22.280742 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 2 06:54:22.302477 systemd[1]: Switching root.
Jul 2 06:54:22.342214 systemd-journald[201]: Journal stopped
Jul 2 06:54:23.918590 systemd-journald[201]: Received SIGTERM from PID 1 (systemd).
Jul 2 06:54:23.918737 kernel: SELinux: policy capability network_peer_controls=1
Jul 2 06:54:23.918780 kernel: SELinux: policy capability open_perms=1
Jul 2 06:54:23.918803 kernel: SELinux: policy capability extended_socket_class=1
Jul 2 06:54:23.918824 kernel: SELinux: policy capability always_check_network=0
Jul 2 06:54:23.918853 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 2 06:54:23.918875 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 2 06:54:23.918902 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 2 06:54:23.918928 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 2 06:54:23.918949 kernel: audit: type=1403 audit(1719903262.579:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 2 06:54:23.918984 systemd[1]: Successfully loaded SELinux policy in 56.057ms.
Jul 2 06:54:23.919021 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.742ms.
Jul 2 06:54:23.919052 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 06:54:23.919082 systemd[1]: Detected virtualization kvm.
Jul 2 06:54:23.919105 systemd[1]: Detected architecture x86-64.
Jul 2 06:54:23.919133 systemd[1]: Detected first boot.
Jul 2 06:54:23.919162 systemd[1]: Hostname set to .
Jul 2 06:54:23.919185 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 06:54:23.919208 zram_generator::config[1057]: No configuration found.
Jul 2 06:54:23.919244 systemd[1]: Populated /etc with preset unit settings.
Jul 2 06:54:23.919270 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 2 06:54:23.919307 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 2 06:54:23.919331 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 2 06:54:23.919354 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 2 06:54:23.919376 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 2 06:54:23.919397 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 2 06:54:23.919417 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 2 06:54:23.919478 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 2 06:54:23.919502 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 2 06:54:23.919541 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 2 06:54:23.919564 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 2 06:54:23.919593 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 06:54:23.919617 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 06:54:23.919638 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 2 06:54:23.919666 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 2 06:54:23.919690 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 2 06:54:23.919712 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 06:54:23.919734 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 2 06:54:23.919762 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 06:54:23.919786 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 2 06:54:23.919808 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 2 06:54:23.919829 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 2 06:54:23.919865 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 2 06:54:23.919912 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 06:54:23.919947 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 06:54:23.919970 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 06:54:23.919992 systemd[1]: Reached target swap.target - Swaps.
Jul 2 06:54:23.920013 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 2 06:54:23.920034 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 2 06:54:23.920056 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 06:54:23.920091 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 06:54:23.920119 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 06:54:23.920140 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 2 06:54:23.920162 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 2 06:54:23.920183 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 2 06:54:23.920205 systemd[1]: Mounting media.mount - External Media Directory...
Jul 2 06:54:23.920226 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 06:54:23.920262 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 2 06:54:23.920284 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 2 06:54:23.920311 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 2 06:54:23.920334 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 2 06:54:23.920356 systemd[1]: Reached target machines.target - Containers.
Jul 2 06:54:23.920376 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 2 06:54:23.920398 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 06:54:23.920441 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 06:54:23.920466 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 2 06:54:23.920488 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 06:54:23.920509 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 06:54:23.920538 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 06:54:23.920560 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 2 06:54:23.920581 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 06:54:23.920603 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 2 06:54:23.920651 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 2 06:54:23.920675 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 2 06:54:23.920695 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 2 06:54:23.920730 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 2 06:54:23.920757 kernel: fuse: init (API version 7.39)
Jul 2 06:54:23.920820 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 06:54:23.920851 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 06:54:23.920879 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 2 06:54:23.920911 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 2 06:54:23.920934 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 06:54:23.920957 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 2 06:54:23.920978 systemd[1]: Stopped verity-setup.service.
Jul 2 06:54:23.920999 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 06:54:23.921029 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 2 06:54:23.921052 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 2 06:54:23.921072 kernel: ACPI: bus type drm_connector registered
Jul 2 06:54:23.921093 systemd[1]: Mounted media.mount - External Media Directory.
Jul 2 06:54:23.921114 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 2 06:54:23.921140 kernel: loop: module loaded
Jul 2 06:54:23.921161 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 2 06:54:23.921184 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 2 06:54:23.921249 systemd-journald[1149]: Collecting audit messages is disabled.
Jul 2 06:54:23.921290 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 2 06:54:23.921313 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 06:54:23.921334 systemd-journald[1149]: Journal started
Jul 2 06:54:23.921390 systemd-journald[1149]: Runtime Journal (/run/log/journal/7b1e6d62ec284f73bafd4a50b2241e95) is 4.7M, max 38.0M, 33.2M free.
Jul 2 06:54:23.460168 systemd[1]: Queued start job for default target multi-user.target.
Jul 2 06:54:23.483837 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 2 06:54:23.484642 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 2 06:54:23.927450 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 06:54:23.928637 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 2 06:54:23.928905 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 2 06:54:23.930102 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 06:54:23.930339 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 06:54:23.931707 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 06:54:23.931929 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 06:54:23.933110 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 06:54:23.933392 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 06:54:23.934662 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 2 06:54:23.934889 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 2 06:54:23.936318 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 06:54:23.936597 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 06:54:23.937808 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 06:54:23.939026 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 2 06:54:23.940267 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 2 06:54:23.956863 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 2 06:54:23.969125 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 2 06:54:23.974598 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 2 06:54:23.977321 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 2 06:54:23.977389 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 06:54:23.979694 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 2 06:54:23.988805 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 2 06:54:23.999791 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 2 06:54:24.002859 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 06:54:24.013774 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 2 06:54:24.018036 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 2 06:54:24.020547 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 06:54:24.022688 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 2 06:54:24.023549 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 06:54:24.027660 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 06:54:24.037703 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 2 06:54:24.045656 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 2 06:54:24.051248 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 2 06:54:24.054731 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 2 06:54:24.056003 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 2 06:54:24.107174 systemd-journald[1149]: Time spent on flushing to /var/log/journal/7b1e6d62ec284f73bafd4a50b2241e95 is 39.898ms for 1141 entries.
Jul 2 06:54:24.107174 systemd-journald[1149]: System Journal (/var/log/journal/7b1e6d62ec284f73bafd4a50b2241e95) is 8.0M, max 584.8M, 576.8M free.
Jul 2 06:54:24.184219 systemd-journald[1149]: Received client request to flush runtime journal.
Jul 2 06:54:24.184310 kernel: loop0: detected capacity change from 0 to 139904
Jul 2 06:54:24.184340 kernel: block loop0: the capability attribute has been deprecated.
Jul 2 06:54:24.185162 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 2 06:54:24.166826 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 2 06:54:24.168043 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 2 06:54:24.177720 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 2 06:54:24.193946 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 2 06:54:24.229455 kernel: loop1: detected capacity change from 0 to 8
Jul 2 06:54:24.240033 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 06:54:24.249279 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 2 06:54:24.251678 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 2 06:54:24.269453 kernel: loop2: detected capacity change from 0 to 209816
Jul 2 06:54:24.291827 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 2 06:54:24.304697 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 06:54:24.317549 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 06:54:24.333458 kernel: loop3: detected capacity change from 0 to 80568
Jul 2 06:54:24.335711 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 2 06:54:24.406654 systemd-tmpfiles[1208]: ACLs are not supported, ignoring.
Jul 2 06:54:24.411456 kernel: loop4: detected capacity change from 0 to 139904
Jul 2 06:54:24.409979 systemd-tmpfiles[1208]: ACLs are not supported, ignoring.
Jul 2 06:54:24.423283 udevadm[1210]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jul 2 06:54:24.436375 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 06:54:24.445366 kernel: loop5: detected capacity change from 0 to 8
Jul 2 06:54:24.454777 kernel: loop6: detected capacity change from 0 to 209816
Jul 2 06:54:24.482484 kernel: loop7: detected capacity change from 0 to 80568
Jul 2 06:54:24.497743 (sd-merge)[1213]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Jul 2 06:54:24.498688 (sd-merge)[1213]: Merged extensions into '/usr'.
Jul 2 06:54:24.508730 systemd[1]: Reloading requested from client PID 1189 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 2 06:54:24.508926 systemd[1]: Reloading...
Jul 2 06:54:24.652505 zram_generator::config[1235]: No configuration found.
Jul 2 06:54:24.854342 ldconfig[1184]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 2 06:54:24.959702 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 06:54:25.032066 systemd[1]: Reloading finished in 521 ms.
Jul 2 06:54:25.079457 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 2 06:54:25.084047 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 2 06:54:25.098916 systemd[1]: Starting ensure-sysext.service...
Jul 2 06:54:25.108820 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 06:54:25.134738 systemd[1]: Reloading requested from client PID 1294 ('systemctl') (unit ensure-sysext.service)...
Jul 2 06:54:25.134770 systemd[1]: Reloading...
Jul 2 06:54:25.200862 systemd-tmpfiles[1295]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 2 06:54:25.202518 systemd-tmpfiles[1295]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 2 06:54:25.204157 systemd-tmpfiles[1295]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 2 06:54:25.207690 systemd-tmpfiles[1295]: ACLs are not supported, ignoring.
Jul 2 06:54:25.207816 systemd-tmpfiles[1295]: ACLs are not supported, ignoring.
Jul 2 06:54:25.213911 systemd-tmpfiles[1295]: Detected autofs mount point /boot during canonicalization of boot.
Jul 2 06:54:25.213932 systemd-tmpfiles[1295]: Skipping /boot
Jul 2 06:54:25.236200 systemd-tmpfiles[1295]: Detected autofs mount point /boot during canonicalization of boot.
Jul 2 06:54:25.236233 systemd-tmpfiles[1295]: Skipping /boot
Jul 2 06:54:25.258463 zram_generator::config[1320]: No configuration found.
Jul 2 06:54:25.459760 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 06:54:25.529062 systemd[1]: Reloading finished in 393 ms.
Jul 2 06:54:25.550245 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 2 06:54:25.556062 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 06:54:25.571622 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 2 06:54:25.577691 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 2 06:54:25.581624 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 2 06:54:25.587654 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 06:54:25.592099 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 06:54:25.595739 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 2 06:54:25.602327 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 06:54:25.603684 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 06:54:25.611749 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 06:54:25.616748 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 06:54:25.620740 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 06:54:25.622629 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 06:54:25.622802 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 06:54:25.625500 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 06:54:25.625767 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 06:54:25.625984 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 06:54:25.626111 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 06:54:25.634755 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 06:54:25.635064 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 06:54:25.641792 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 06:54:25.643682 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 06:54:25.643859 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 06:54:25.663512 systemd[1]: Finished ensure-sysext.service.
Jul 2 06:54:25.669147 systemd-udevd[1383]: Using default interface naming scheme 'v255'.
Jul 2 06:54:25.674654 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 2 06:54:25.681814 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 06:54:25.682334 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 06:54:25.702387 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 2 06:54:25.705768 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 2 06:54:25.708578 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 06:54:25.715384 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 06:54:25.715678 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 06:54:25.754677 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 06:54:25.755514 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 06:54:25.769730 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 06:54:25.771262 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 06:54:25.773293 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 06:54:25.776996 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 06:54:25.777251 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 06:54:25.793471 augenrules[1427]: No rules
Jul 2 06:54:25.798530 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 2 06:54:25.800518 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 2 06:54:25.802882 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 2 06:54:25.828692 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 2 06:54:25.830517 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 2 06:54:25.892102 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 2 06:54:25.892454 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1412)
Jul 2 06:54:25.898390 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 2 06:54:25.932460 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1410)
Jul 2 06:54:25.963419 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 2 06:54:26.061513 kernel: mousedev: PS/2 mouse device common for all mice
Jul 2 06:54:26.123351 systemd-networkd[1415]: lo: Link UP
Jul 2 06:54:26.127496 systemd-networkd[1415]: lo: Gained carrier
Jul 2 06:54:26.132317 systemd-networkd[1415]: Enumeration completed
Jul 2 06:54:26.132615 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 06:54:26.140065 systemd-networkd[1415]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 06:54:26.142069 systemd-networkd[1415]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 06:54:26.143670 systemd-networkd[1415]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 06:54:26.144337 systemd-networkd[1415]: eth0: Link UP
Jul 2 06:54:26.144568 systemd-networkd[1415]: eth0: Gained carrier
Jul 2 06:54:26.144668 systemd-networkd[1415]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 06:54:26.148132 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 2 06:54:26.154601 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 2 06:54:26.162547 systemd-networkd[1415]: eth0: DHCPv4 address 10.244.24.146/30, gateway 10.244.24.145 acquired from 10.244.24.145
Jul 2 06:54:26.163673 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 2 06:54:26.206103 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 2 06:54:26.224646 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 2 06:54:26.227834 systemd[1]: Reached target time-set.target - System Time Set.
Jul 2 06:54:26.232400 systemd-resolved[1382]: Positive Trust Anchors:
Jul 2 06:54:26.232442 systemd-resolved[1382]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 06:54:26.232489 systemd-resolved[1382]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 06:54:26.241903 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jul 2 06:54:26.240802 systemd-resolved[1382]: Using system hostname 'srv-5ya4d.gb1.brightbox.com'.
Jul 2 06:54:26.244030 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 06:54:26.244983 systemd[1]: Reached target network.target - Network.
Jul 2 06:54:26.245690 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 06:54:26.253492 kernel: ACPI: button: Power Button [PWRF]
Jul 2 06:54:26.312503 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Jul 2 06:54:26.317446 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jul 2 06:54:26.322890 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jul 2 06:54:26.325544 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jul 2 06:54:26.338924 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 06:54:26.509531 systemd-timesyncd[1396]: Contacted time server 85.199.214.102:123 (0.flatcar.pool.ntp.org).
Jul 2 06:54:26.509854 systemd-timesyncd[1396]: Initial clock synchronization to Tue 2024-07-02 06:54:26.471138 UTC.
Jul 2 06:54:26.529352 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 06:54:26.594389 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 2 06:54:26.602731 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 2 06:54:26.643532 lvm[1465]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 06:54:26.680651 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 2 06:54:26.682848 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 06:54:26.683677 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 06:54:26.684750 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 2 06:54:26.685693 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 2 06:54:26.686964 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 2 06:54:26.687910 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 2 06:54:26.688739 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 2 06:54:26.689530 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 2 06:54:26.689592 systemd[1]: Reached target paths.target - Path Units.
Jul 2 06:54:26.690255 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 06:54:26.692663 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 2 06:54:26.695543 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 2 06:54:26.700675 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 2 06:54:26.703931 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 2 06:54:26.705578 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 2 06:54:26.706494 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 06:54:26.707166 systemd[1]: Reached target basic.target - Basic System.
Jul 2 06:54:26.707931 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 2 06:54:26.707994 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 2 06:54:26.715666 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 2 06:54:26.721811 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 2 06:54:26.727478 lvm[1469]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 06:54:26.726664 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 2 06:54:26.735757 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 2 06:54:26.741667 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 2 06:54:26.742468 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 2 06:54:26.746657 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 2 06:54:26.751733 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 2 06:54:26.754922 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 2 06:54:26.758128 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 2 06:54:26.768754 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 2 06:54:26.770511 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 2 06:54:26.771403 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 2 06:54:26.781642 systemd[1]: Starting update-engine.service - Update Engine...
Jul 2 06:54:26.793621 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 2 06:54:26.811274 jq[1473]: false
Jul 2 06:54:26.813996 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 2 06:54:26.814452 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 2 06:54:26.835692 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 2 06:54:26.850698 jq[1483]: true
Jul 2 06:54:26.875896 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 2 06:54:26.876238 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 2 06:54:26.881062 dbus-daemon[1472]: [system] SELinux support is enabled
Jul 2 06:54:26.881682 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 2 06:54:26.886617 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 2 06:54:26.886668 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 2 06:54:26.890042 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 2 06:54:26.890089 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 2 06:54:26.909494 jq[1493]: true
Jul 2 06:54:26.912637 dbus-daemon[1472]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1415 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jul 2 06:54:26.918744 tar[1487]: linux-amd64/helm
Jul 2 06:54:26.929723 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jul 2 06:54:26.947023 update_engine[1481]: I0702 06:54:26.946868 1481 main.cc:92] Flatcar Update Engine starting
Jul 2 06:54:26.951060 extend-filesystems[1474]: Found loop4
Jul 2 06:54:26.951060 extend-filesystems[1474]: Found loop5
Jul 2 06:54:26.951060 extend-filesystems[1474]: Found loop6
Jul 2 06:54:26.951060 extend-filesystems[1474]: Found loop7
Jul 2 06:54:26.951060 extend-filesystems[1474]: Found vda
Jul 2 06:54:26.951060 extend-filesystems[1474]: Found vda1
Jul 2 06:54:26.951060 extend-filesystems[1474]: Found vda2
Jul 2 06:54:26.951060 extend-filesystems[1474]: Found vda3
Jul 2 06:54:26.951060 extend-filesystems[1474]: Found usr
Jul 2 06:54:26.951060 extend-filesystems[1474]: Found vda4
Jul 2 06:54:26.951060 extend-filesystems[1474]: Found vda6
Jul 2 06:54:26.951060 extend-filesystems[1474]: Found vda7
Jul 2 06:54:26.951060 extend-filesystems[1474]: Found vda9
Jul 2 06:54:26.951060 extend-filesystems[1474]: Checking size of /dev/vda9
Jul 2 06:54:27.067562 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Jul 2 06:54:27.071508 update_engine[1481]: I0702 06:54:26.968540 1481 update_check_scheduler.cc:74] Next update check in 11m33s
Jul 2 06:54:26.965411 systemd[1]: Started update-engine.service - Update Engine.
Jul 2 06:54:27.072405 extend-filesystems[1474]: Resized partition /dev/vda9
Jul 2 06:54:26.976800 systemd[1]: motdgen.service: Deactivated successfully.
Jul 2 06:54:27.086354 extend-filesystems[1516]: resize2fs 1.47.0 (5-Feb-2023)
Jul 2 06:54:26.977962 (ntainerd)[1497]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 2 06:54:26.978211 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 2 06:54:27.015717 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 2 06:54:27.129022 systemd-logind[1479]: Watching system buttons on /dev/input/event2 (Power Button)
Jul 2 06:54:27.205560 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1409)
Jul 2 06:54:27.129205 systemd-logind[1479]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 2 06:54:27.134450 systemd-logind[1479]: New seat seat0.
Jul 2 06:54:27.192347 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 2 06:54:27.256132 bash[1529]: Updated "/home/core/.ssh/authorized_keys"
Jul 2 06:54:27.259860 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 2 06:54:27.304030 systemd[1]: Starting sshkeys.service...
Jul 2 06:54:27.442287 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jul 2 06:54:27.452324 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jul 2 06:54:27.469920 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Jul 2 06:54:27.488624 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jul 2 06:54:27.488322 dbus-daemon[1472]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jul 2 06:54:27.496712 dbus-daemon[1472]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1506 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jul 2 06:54:27.505750 extend-filesystems[1516]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 2 06:54:27.505750 extend-filesystems[1516]: old_desc_blocks = 1, new_desc_blocks = 8
Jul 2 06:54:27.505750 extend-filesystems[1516]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Jul 2 06:54:27.522543 extend-filesystems[1474]: Resized filesystem in /dev/vda9
Jul 2 06:54:27.510923 systemd[1]: Starting polkit.service - Authorization Manager...
Jul 2 06:54:27.521211 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 2 06:54:27.521561 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 2 06:54:27.568062 polkitd[1546]: Started polkitd version 121
Jul 2 06:54:27.573634 locksmithd[1513]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 2 06:54:27.593721 polkitd[1546]: Loading rules from directory /etc/polkit-1/rules.d
Jul 2 06:54:27.593882 polkitd[1546]: Loading rules from directory /usr/share/polkit-1/rules.d
Jul 2 06:54:27.599185 polkitd[1546]: Finished loading, compiling and executing 2 rules
Jul 2 06:54:27.602311 dbus-daemon[1472]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jul 2 06:54:27.602693 systemd[1]: Started polkit.service - Authorization Manager.
Jul 2 06:54:27.607315 polkitd[1546]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jul 2 06:54:27.647231 systemd-hostnamed[1506]: Hostname set to (static)
Jul 2 06:54:27.664479 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 2 06:54:27.720516 containerd[1497]: time="2024-07-02T06:54:27.720220395Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17
Jul 2 06:54:27.761153 sshd_keygen[1507]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 2 06:54:27.781467 containerd[1497]: time="2024-07-02T06:54:27.781103768Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 2 06:54:27.781467 containerd[1497]: time="2024-07-02T06:54:27.781190212Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 2 06:54:27.791477 containerd[1497]: time="2024-07-02T06:54:27.790184152Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.36-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 2 06:54:27.791477 containerd[1497]: time="2024-07-02T06:54:27.790252600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 2 06:54:27.791477 containerd[1497]: time="2024-07-02T06:54:27.790678529Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 06:54:27.791477 containerd[1497]: time="2024-07-02T06:54:27.790708780Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 2 06:54:27.791477 containerd[1497]: time="2024-07-02T06:54:27.790870981Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 2 06:54:27.791477 containerd[1497]: time="2024-07-02T06:54:27.791001266Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 06:54:27.791477 containerd[1497]: time="2024-07-02T06:54:27.791025085Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 2 06:54:27.791477 containerd[1497]: time="2024-07-02T06:54:27.791184795Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 2 06:54:27.796092 containerd[1497]: time="2024-07-02T06:54:27.796058053Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 2 06:54:27.796405 containerd[1497]: time="2024-07-02T06:54:27.796372333Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 2 06:54:27.796526 containerd[1497]: time="2024-07-02T06:54:27.796501457Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 2 06:54:27.797677 containerd[1497]: time="2024-07-02T06:54:27.797191112Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 06:54:27.797677 containerd[1497]: time="2024-07-02T06:54:27.797228390Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 2 06:54:27.797677 containerd[1497]: time="2024-07-02T06:54:27.797339436Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 2 06:54:27.797677 containerd[1497]: time="2024-07-02T06:54:27.797361903Z" level=info msg="metadata content store policy set" policy=shared
Jul 2 06:54:27.804853 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 2 06:54:27.807969 containerd[1497]: time="2024-07-02T06:54:27.807903283Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 2 06:54:27.807969 containerd[1497]: time="2024-07-02T06:54:27.807964857Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 2 06:54:27.808313 containerd[1497]: time="2024-07-02T06:54:27.807994220Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 2 06:54:27.808313 containerd[1497]: time="2024-07-02T06:54:27.808072339Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 2 06:54:27.808313 containerd[1497]: time="2024-07-02T06:54:27.808099381Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 2 06:54:27.808313 containerd[1497]: time="2024-07-02T06:54:27.808119297Z" level=info msg="NRI interface is disabled by configuration."
Jul 2 06:54:27.808313 containerd[1497]: time="2024-07-02T06:54:27.808140461Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 2 06:54:27.808548 containerd[1497]: time="2024-07-02T06:54:27.808374969Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 2 06:54:27.808548 containerd[1497]: time="2024-07-02T06:54:27.808403723Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 2 06:54:27.808548 containerd[1497]: time="2024-07-02T06:54:27.808451852Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 2 06:54:27.808548 containerd[1497]: time="2024-07-02T06:54:27.808478809Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 2 06:54:27.808548 containerd[1497]: time="2024-07-02T06:54:27.808502023Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 2 06:54:27.808548 containerd[1497]: time="2024-07-02T06:54:27.808529451Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 2 06:54:27.808759 containerd[1497]: time="2024-07-02T06:54:27.808560401Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 2 06:54:27.808759 containerd[1497]: time="2024-07-02T06:54:27.808603237Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 2 06:54:27.808759 containerd[1497]: time="2024-07-02T06:54:27.808630397Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 2 06:54:27.808759 containerd[1497]: time="2024-07-02T06:54:27.808655113Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 2 06:54:27.808759 containerd[1497]: time="2024-07-02T06:54:27.808676725Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 2 06:54:27.808759 containerd[1497]: time="2024-07-02T06:54:27.808699179Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 2 06:54:27.808979 containerd[1497]: time="2024-07-02T06:54:27.808875733Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 2 06:54:27.810159 containerd[1497]: time="2024-07-02T06:54:27.809198993Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 2 06:54:27.810159 containerd[1497]: time="2024-07-02T06:54:27.809253860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 2 06:54:27.810159 containerd[1497]: time="2024-07-02T06:54:27.809296023Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 2 06:54:27.810159 containerd[1497]: time="2024-07-02T06:54:27.809336946Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 2 06:54:27.816181 containerd[1497]: time="2024-07-02T06:54:27.815609394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 2 06:54:27.816181 containerd[1497]: time="2024-07-02T06:54:27.815651487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 2 06:54:27.816181 containerd[1497]: time="2024-07-02T06:54:27.815675574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 2 06:54:27.816181 containerd[1497]: time="2024-07-02T06:54:27.815698502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 2 06:54:27.816181 containerd[1497]: time="2024-07-02T06:54:27.815723065Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 2 06:54:27.816181 containerd[1497]: time="2024-07-02T06:54:27.815745625Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 2 06:54:27.816181 containerd[1497]: time="2024-07-02T06:54:27.815768227Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 2 06:54:27.816181 containerd[1497]: time="2024-07-02T06:54:27.815791496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 2 06:54:27.816181 containerd[1497]: time="2024-07-02T06:54:27.815814890Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 2 06:54:27.816181 containerd[1497]: time="2024-07-02T06:54:27.816063898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 2 06:54:27.816181 containerd[1497]: time="2024-07-02T06:54:27.816094859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 2 06:54:27.816181 containerd[1497]: time="2024-07-02T06:54:27.816116173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 2 06:54:27.816181 containerd[1497]: time="2024-07-02T06:54:27.816136980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 2 06:54:27.816181 containerd[1497]: time="2024-07-02T06:54:27.816156532Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 2 06:54:27.816181 containerd[1497]: time="2024-07-02T06:54:27.816186450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 2 06:54:27.817909 containerd[1497]: time="2024-07-02T06:54:27.816212389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 2 06:54:27.817909 containerd[1497]: time="2024-07-02T06:54:27.816234464Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 2 06:54:27.816861 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 2 06:54:27.820086 containerd[1497]: time="2024-07-02T06:54:27.819075439Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 2 06:54:27.820086 containerd[1497]: time="2024-07-02T06:54:27.819178113Z" level=info msg="Connect containerd service"
Jul 2 06:54:27.820086 containerd[1497]: time="2024-07-02T06:54:27.819248923Z" level=info msg="using legacy CRI server"
Jul 2 06:54:27.820086 containerd[1497]: time="2024-07-02T06:54:27.819267867Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 2 06:54:27.822141 containerd[1497]: time="2024-07-02T06:54:27.821502585Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 2 06:54:27.825832 containerd[1497]: time="2024-07-02T06:54:27.823077232Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 06:54:27.825832 containerd[1497]: time="2024-07-02T06:54:27.823149454Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 2 06:54:27.825832 containerd[1497]: time="2024-07-02T06:54:27.823186273Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 2 06:54:27.825832 containerd[1497]: time="2024-07-02T06:54:27.823207768Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 2 06:54:27.825832 containerd[1497]: time="2024-07-02T06:54:27.823241724Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 2 06:54:27.825832 containerd[1497]: time="2024-07-02T06:54:27.824122997Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 2 06:54:27.825832 containerd[1497]: time="2024-07-02T06:54:27.824252944Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 2 06:54:27.825832 containerd[1497]: time="2024-07-02T06:54:27.824294861Z" level=info msg="Start subscribing containerd event"
Jul 2 06:54:27.825832 containerd[1497]: time="2024-07-02T06:54:27.824363105Z" level=info msg="Start recovering state"
Jul 2 06:54:27.825832 containerd[1497]: time="2024-07-02T06:54:27.824514524Z" level=info msg="Start event monitor"
Jul 2 06:54:27.825832 containerd[1497]: time="2024-07-02T06:54:27.824559601Z" level=info msg="Start snapshots syncer"
Jul 2 06:54:27.825832 containerd[1497]: time="2024-07-02T06:54:27.824582184Z" level=info msg="Start cni network conf syncer for default"
Jul 2 06:54:27.825832 containerd[1497]: time="2024-07-02T06:54:27.824596970Z" level=info msg="Start streaming server"
Jul 2 06:54:27.825832 containerd[1497]: time="2024-07-02T06:54:27.824703798Z" level=info msg="containerd successfully booted in 0.109354s"
Jul 2 06:54:27.825859 systemd[1]: Started sshd@0-10.244.24.146:22-139.178.89.65:44382.service - OpenSSH per-connection server daemon (139.178.89.65:44382).
Jul 2 06:54:27.827603 systemd[1]: Started containerd.service - containerd container runtime.
Jul 2 06:54:27.856275 systemd[1]: issuegen.service: Deactivated successfully.
Jul 2 06:54:27.857563 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 2 06:54:27.870845 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 2 06:54:27.905672 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 2 06:54:27.916254 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 2 06:54:27.929623 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 2 06:54:27.931149 systemd[1]: Reached target getty.target - Login Prompts.
Jul 2 06:54:27.945083 systemd-networkd[1415]: eth0: Gained IPv6LL
Jul 2 06:54:27.950184 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 2 06:54:27.952807 systemd[1]: Reached target network-online.target - Network is Online.
Jul 2 06:54:27.963757 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 06:54:27.973279 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 2 06:54:28.014193 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 2 06:54:28.120605 tar[1487]: linux-amd64/LICENSE
Jul 2 06:54:28.121668 tar[1487]: linux-amd64/README.md
Jul 2 06:54:28.140920 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 2 06:54:28.733397 sshd[1570]: Accepted publickey for core from 139.178.89.65 port 44382 ssh2: RSA SHA256:UZYYGxahQSuaJ4Go9BMFXc5O2kGoWTMSkKIILUYSRzM
Jul 2 06:54:28.736663 sshd[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 06:54:28.757929 systemd-logind[1479]: New session 1 of user core.
Jul 2 06:54:28.759618 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 2 06:54:28.770358 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 2 06:54:28.799102 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 2 06:54:28.812498 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 2 06:54:28.824930 (systemd)[1598]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 2 06:54:28.897718 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 06:54:28.900850 (kubelet)[1608]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 06:54:28.990546 systemd[1598]: Queued start job for default target default.target.
Jul 2 06:54:28.996713 systemd[1598]: Created slice app.slice - User Application Slice.
Jul 2 06:54:28.996764 systemd[1598]: Reached target paths.target - Paths.
Jul 2 06:54:28.996790 systemd[1598]: Reached target timers.target - Timers.
Jul 2 06:54:29.000619 systemd[1598]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 2 06:54:29.029667 systemd[1598]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 2 06:54:29.029905 systemd[1598]: Reached target sockets.target - Sockets.
Jul 2 06:54:29.029933 systemd[1598]: Reached target basic.target - Basic System.
Jul 2 06:54:29.030022 systemd[1598]: Reached target default.target - Main User Target.
Jul 2 06:54:29.030080 systemd[1598]: Startup finished in 192ms.
Jul 2 06:54:29.030363 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 2 06:54:29.039073 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 2 06:54:29.163167 systemd-networkd[1415]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:624:24:19ff:fef4:1892/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:624:24:19ff:fef4:1892/64 assigned by NDisc.
Jul 2 06:54:29.163184 systemd-networkd[1415]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Jul 2 06:54:29.666657 systemd[1]: Started sshd@1-10.244.24.146:22-139.178.89.65:37108.service - OpenSSH per-connection server daemon (139.178.89.65:37108).
Jul 2 06:54:29.735683 kubelet[1608]: E0702 06:54:29.735400 1608 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 06:54:29.739526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 06:54:29.739858 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 06:54:29.740603 systemd[1]: kubelet.service: Consumed 1.128s CPU time.
Jul 2 06:54:30.551300 sshd[1622]: Accepted publickey for core from 139.178.89.65 port 37108 ssh2: RSA SHA256:UZYYGxahQSuaJ4Go9BMFXc5O2kGoWTMSkKIILUYSRzM
Jul 2 06:54:30.553659 sshd[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 06:54:30.562054 systemd-logind[1479]: New session 2 of user core.
Jul 2 06:54:30.572895 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 2 06:54:31.157079 sshd[1622]: pam_unix(sshd:session): session closed for user core
Jul 2 06:54:31.163075 systemd[1]: sshd@1-10.244.24.146:22-139.178.89.65:37108.service: Deactivated successfully.
Jul 2 06:54:31.165565 systemd[1]: session-2.scope: Deactivated successfully.
Jul 2 06:54:31.166774 systemd-logind[1479]: Session 2 logged out. Waiting for processes to exit.
Jul 2 06:54:31.168175 systemd-logind[1479]: Removed session 2.
Jul 2 06:54:31.319160 systemd[1]: Started sshd@2-10.244.24.146:22-139.178.89.65:37114.service - OpenSSH per-connection server daemon (139.178.89.65:37114).
Jul 2 06:54:32.191841 sshd[1632]: Accepted publickey for core from 139.178.89.65 port 37114 ssh2: RSA SHA256:UZYYGxahQSuaJ4Go9BMFXc5O2kGoWTMSkKIILUYSRzM
Jul 2 06:54:32.194110 sshd[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 06:54:32.203374 systemd-logind[1479]: New session 3 of user core.
Jul 2 06:54:32.212154 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 2 06:54:32.805074 sshd[1632]: pam_unix(sshd:session): session closed for user core
Jul 2 06:54:32.809664 systemd[1]: sshd@2-10.244.24.146:22-139.178.89.65:37114.service: Deactivated successfully.
Jul 2 06:54:32.812516 systemd[1]: session-3.scope: Deactivated successfully.
Jul 2 06:54:32.814663 systemd-logind[1479]: Session 3 logged out. Waiting for processes to exit.
Jul 2 06:54:32.816317 systemd-logind[1479]: Removed session 3.
Jul 2 06:54:32.986274 login[1580]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jul 2 06:54:32.991286 login[1577]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jul 2 06:54:32.994022 systemd-logind[1479]: New session 4 of user core.
Jul 2 06:54:33.005803 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 2 06:54:33.009850 systemd-logind[1479]: New session 5 of user core.
Jul 2 06:54:33.025798 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 2 06:54:33.978052 coreos-metadata[1471]: Jul 02 06:54:33.977 WARN failed to locate config-drive, using the metadata service API instead
Jul 2 06:54:34.004980 coreos-metadata[1471]: Jul 02 06:54:34.004 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Jul 2 06:54:34.013949 coreos-metadata[1471]: Jul 02 06:54:34.013 INFO Fetch failed with 404: resource not found
Jul 2 06:54:34.013949 coreos-metadata[1471]: Jul 02 06:54:34.013 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jul 2 06:54:34.016687 coreos-metadata[1471]: Jul 02 06:54:34.016 INFO Fetch successful
Jul 2 06:54:34.016855 coreos-metadata[1471]: Jul 02 06:54:34.016 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Jul 2 06:54:34.063833 coreos-metadata[1471]: Jul 02 06:54:34.063 INFO Fetch successful
Jul 2 06:54:34.064166 coreos-metadata[1471]: Jul 02 06:54:34.064 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Jul 2 06:54:34.122560 coreos-metadata[1471]: Jul 02 06:54:34.122 INFO Fetch successful
Jul 2 06:54:34.122868 coreos-metadata[1471]: Jul 02 06:54:34.122 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Jul 2 06:54:34.186681 coreos-metadata[1471]: Jul 02 06:54:34.186 INFO Fetch successful
Jul 2 06:54:34.187117 coreos-metadata[1471]: Jul 02 06:54:34.187 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Jul 2 06:54:34.234015 coreos-metadata[1471]: Jul 02 06:54:34.233 INFO Fetch successful
Jul 2 06:54:34.283335 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jul 2 06:54:34.287144 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 2 06:54:34.618970 coreos-metadata[1541]: Jul 02 06:54:34.618 WARN failed to locate config-drive, using the metadata service API instead
Jul 2 06:54:34.641737 coreos-metadata[1541]: Jul 02 06:54:34.641 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Jul 2 06:54:34.688941 coreos-metadata[1541]: Jul 02 06:54:34.688 INFO Fetch successful
Jul 2 06:54:34.688941 coreos-metadata[1541]: Jul 02 06:54:34.688 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Jul 2 06:54:34.790915 coreos-metadata[1541]: Jul 02 06:54:34.790 INFO Fetch successful
Jul 2 06:54:34.794635 unknown[1541]: wrote ssh authorized keys file for user: core
Jul 2 06:54:34.823500 update-ssh-keys[1666]: Updated "/home/core/.ssh/authorized_keys"
Jul 2 06:54:34.823921 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jul 2 06:54:34.827977 systemd[1]: Finished sshkeys.service.
Jul 2 06:54:34.831982 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 2 06:54:34.835597 systemd[1]: Startup finished in 1.559s (kernel) + 16.812s (initrd) + 12.308s (userspace) = 30.681s.
Jul 2 06:54:39.990357 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 2 06:54:40.007870 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 06:54:40.177157 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 06:54:40.192990 (kubelet)[1677]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 06:54:40.265094 kubelet[1677]: E0702 06:54:40.264865 1677 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 06:54:40.270384 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 06:54:40.270673 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 06:54:42.949863 systemd[1]: Started sshd@3-10.244.24.146:22-139.178.89.65:53794.service - OpenSSH per-connection server daemon (139.178.89.65:53794).
Jul 2 06:54:43.812714 sshd[1686]: Accepted publickey for core from 139.178.89.65 port 53794 ssh2: RSA SHA256:UZYYGxahQSuaJ4Go9BMFXc5O2kGoWTMSkKIILUYSRzM
Jul 2 06:54:43.814803 sshd[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 06:54:43.822180 systemd-logind[1479]: New session 6 of user core.
Jul 2 06:54:43.830840 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 2 06:54:44.418174 sshd[1686]: pam_unix(sshd:session): session closed for user core
Jul 2 06:54:44.422541 systemd[1]: sshd@3-10.244.24.146:22-139.178.89.65:53794.service: Deactivated successfully.
Jul 2 06:54:44.424877 systemd[1]: session-6.scope: Deactivated successfully.
Jul 2 06:54:44.426954 systemd-logind[1479]: Session 6 logged out. Waiting for processes to exit.
Jul 2 06:54:44.428261 systemd-logind[1479]: Removed session 6.
Jul 2 06:54:44.579913 systemd[1]: Started sshd@4-10.244.24.146:22-139.178.89.65:53802.service - OpenSSH per-connection server daemon (139.178.89.65:53802).
Jul 2 06:54:45.449469 sshd[1693]: Accepted publickey for core from 139.178.89.65 port 53802 ssh2: RSA SHA256:UZYYGxahQSuaJ4Go9BMFXc5O2kGoWTMSkKIILUYSRzM
Jul 2 06:54:45.452407 sshd[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 06:54:45.460975 systemd-logind[1479]: New session 7 of user core.
Jul 2 06:54:45.466820 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 2 06:54:46.055546 sshd[1693]: pam_unix(sshd:session): session closed for user core
Jul 2 06:54:46.061711 systemd[1]: sshd@4-10.244.24.146:22-139.178.89.65:53802.service: Deactivated successfully.
Jul 2 06:54:46.064257 systemd[1]: session-7.scope: Deactivated successfully.
Jul 2 06:54:46.065951 systemd-logind[1479]: Session 7 logged out. Waiting for processes to exit.
Jul 2 06:54:46.069243 systemd-logind[1479]: Removed session 7.
Jul 2 06:54:46.211886 systemd[1]: Started sshd@5-10.244.24.146:22-139.178.89.65:53812.service - OpenSSH per-connection server daemon (139.178.89.65:53812).
Jul 2 06:54:47.079641 sshd[1700]: Accepted publickey for core from 139.178.89.65 port 53812 ssh2: RSA SHA256:UZYYGxahQSuaJ4Go9BMFXc5O2kGoWTMSkKIILUYSRzM
Jul 2 06:54:47.081989 sshd[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 06:54:47.088965 systemd-logind[1479]: New session 8 of user core.
Jul 2 06:54:47.096716 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 2 06:54:47.691543 sshd[1700]: pam_unix(sshd:session): session closed for user core
Jul 2 06:54:47.697056 systemd[1]: sshd@5-10.244.24.146:22-139.178.89.65:53812.service: Deactivated successfully.
Jul 2 06:54:47.700138 systemd[1]: session-8.scope: Deactivated successfully.
Jul 2 06:54:47.702180 systemd-logind[1479]: Session 8 logged out. Waiting for processes to exit.
Jul 2 06:54:47.703888 systemd-logind[1479]: Removed session 8.
Jul 2 06:54:47.847143 systemd[1]: Started sshd@6-10.244.24.146:22-139.178.89.65:53826.service - OpenSSH per-connection server daemon (139.178.89.65:53826).
Jul 2 06:54:48.736446 sshd[1707]: Accepted publickey for core from 139.178.89.65 port 53826 ssh2: RSA SHA256:UZYYGxahQSuaJ4Go9BMFXc5O2kGoWTMSkKIILUYSRzM
Jul 2 06:54:48.738559 sshd[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 06:54:48.745786 systemd-logind[1479]: New session 9 of user core.
Jul 2 06:54:48.758801 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 2 06:54:49.218244 sudo[1710]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 2 06:54:49.219331 sudo[1710]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 06:54:49.237601 sudo[1710]: pam_unix(sudo:session): session closed for user root
Jul 2 06:54:49.380947 sshd[1707]: pam_unix(sshd:session): session closed for user core
Jul 2 06:54:49.388046 systemd[1]: sshd@6-10.244.24.146:22-139.178.89.65:53826.service: Deactivated successfully.
Jul 2 06:54:49.390905 systemd[1]: session-9.scope: Deactivated successfully.
Jul 2 06:54:49.392093 systemd-logind[1479]: Session 9 logged out. Waiting for processes to exit.
Jul 2 06:54:49.393818 systemd-logind[1479]: Removed session 9.
Jul 2 06:54:49.534827 systemd[1]: Started sshd@7-10.244.24.146:22-139.178.89.65:60848.service - OpenSSH per-connection server daemon (139.178.89.65:60848).
Jul 2 06:54:50.472203 sshd[1715]: Accepted publickey for core from 139.178.89.65 port 60848 ssh2: RSA SHA256:UZYYGxahQSuaJ4Go9BMFXc5O2kGoWTMSkKIILUYSRzM
Jul 2 06:54:50.474584 sshd[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 06:54:50.476169 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 2 06:54:50.491956 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 06:54:50.497693 systemd-logind[1479]: New session 10 of user core.
Jul 2 06:54:50.502978 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 2 06:54:50.667791 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 06:54:50.684377 (kubelet)[1726]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 06:54:50.787257 kubelet[1726]: E0702 06:54:50.786940 1726 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 06:54:50.790282 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 06:54:50.790586 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 06:54:50.939957 sudo[1735]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 2 06:54:50.940416 sudo[1735]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 06:54:50.946532 sudo[1735]: pam_unix(sudo:session): session closed for user root
Jul 2 06:54:50.954973 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jul 2 06:54:50.956025 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 06:54:50.980831 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jul 2 06:54:50.983303 auditctl[1738]: No rules
Jul 2 06:54:50.983864 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 2 06:54:50.984150 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jul 2 06:54:50.991908 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 2 06:54:51.036181 augenrules[1756]: No rules
Jul 2 06:54:51.037241 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 2 06:54:51.039346 sudo[1734]: pam_unix(sudo:session): session closed for user root
Jul 2 06:54:51.180019 sshd[1715]: pam_unix(sshd:session): session closed for user core
Jul 2 06:54:51.185702 systemd[1]: sshd@7-10.244.24.146:22-139.178.89.65:60848.service: Deactivated successfully.
Jul 2 06:54:51.188257 systemd[1]: session-10.scope: Deactivated successfully.
Jul 2 06:54:51.189336 systemd-logind[1479]: Session 10 logged out. Waiting for processes to exit.
Jul 2 06:54:51.190931 systemd-logind[1479]: Removed session 10.
Jul 2 06:54:51.337074 systemd[1]: Started sshd@8-10.244.24.146:22-139.178.89.65:60850.service - OpenSSH per-connection server daemon (139.178.89.65:60850).
Jul 2 06:54:52.203197 sshd[1764]: Accepted publickey for core from 139.178.89.65 port 60850 ssh2: RSA SHA256:UZYYGxahQSuaJ4Go9BMFXc5O2kGoWTMSkKIILUYSRzM
Jul 2 06:54:52.205244 sshd[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 06:54:52.212556 systemd-logind[1479]: New session 11 of user core.
Jul 2 06:54:52.218688 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 2 06:54:52.671027 sudo[1767]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 2 06:54:52.671545 sudo[1767]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 06:54:52.891928 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 2 06:54:52.892140 (dockerd)[1776]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 2 06:54:53.316457 dockerd[1776]: time="2024-07-02T06:54:53.316322689Z" level=info msg="Starting up"
Jul 2 06:54:53.387870 dockerd[1776]: time="2024-07-02T06:54:53.387181987Z" level=info msg="Loading containers: start."
Jul 2 06:54:53.568509 kernel: Initializing XFRM netlink socket
Jul 2 06:54:53.678720 systemd-networkd[1415]: docker0: Link UP
Jul 2 06:54:53.694996 dockerd[1776]: time="2024-07-02T06:54:53.694911053Z" level=info msg="Loading containers: done."
Jul 2 06:54:53.792925 dockerd[1776]: time="2024-07-02T06:54:53.792844689Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 2 06:54:53.793260 dockerd[1776]: time="2024-07-02T06:54:53.793216671Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Jul 2 06:54:53.793455 dockerd[1776]: time="2024-07-02T06:54:53.793386212Z" level=info msg="Daemon has completed initialization"
Jul 2 06:54:53.840372 dockerd[1776]: time="2024-07-02T06:54:53.838589623Z" level=info msg="API listen on /run/docker.sock"
Jul 2 06:54:53.839564 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 2 06:54:55.223240 containerd[1497]: time="2024-07-02T06:54:55.223125569Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\""
Jul 2 06:54:56.205967 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1914642790.mount: Deactivated successfully.
Jul 2 06:54:58.903462 containerd[1497]: time="2024-07-02T06:54:58.902846454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 06:54:58.904657 containerd[1497]: time="2024-07-02T06:54:58.904598311Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.11: active requests=0, bytes read=34605186"
Jul 2 06:54:58.905353 containerd[1497]: time="2024-07-02T06:54:58.905290978Z" level=info msg="ImageCreate event name:\"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 06:54:58.910452 containerd[1497]: time="2024-07-02T06:54:58.909524132Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 06:54:58.911579 containerd[1497]: time="2024-07-02T06:54:58.911286139Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.11\" with image id \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\", size \"34601978\" in 3.688046879s"
Jul 2 06:54:58.911579 containerd[1497]: time="2024-07-02T06:54:58.911348624Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\""
Jul 2 06:54:58.942536 containerd[1497]: time="2024-07-02T06:54:58.942383876Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\""
Jul 2 06:54:59.177418 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jul 2 06:55:00.899171 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul 2 06:55:00.909520 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 06:55:01.148790 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 06:55:01.154158 (kubelet)[1983]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 06:55:01.239041 kubelet[1983]: E0702 06:55:01.238869 1983 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 06:55:01.243008 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 06:55:01.243281 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 06:55:01.747119 containerd[1497]: time="2024-07-02T06:55:01.747037498Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 06:55:01.749409 containerd[1497]: time="2024-07-02T06:55:01.749353586Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.11: active requests=0, bytes read=31719499"
Jul 2 06:55:01.750766 containerd[1497]: time="2024-07-02T06:55:01.750703915Z" level=info msg="ImageCreate event name:\"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 06:55:01.754900 containerd[1497]: time="2024-07-02T06:55:01.754816905Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 06:55:01.756943 containerd[1497]: time="2024-07-02T06:55:01.756729552Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.11\" with image id \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\", size \"33315989\" in 2.814248975s"
Jul 2 06:55:01.756943 containerd[1497]: time="2024-07-02T06:55:01.756780656Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\""
Jul 2 06:55:01.789544 containerd[1497]: time="2024-07-02T06:55:01.789470790Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\""
Jul 2 06:55:03.460176 containerd[1497]: time="2024-07-02T06:55:03.459977481Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 06:55:03.461451 containerd[1497]: time="2024-07-02T06:55:03.461387118Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.11: active requests=0, bytes read=16925513"
Jul 2 06:55:03.462467 containerd[1497]: time="2024-07-02T06:55:03.462261537Z" level=info msg="ImageCreate event name:\"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 06:55:03.466923 containerd[1497]: time="2024-07-02T06:55:03.466841821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 06:55:03.468743 containerd[1497]: time="2024-07-02T06:55:03.468452117Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.11\" with image id \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\", size \"18522021\" in 1.678890578s"
Jul 2 06:55:03.468743 containerd[1497]: time="2024-07-02T06:55:03.468501962Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\""
Jul 2 06:55:03.498819 containerd[1497]: time="2024-07-02T06:55:03.498768597Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\""
Jul 2 06:55:05.104708 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3240608483.mount: Deactivated successfully.
Jul 2 06:55:05.733066 containerd[1497]: time="2024-07-02T06:55:05.732966337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 06:55:05.734837 containerd[1497]: time="2024-07-02T06:55:05.734592744Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.11: active requests=0, bytes read=28118427"
Jul 2 06:55:05.735758 containerd[1497]: time="2024-07-02T06:55:05.735717181Z" level=info msg="ImageCreate event name:\"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 06:55:05.739039 containerd[1497]: time="2024-07-02T06:55:05.738543318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 06:55:05.739715 containerd[1497]: time="2024-07-02T06:55:05.739675388Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.11\" with image id \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\", repo tag \"registry.k8s.io/kube-proxy:v1.28.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\", size \"28117438\" in 2.240852396s"
Jul 2 06:55:05.739789 containerd[1497]: time="2024-07-02T06:55:05.739720665Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\""
Jul 2 06:55:05.767654 containerd[1497]: time="2024-07-02T06:55:05.767533208Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jul 2 06:55:06.426148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1871663769.mount: Deactivated successfully.
Jul 2 06:55:06.435792 containerd[1497]: time="2024-07-02T06:55:06.435709726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 06:55:06.436926 containerd[1497]: time="2024-07-02T06:55:06.436879895Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298"
Jul 2 06:55:06.438171 containerd[1497]: time="2024-07-02T06:55:06.437746989Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 06:55:06.440861 containerd[1497]: time="2024-07-02T06:55:06.440820264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 06:55:06.442107 containerd[1497]: time="2024-07-02T06:55:06.442065560Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 674.192942ms"
Jul 2 06:55:06.442194 containerd[1497]: time="2024-07-02T06:55:06.442124709Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jul 2 06:55:06.473647 containerd[1497]: time="2024-07-02T06:55:06.473581299Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Jul 2 06:55:07.185366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2459380137.mount: Deactivated successfully.
Jul 2 06:55:10.875025 containerd[1497]: time="2024-07-02T06:55:10.874935042Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 06:55:10.876552 containerd[1497]: time="2024-07-02T06:55:10.876497775Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633"
Jul 2 06:55:10.878264 containerd[1497]: time="2024-07-02T06:55:10.877385071Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 06:55:10.881958 containerd[1497]: time="2024-07-02T06:55:10.881913695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 06:55:10.883783 containerd[1497]: time="2024-07-02T06:55:10.883731891Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 4.409821158s"
Jul 2 06:55:10.883934 containerd[1497]: time="2024-07-02T06:55:10.883904202Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Jul 2 06:55:10.924688 containerd[1497]: time="2024-07-02T06:55:10.924608924Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""
Jul 2 06:55:11.368965 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jul 2 06:55:11.379858 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 06:55:11.639922 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 06:55:11.676342 (kubelet)[2088]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 06:55:11.811162 kubelet[2088]: E0702 06:55:11.810378 2088 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 06:55:11.818166 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 06:55:11.818394 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 06:55:11.821976 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2569712843.mount: Deactivated successfully.
Jul 2 06:55:12.171456 update_engine[1481]: I0702 06:55:12.167599 1481 update_attempter.cc:509] Updating boot flags...
Jul 2 06:55:12.417302 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2110)
Jul 2 06:55:12.561482 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2104)
Jul 2 06:55:13.128204 containerd[1497]: time="2024-07-02T06:55:13.127709094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 06:55:13.130344 containerd[1497]: time="2024-07-02T06:55:13.130068787Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=16191757"
Jul 2 06:55:13.130993 containerd[1497]: time="2024-07-02T06:55:13.130952756Z" level=info msg="ImageCreate event name:\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 06:55:13.134414 containerd[1497]: time="2024-07-02T06:55:13.134337227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 06:55:13.136368 containerd[1497]: time="2024-07-02T06:55:13.135717039Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"16190758\" in 2.211047527s"
Jul 2 06:55:13.136368 containerd[1497]: time="2024-07-02T06:55:13.135763971Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\""
Jul 2 06:55:17.602936 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 06:55:17.610757 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 06:55:17.655132 systemd[1]: Reloading requested from client PID 2179 ('systemctl') (unit session-11.scope)...
Jul 2 06:55:17.655178 systemd[1]: Reloading...
Jul 2 06:55:17.811160 zram_generator::config[2216]: No configuration found.
Jul 2 06:55:17.983420 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 06:55:18.092540 systemd[1]: Reloading finished in 436 ms.
Jul 2 06:55:18.169327 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 2 06:55:18.169709 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 2 06:55:18.170225 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 06:55:18.178976 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 06:55:18.315280 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 06:55:18.330198 (kubelet)[2284]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 2 06:55:18.403494 kubelet[2284]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 06:55:18.403494 kubelet[2284]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 2 06:55:18.403494 kubelet[2284]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 06:55:18.403494 kubelet[2284]: I0702 06:55:18.403066 2284 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 2 06:55:19.396070 kubelet[2284]: I0702 06:55:19.396016 2284 server.go:467] "Kubelet version" kubeletVersion="v1.28.7"
Jul 2 06:55:19.396070 kubelet[2284]: I0702 06:55:19.396058 2284 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 2 06:55:19.396440 kubelet[2284]: I0702 06:55:19.396394 2284 server.go:895] "Client rotation is on, will bootstrap in background"
Jul 2 06:55:19.426139 kubelet[2284]: I0702 06:55:19.425734 2284 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 06:55:19.426870 kubelet[2284]: E0702 06:55:19.426813 2284 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.244.24.146:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.244.24.146:6443: connect: connection refused
Jul 2 06:55:19.446907 kubelet[2284]: I0702 06:55:19.446836 2284 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 2 06:55:19.447365 kubelet[2284]: I0702 06:55:19.447328 2284 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 2 06:55:19.447659 kubelet[2284]: I0702 06:55:19.447618 2284 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jul 2 06:55:19.448208 kubelet[2284]: I0702 06:55:19.448171 2284 topology_manager.go:138] "Creating topology manager with none policy"
Jul 2 06:55:19.448208 kubelet[2284]: I0702 06:55:19.448206 2284 container_manager_linux.go:301] "Creating device plugin manager"
Jul 2 06:55:19.449095 kubelet[2284]: I0702 06:55:19.449054 2284 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 06:55:19.450630 kubelet[2284]: I0702 06:55:19.450596 2284 kubelet.go:393] "Attempting to sync node with API server"
Jul 2 06:55:19.450725 kubelet[2284]: I0702 06:55:19.450638 2284 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 2 06:55:19.450725 kubelet[2284]: I0702 06:55:19.450708 2284 kubelet.go:309] "Adding apiserver pod source"
Jul 2 06:55:19.450814 kubelet[2284]: I0702 06:55:19.450747 2284 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 2 06:55:19.452769 kubelet[2284]: W0702 06:55:19.452681 2284 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.244.24.146:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.244.24.146:6443: connect: connection refused
Jul 2 06:55:19.452769 kubelet[2284]: E0702 06:55:19.452760 2284 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.244.24.146:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.244.24.146:6443: connect: connection refused
Jul 2 06:55:19.453169 kubelet[2284]: W0702 06:55:19.453131 2284 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.244.24.146:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-5ya4d.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.24.146:6443: connect: connection refused
Jul 2 06:55:19.453246 kubelet[2284]: E0702 06:55:19.453176 2284 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.244.24.146:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-5ya4d.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.24.146:6443: connect: connection refused
Jul 2 06:55:19.453576 kubelet[2284]: I0702 06:55:19.453547 2284 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Jul 2 06:55:19.457879 kubelet[2284]: W0702 06:55:19.457832 2284 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 2 06:55:19.458759 kubelet[2284]: I0702 06:55:19.458731 2284 server.go:1232] "Started kubelet"
Jul 2 06:55:19.458986 kubelet[2284]: I0702 06:55:19.458962 2284 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jul 2 06:55:19.459143 kubelet[2284]: I0702 06:55:19.459115 2284 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Jul 2 06:55:19.459835 kubelet[2284]: I0702 06:55:19.459803 2284 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 2 06:55:19.460733 kubelet[2284]: I0702 06:55:19.460709 2284 server.go:462] "Adding debug handlers to kubelet server"
Jul 2 06:55:19.463272 kubelet[2284]: E0702 06:55:19.462790 2284 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"srv-5ya4d.gb1.brightbox.com.17de52f4e459c960", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"srv-5ya4d.gb1.brightbox.com", UID:"srv-5ya4d.gb1.brightbox.com", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"srv-5ya4d.gb1.brightbox.com"}, FirstTimestamp:time.Date(2024, time.July, 2, 6, 55, 19, 458695520, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 6, 55, 19, 458695520, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"srv-5ya4d.gb1.brightbox.com"}': 'Post "https://10.244.24.146:6443/api/v1/namespaces/default/events": dial tcp 10.244.24.146:6443: connect: connection refused'(may retry after sleeping)
Jul 2 06:55:19.465757 kubelet[2284]: E0702 06:55:19.464658 2284 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Jul 2 06:55:19.465757 kubelet[2284]: E0702 06:55:19.464713 2284 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 2 06:55:19.466893 kubelet[2284]: I0702 06:55:19.465977 2284 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 2 06:55:19.469240 kubelet[2284]: I0702 06:55:19.469038 2284 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jul 2 06:55:19.469240 kubelet[2284]: I0702 06:55:19.469173 2284 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jul 2 06:55:19.469384 kubelet[2284]: I0702 06:55:19.469298 2284 reconciler_new.go:29] "Reconciler: start to sync state"
Jul 2 06:55:19.472304 kubelet[2284]: W0702 06:55:19.469773 2284 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.244.24.146:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.24.146:6443: connect: connection refused
Jul 2 06:55:19.472304 kubelet[2284]: E0702 06:55:19.469848 2284 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.244.24.146:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.24.146:6443: connect: connection refused
Jul 2 06:55:19.472304 kubelet[2284]: E0702 06:55:19.470193 2284 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.24.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-5ya4d.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.24.146:6443: connect: connection refused" interval="200ms"
Jul 2 06:55:19.509818 kubelet[2284]: I0702 06:55:19.509776 2284 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 2 06:55:19.514252 kubelet[2284]: I0702 06:55:19.514207 2284 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 2 06:55:19.514407 kubelet[2284]: I0702 06:55:19.514264 2284 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 2 06:55:19.514407 kubelet[2284]: I0702 06:55:19.514303 2284 kubelet.go:2303] "Starting kubelet main sync loop"
Jul 2 06:55:19.514407 kubelet[2284]: E0702 06:55:19.514402 2284 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 2 06:55:19.524065 kubelet[2284]: W0702 06:55:19.523873 2284 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.244.24.146:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.24.146:6443: connect: connection refused
Jul 2 06:55:19.524065 kubelet[2284]: E0702 06:55:19.523947 2284 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.244.24.146:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.24.146:6443: connect: connection refused
Jul 2 06:55:19.540344 kubelet[2284]: I0702 06:55:19.540252
2284 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 06:55:19.540344 kubelet[2284]: I0702 06:55:19.540284 2284 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 06:55:19.540344 kubelet[2284]: I0702 06:55:19.540318 2284 state_mem.go:36] "Initialized new in-memory state store" Jul 2 06:55:19.542267 kubelet[2284]: I0702 06:55:19.542224 2284 policy_none.go:49] "None policy: Start" Jul 2 06:55:19.543017 kubelet[2284]: I0702 06:55:19.542994 2284 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 2 06:55:19.543090 kubelet[2284]: I0702 06:55:19.543039 2284 state_mem.go:35] "Initializing new in-memory state store" Jul 2 06:55:19.551984 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 2 06:55:19.562359 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 2 06:55:19.568270 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 2 06:55:19.572570 kubelet[2284]: I0702 06:55:19.572540 2284 kubelet_node_status.go:70] "Attempting to register node" node="srv-5ya4d.gb1.brightbox.com"
Jul 2 06:55:19.573629 kubelet[2284]: E0702 06:55:19.573591 2284 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.244.24.146:6443/api/v1/nodes\": dial tcp 10.244.24.146:6443: connect: connection refused" node="srv-5ya4d.gb1.brightbox.com"
Jul 2 06:55:19.579946 kubelet[2284]: I0702 06:55:19.579922 2284 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 2 06:55:19.581338 kubelet[2284]: I0702 06:55:19.580766 2284 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 2 06:55:19.583512 kubelet[2284]: E0702 06:55:19.583489 2284 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-5ya4d.gb1.brightbox.com\" not found"
Jul 2 06:55:19.614933 kubelet[2284]: I0702 06:55:19.614877 2284 topology_manager.go:215] "Topology Admit Handler" podUID="0bf379c4fae37c46e2f648b0a21ed5b1" podNamespace="kube-system" podName="kube-apiserver-srv-5ya4d.gb1.brightbox.com"
Jul 2 06:55:19.617985 kubelet[2284]: I0702 06:55:19.617721 2284 topology_manager.go:215] "Topology Admit Handler" podUID="6c3775f991e3ad0277f51ff16709dc26" podNamespace="kube-system" podName="kube-controller-manager-srv-5ya4d.gb1.brightbox.com"
Jul 2 06:55:19.620609 kubelet[2284]: I0702 06:55:19.620583 2284 topology_manager.go:215] "Topology Admit Handler" podUID="3b8131b8885ebab3d650e1c591761a82" podNamespace="kube-system" podName="kube-scheduler-srv-5ya4d.gb1.brightbox.com"
Jul 2 06:55:19.629582 systemd[1]: Created slice kubepods-burstable-pod0bf379c4fae37c46e2f648b0a21ed5b1.slice - libcontainer container kubepods-burstable-pod0bf379c4fae37c46e2f648b0a21ed5b1.slice.
Jul 2 06:55:19.648443 systemd[1]: Created slice kubepods-burstable-pod6c3775f991e3ad0277f51ff16709dc26.slice - libcontainer container kubepods-burstable-pod6c3775f991e3ad0277f51ff16709dc26.slice.
Jul 2 06:55:19.656050 systemd[1]: Created slice kubepods-burstable-pod3b8131b8885ebab3d650e1c591761a82.slice - libcontainer container kubepods-burstable-pod3b8131b8885ebab3d650e1c591761a82.slice.
Jul 2 06:55:19.671483 kubelet[2284]: I0702 06:55:19.670910 2284 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0bf379c4fae37c46e2f648b0a21ed5b1-k8s-certs\") pod \"kube-apiserver-srv-5ya4d.gb1.brightbox.com\" (UID: \"0bf379c4fae37c46e2f648b0a21ed5b1\") " pod="kube-system/kube-apiserver-srv-5ya4d.gb1.brightbox.com"
Jul 2 06:55:19.671483 kubelet[2284]: I0702 06:55:19.670978 2284 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0bf379c4fae37c46e2f648b0a21ed5b1-usr-share-ca-certificates\") pod \"kube-apiserver-srv-5ya4d.gb1.brightbox.com\" (UID: \"0bf379c4fae37c46e2f648b0a21ed5b1\") " pod="kube-system/kube-apiserver-srv-5ya4d.gb1.brightbox.com"
Jul 2 06:55:19.671483 kubelet[2284]: I0702 06:55:19.671019 2284 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6c3775f991e3ad0277f51ff16709dc26-flexvolume-dir\") pod \"kube-controller-manager-srv-5ya4d.gb1.brightbox.com\" (UID: \"6c3775f991e3ad0277f51ff16709dc26\") " pod="kube-system/kube-controller-manager-srv-5ya4d.gb1.brightbox.com"
Jul 2 06:55:19.671483 kubelet[2284]: I0702 06:55:19.671077 2284 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3b8131b8885ebab3d650e1c591761a82-kubeconfig\") pod \"kube-scheduler-srv-5ya4d.gb1.brightbox.com\" (UID: \"3b8131b8885ebab3d650e1c591761a82\") " pod="kube-system/kube-scheduler-srv-5ya4d.gb1.brightbox.com"
Jul 2 06:55:19.671483 kubelet[2284]: I0702 06:55:19.671117 2284 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0bf379c4fae37c46e2f648b0a21ed5b1-ca-certs\") pod \"kube-apiserver-srv-5ya4d.gb1.brightbox.com\" (UID: \"0bf379c4fae37c46e2f648b0a21ed5b1\") " pod="kube-system/kube-apiserver-srv-5ya4d.gb1.brightbox.com"
Jul 2 06:55:19.671858 kubelet[2284]: I0702 06:55:19.671151 2284 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6c3775f991e3ad0277f51ff16709dc26-ca-certs\") pod \"kube-controller-manager-srv-5ya4d.gb1.brightbox.com\" (UID: \"6c3775f991e3ad0277f51ff16709dc26\") " pod="kube-system/kube-controller-manager-srv-5ya4d.gb1.brightbox.com"
Jul 2 06:55:19.671858 kubelet[2284]: I0702 06:55:19.671183 2284 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6c3775f991e3ad0277f51ff16709dc26-k8s-certs\") pod \"kube-controller-manager-srv-5ya4d.gb1.brightbox.com\" (UID: \"6c3775f991e3ad0277f51ff16709dc26\") " pod="kube-system/kube-controller-manager-srv-5ya4d.gb1.brightbox.com"
Jul 2 06:55:19.671858 kubelet[2284]: I0702 06:55:19.671214 2284 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6c3775f991e3ad0277f51ff16709dc26-kubeconfig\") pod \"kube-controller-manager-srv-5ya4d.gb1.brightbox.com\" (UID: \"6c3775f991e3ad0277f51ff16709dc26\") " pod="kube-system/kube-controller-manager-srv-5ya4d.gb1.brightbox.com"
Jul 2 06:55:19.671858 kubelet[2284]: E0702 06:55:19.671241 2284 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.24.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-5ya4d.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.24.146:6443: connect: connection refused" interval="400ms"
Jul 2 06:55:19.671858 kubelet[2284]: I0702 06:55:19.671248 2284 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6c3775f991e3ad0277f51ff16709dc26-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-5ya4d.gb1.brightbox.com\" (UID: \"6c3775f991e3ad0277f51ff16709dc26\") " pod="kube-system/kube-controller-manager-srv-5ya4d.gb1.brightbox.com"
Jul 2 06:55:19.777308 kubelet[2284]: I0702 06:55:19.776855 2284 kubelet_node_status.go:70] "Attempting to register node" node="srv-5ya4d.gb1.brightbox.com"
Jul 2 06:55:19.777308 kubelet[2284]: E0702 06:55:19.777274 2284 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.244.24.146:6443/api/v1/nodes\": dial tcp 10.244.24.146:6443: connect: connection refused" node="srv-5ya4d.gb1.brightbox.com"
Jul 2 06:55:19.945288 containerd[1497]: time="2024-07-02T06:55:19.945138486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-5ya4d.gb1.brightbox.com,Uid:0bf379c4fae37c46e2f648b0a21ed5b1,Namespace:kube-system,Attempt:0,}"
Jul 2 06:55:19.959395 containerd[1497]: time="2024-07-02T06:55:19.959072262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-5ya4d.gb1.brightbox.com,Uid:6c3775f991e3ad0277f51ff16709dc26,Namespace:kube-system,Attempt:0,}"
Jul 2 06:55:19.959920 containerd[1497]: time="2024-07-02T06:55:19.959667042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-5ya4d.gb1.brightbox.com,Uid:3b8131b8885ebab3d650e1c591761a82,Namespace:kube-system,Attempt:0,}"
Jul 2 06:55:20.072368 kubelet[2284]: E0702 06:55:20.072312 2284 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.24.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-5ya4d.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.24.146:6443: connect: connection refused" interval="800ms"
Jul 2 06:55:20.137655 kubelet[2284]: E0702 06:55:20.137512 2284 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"srv-5ya4d.gb1.brightbox.com.17de52f4e459c960", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"srv-5ya4d.gb1.brightbox.com", UID:"srv-5ya4d.gb1.brightbox.com", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"srv-5ya4d.gb1.brightbox.com"}, FirstTimestamp:time.Date(2024, time.July, 2, 6, 55, 19, 458695520, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 6, 55, 19, 458695520, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"srv-5ya4d.gb1.brightbox.com"}': 'Post "https://10.244.24.146:6443/api/v1/namespaces/default/events": dial tcp 10.244.24.146:6443: connect: connection refused'(may retry after sleeping)
Jul 2 06:55:20.181000 kubelet[2284]: I0702 06:55:20.180601 2284 kubelet_node_status.go:70] "Attempting to register node" node="srv-5ya4d.gb1.brightbox.com"
Jul 2 06:55:20.181163 kubelet[2284]: E0702 06:55:20.181082 2284 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.244.24.146:6443/api/v1/nodes\": dial tcp 10.244.24.146:6443: connect: connection refused" node="srv-5ya4d.gb1.brightbox.com"
Jul 2 06:55:20.546497 kubelet[2284]: W0702 06:55:20.546249 2284 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.244.24.146:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.244.24.146:6443: connect: connection refused
Jul 2 06:55:20.546497 kubelet[2284]: E0702 06:55:20.546358 2284 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.244.24.146:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.244.24.146:6443: connect: connection refused
Jul 2 06:55:20.662051 kubelet[2284]: W0702 06:55:20.661984 2284 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.244.24.146:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.24.146:6443: connect: connection refused
Jul 2 06:55:20.662051 kubelet[2284]: E0702 06:55:20.662048 2284 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.244.24.146:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.24.146:6443: connect: connection refused
Jul 2 06:55:20.703660 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount941395636.mount: Deactivated successfully.
Jul 2 06:55:20.713464 containerd[1497]: time="2024-07-02T06:55:20.712037454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 2 06:55:20.713954 containerd[1497]: time="2024-07-02T06:55:20.713919583Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 2 06:55:20.715354 containerd[1497]: time="2024-07-02T06:55:20.715304452Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 2 06:55:20.715834 containerd[1497]: time="2024-07-02T06:55:20.715795154Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 2 06:55:20.716830 containerd[1497]: time="2024-07-02T06:55:20.716795676Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 2 06:55:20.718617 containerd[1497]: time="2024-07-02T06:55:20.718579472Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 2 06:55:20.718803 containerd[1497]: time="2024-07-02T06:55:20.718753617Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064"
Jul 2 06:55:20.724155 containerd[1497]: time="2024-07-02T06:55:20.724081333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 2 06:55:20.726641 containerd[1497]: time="2024-07-02T06:55:20.726594206Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 766.543448ms"
Jul 2 06:55:20.729478 containerd[1497]: time="2024-07-02T06:55:20.729098830Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 783.793664ms"
Jul 2 06:55:20.733474 containerd[1497]: time="2024-07-02T06:55:20.733234036Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 772.948619ms"
Jul 2 06:55:20.874274 kubelet[2284]: E0702 06:55:20.874088 2284 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.24.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-5ya4d.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.24.146:6443: connect: connection refused" interval="1.6s"
Jul 2 06:55:20.960708 kubelet[2284]: W0702 06:55:20.949397 2284 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.244.24.146:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-5ya4d.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.24.146:6443: connect: connection refused
Jul 2 06:55:20.960708 kubelet[2284]: E0702 06:55:20.949522 2284 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.244.24.146:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-5ya4d.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.24.146:6443: connect: connection refused
Jul 2 06:55:20.981939 containerd[1497]: time="2024-07-02T06:55:20.979790147Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 06:55:20.981939 containerd[1497]: time="2024-07-02T06:55:20.981697352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 06:55:20.981939 containerd[1497]: time="2024-07-02T06:55:20.981746658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 06:55:20.981939 containerd[1497]: time="2024-07-02T06:55:20.981763589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 06:55:20.987703 kubelet[2284]: I0702 06:55:20.985982 2284 kubelet_node_status.go:70] "Attempting to register node" node="srv-5ya4d.gb1.brightbox.com"
Jul 2 06:55:20.987703 kubelet[2284]: E0702 06:55:20.986414 2284 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.244.24.146:6443/api/v1/nodes\": dial tcp 10.244.24.146:6443: connect: connection refused" node="srv-5ya4d.gb1.brightbox.com"
Jul 2 06:55:20.988767 containerd[1497]: time="2024-07-02T06:55:20.988543280Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 06:55:20.988985 containerd[1497]: time="2024-07-02T06:55:20.988916924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 06:55:20.989247 containerd[1497]: time="2024-07-02T06:55:20.989042512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 06:55:20.989247 containerd[1497]: time="2024-07-02T06:55:20.989073553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 06:55:20.994625 containerd[1497]: time="2024-07-02T06:55:20.994256148Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 06:55:20.994625 containerd[1497]: time="2024-07-02T06:55:20.994324531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 06:55:20.994625 containerd[1497]: time="2024-07-02T06:55:20.994349088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 06:55:20.994625 containerd[1497]: time="2024-07-02T06:55:20.994364672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 06:55:21.031688 systemd[1]: Started cri-containerd-054c405bd17da8cc16f7e092b3a730685299d62f2bee437ace38993c0393af24.scope - libcontainer container 054c405bd17da8cc16f7e092b3a730685299d62f2bee437ace38993c0393af24.
Jul 2 06:55:21.044720 systemd[1]: Started cri-containerd-9c1134233601b27694116bd984ae45f54c7da007bd635343a0a5a753a2bde252.scope - libcontainer container 9c1134233601b27694116bd984ae45f54c7da007bd635343a0a5a753a2bde252.
Jul 2 06:55:21.053673 systemd[1]: Started cri-containerd-2d3793e9644de3eaa14c42beece15d5cd51991cfa2f2133e530ebff5ba0d1797.scope - libcontainer container 2d3793e9644de3eaa14c42beece15d5cd51991cfa2f2133e530ebff5ba0d1797.
Jul 2 06:55:21.055062 kubelet[2284]: W0702 06:55:21.054775 2284 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.244.24.146:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.24.146:6443: connect: connection refused
Jul 2 06:55:21.055062 kubelet[2284]: E0702 06:55:21.054853 2284 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.244.24.146:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.24.146:6443: connect: connection refused
Jul 2 06:55:21.171730 containerd[1497]: time="2024-07-02T06:55:21.171663013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-5ya4d.gb1.brightbox.com,Uid:0bf379c4fae37c46e2f648b0a21ed5b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"054c405bd17da8cc16f7e092b3a730685299d62f2bee437ace38993c0393af24\""
Jul 2 06:55:21.181680 containerd[1497]: time="2024-07-02T06:55:21.181632837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-5ya4d.gb1.brightbox.com,Uid:6c3775f991e3ad0277f51ff16709dc26,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c1134233601b27694116bd984ae45f54c7da007bd635343a0a5a753a2bde252\""
Jul 2 06:55:21.182764 containerd[1497]: time="2024-07-02T06:55:21.182497250Z" level=info msg="CreateContainer within sandbox \"054c405bd17da8cc16f7e092b3a730685299d62f2bee437ace38993c0393af24\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 2 06:55:21.187795 containerd[1497]: time="2024-07-02T06:55:21.187757186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-5ya4d.gb1.brightbox.com,Uid:3b8131b8885ebab3d650e1c591761a82,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d3793e9644de3eaa14c42beece15d5cd51991cfa2f2133e530ebff5ba0d1797\""
Jul 2 06:55:21.191043 containerd[1497]: time="2024-07-02T06:55:21.191000225Z" level=info msg="CreateContainer within sandbox \"9c1134233601b27694116bd984ae45f54c7da007bd635343a0a5a753a2bde252\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 2 06:55:21.193013 containerd[1497]: time="2024-07-02T06:55:21.192897623Z" level=info msg="CreateContainer within sandbox \"2d3793e9644de3eaa14c42beece15d5cd51991cfa2f2133e530ebff5ba0d1797\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 2 06:55:21.223071 containerd[1497]: time="2024-07-02T06:55:21.222999828Z" level=info msg="CreateContainer within sandbox \"054c405bd17da8cc16f7e092b3a730685299d62f2bee437ace38993c0393af24\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cc4fb6aa0b632c2ca2f7d3eb327a16ef65a56029582d45058c614f4c67614cd8\""
Jul 2 06:55:21.224209 containerd[1497]: time="2024-07-02T06:55:21.224162015Z" level=info msg="StartContainer for \"cc4fb6aa0b632c2ca2f7d3eb327a16ef65a56029582d45058c614f4c67614cd8\""
Jul 2 06:55:21.228159 containerd[1497]: time="2024-07-02T06:55:21.227947134Z" level=info msg="CreateContainer within sandbox \"9c1134233601b27694116bd984ae45f54c7da007bd635343a0a5a753a2bde252\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f46e40e2d74f300c12585d5c16653ad667dbd816bf9a7bc476421f23b7f540df\""
Jul 2 06:55:21.229859 containerd[1497]: time="2024-07-02T06:55:21.229824163Z" level=info msg="CreateContainer within sandbox \"2d3793e9644de3eaa14c42beece15d5cd51991cfa2f2133e530ebff5ba0d1797\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"18a70ee0d4953c197f40fc1e25e1530314fd70d8259408b3a33d118cba1fd208\""
Jul 2 06:55:21.230194 containerd[1497]: time="2024-07-02T06:55:21.230155438Z" level=info msg="StartContainer for \"f46e40e2d74f300c12585d5c16653ad667dbd816bf9a7bc476421f23b7f540df\""
Jul 2 06:55:21.232893 containerd[1497]: time="2024-07-02T06:55:21.231561636Z" level=info msg="StartContainer for \"18a70ee0d4953c197f40fc1e25e1530314fd70d8259408b3a33d118cba1fd208\""
Jul 2 06:55:21.284624 systemd[1]: Started cri-containerd-f46e40e2d74f300c12585d5c16653ad667dbd816bf9a7bc476421f23b7f540df.scope - libcontainer container f46e40e2d74f300c12585d5c16653ad667dbd816bf9a7bc476421f23b7f540df.
Jul 2 06:55:21.293205 systemd[1]: Started cri-containerd-18a70ee0d4953c197f40fc1e25e1530314fd70d8259408b3a33d118cba1fd208.scope - libcontainer container 18a70ee0d4953c197f40fc1e25e1530314fd70d8259408b3a33d118cba1fd208.
Jul 2 06:55:21.306648 systemd[1]: Started cri-containerd-cc4fb6aa0b632c2ca2f7d3eb327a16ef65a56029582d45058c614f4c67614cd8.scope - libcontainer container cc4fb6aa0b632c2ca2f7d3eb327a16ef65a56029582d45058c614f4c67614cd8.
Jul 2 06:55:21.420911 containerd[1497]: time="2024-07-02T06:55:21.420854173Z" level=info msg="StartContainer for \"cc4fb6aa0b632c2ca2f7d3eb327a16ef65a56029582d45058c614f4c67614cd8\" returns successfully"
Jul 2 06:55:21.423692 containerd[1497]: time="2024-07-02T06:55:21.423559711Z" level=info msg="StartContainer for \"f46e40e2d74f300c12585d5c16653ad667dbd816bf9a7bc476421f23b7f540df\" returns successfully"
Jul 2 06:55:21.436452 containerd[1497]: time="2024-07-02T06:55:21.436383831Z" level=info msg="StartContainer for \"18a70ee0d4953c197f40fc1e25e1530314fd70d8259408b3a33d118cba1fd208\" returns successfully"
Jul 2 06:55:21.463454 kubelet[2284]: E0702 06:55:21.461969 2284 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.244.24.146:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.244.24.146:6443: connect: connection refused
Jul 2 06:55:22.591526 kubelet[2284]: I0702 06:55:22.589794 2284 kubelet_node_status.go:70] "Attempting to register node" node="srv-5ya4d.gb1.brightbox.com"
Jul 2 06:55:25.108520 kubelet[2284]: E0702 06:55:25.108458 2284 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-5ya4d.gb1.brightbox.com\" not found" node="srv-5ya4d.gb1.brightbox.com"
Jul 2 06:55:25.170740 kubelet[2284]: I0702 06:55:25.170581 2284 kubelet_node_status.go:73] "Successfully registered node" node="srv-5ya4d.gb1.brightbox.com"
Jul 2 06:55:25.455333 kubelet[2284]: I0702 06:55:25.455202 2284 apiserver.go:52] "Watching apiserver"
Jul 2 06:55:25.469846 kubelet[2284]: I0702 06:55:25.469750 2284 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jul 2 06:55:28.103335 systemd[1]: Reloading requested from client PID 2557 ('systemctl') (unit session-11.scope)...
Jul 2 06:55:28.103387 systemd[1]: Reloading...
Jul 2 06:55:28.259644 zram_generator::config[2594]: No configuration found.
Jul 2 06:55:28.479535 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 06:55:28.620683 systemd[1]: Reloading finished in 515 ms.
Jul 2 06:55:28.693258 kubelet[2284]: I0702 06:55:28.693161 2284 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 06:55:28.693748 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 06:55:28.712258 systemd[1]: kubelet.service: Deactivated successfully.
Jul 2 06:55:28.712797 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 06:55:28.712942 systemd[1]: kubelet.service: Consumed 1.685s CPU time, 111.4M memory peak, 0B memory swap peak.
Jul 2 06:55:28.721934 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 06:55:28.917456 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 06:55:28.930001 (kubelet)[2658]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 2 06:55:29.066806 kubelet[2658]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 06:55:29.066806 kubelet[2658]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 2 06:55:29.066806 kubelet[2658]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 06:55:29.066806 kubelet[2658]: I0702 06:55:29.066302 2658 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 2 06:55:29.081324 kubelet[2658]: I0702 06:55:29.081260 2658 server.go:467] "Kubelet version" kubeletVersion="v1.28.7"
Jul 2 06:55:29.081324 kubelet[2658]: I0702 06:55:29.081321 2658 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 2 06:55:29.081843 kubelet[2658]: I0702 06:55:29.081815 2658 server.go:895] "Client rotation is on, will bootstrap in background"
Jul 2 06:55:29.085416 kubelet[2658]: I0702 06:55:29.085090 2658 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 2 06:55:29.091500 kubelet[2658]: I0702 06:55:29.091240 2658 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 06:55:29.107169 kubelet[2658]: I0702 06:55:29.107123 2658 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 2 06:55:29.108724 kubelet[2658]: I0702 06:55:29.108167 2658 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 2 06:55:29.108724 kubelet[2658]: I0702 06:55:29.108448 2658 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jul 2 06:55:29.108724 kubelet[2658]: I0702 06:55:29.108490 2658 topology_manager.go:138] "Creating topology manager with none policy"
Jul 2 06:55:29.108724 kubelet[2658]: I0702 06:55:29.108508 2658 container_manager_linux.go:301] "Creating device plugin manager"
Jul 2 06:55:29.108724 kubelet[2658]: I0702 06:55:29.108578 2658 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 06:55:29.109349 kubelet[2658]: I0702 06:55:29.109328 2658 kubelet.go:393] "Attempting to sync node with API server"
Jul 2 06:55:29.109753 kubelet[2658]: I0702 06:55:29.109708 2658 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 2 06:55:29.110011 kubelet[2658]: I0702 06:55:29.109965 2658 kubelet.go:309] "Adding apiserver pod source"
Jul 2 06:55:29.115486 kubelet[2658]: I0702 06:55:29.113813 2658 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 2 06:55:29.125104 sudo[2671]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jul 2 06:55:29.125668 sudo[2671]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Jul 2 06:55:29.144785 kubelet[2658]: I0702 06:55:29.143506 2658 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Jul 2 06:55:29.144785 kubelet[2658]: I0702 06:55:29.144639 2658 server.go:1232] "Started kubelet"
Jul 2 06:55:29.149329 kubelet[2658]: I0702 06:55:29.149254 2658 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 2 06:55:29.152460 kubelet[2658]: E0702 06:55:29.151654 2658 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Jul 2 06:55:29.152825 kubelet[2658]: E0702 06:55:29.152772 2658 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 2 06:55:29.161632 kubelet[2658]: I0702 06:55:29.161579 2658 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jul 2 06:55:29.163005 kubelet[2658]: I0702 06:55:29.162302 2658 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jul 2 06:55:29.163005 kubelet[2658]: I0702 06:55:29.162662 2658 reconciler_new.go:29] "Reconciler: start to sync state"
Jul 2 06:55:29.165811 kubelet[2658]: I0702 06:55:29.164236 2658 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jul 2 06:55:29.173561 kubelet[2658]: I0702 06:55:29.171598 2658 server.go:462] "Adding debug handlers to kubelet server"
Jul 2 06:55:29.182750 kubelet[2658]: I0702 06:55:29.178245 2658 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Jul 2 06:55:29.182750 kubelet[2658]: I0702 06:55:29.180858 2658 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 2 06:55:29.182750 kubelet[2658]: I0702 06:55:29.178834 2658 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 2 06:55:29.190478 kubelet[2658]: I0702 06:55:29.190022 2658 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 2 06:55:29.190478 kubelet[2658]: I0702 06:55:29.190077 2658 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 2 06:55:29.190478 kubelet[2658]: I0702 06:55:29.190124 2658 kubelet.go:2303] "Starting kubelet main sync loop"
Jul 2 06:55:29.190478 kubelet[2658]: E0702 06:55:29.190216 2658 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 2 06:55:29.291568 kubelet[2658]: E0702 06:55:29.290840 2658 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 2 06:55:29.295546 kubelet[2658]: I0702 06:55:29.295464 2658 kubelet_node_status.go:70] "Attempting to register node" node="srv-5ya4d.gb1.brightbox.com"
Jul 2 06:55:29.324176 kubelet[2658]: I0702 06:55:29.322768 2658 kubelet_node_status.go:108] "Node was previously registered" node="srv-5ya4d.gb1.brightbox.com"
Jul 2 06:55:29.324176 kubelet[2658]: I0702 06:55:29.323000 2658 kubelet_node_status.go:73] "Successfully registered node" node="srv-5ya4d.gb1.brightbox.com"
Jul 2 06:55:29.379128 kubelet[2658]: I0702 06:55:29.377313 2658 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 2 06:55:29.379128 kubelet[2658]: I0702 06:55:29.377357 2658 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 2 06:55:29.379128 kubelet[2658]: I0702 06:55:29.377469 2658 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 06:55:29.379128 kubelet[2658]: I0702 06:55:29.377844 2658 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 2 06:55:29.379128 kubelet[2658]: I0702 06:55:29.377890 2658 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 2 06:55:29.379128 kubelet[2658]: I0702 06:55:29.378325 2658 policy_none.go:49] "None policy: Start"
Jul 2 06:55:29.383771 kubelet[2658]: I0702 06:55:29.382526 2658 memory_manager.go:169] "Starting memorymanager" policy="None"
Jul 2 06:55:29.383771 kubelet[2658]: I0702 06:55:29.382565 2658 state_mem.go:35] "Initializing new in-memory state store"
Jul 2 06:55:29.383771 kubelet[2658]: I0702 06:55:29.382842 2658 state_mem.go:75] "Updated machine memory state"
Jul 2 06:55:29.401910 kubelet[2658]: I0702 06:55:29.401573 2658 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 2 06:55:29.403116 kubelet[2658]: I0702 06:55:29.402705 2658 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 2 06:55:29.492661 kubelet[2658]: I0702 06:55:29.491573 2658 topology_manager.go:215] "Topology Admit Handler" podUID="0bf379c4fae37c46e2f648b0a21ed5b1" podNamespace="kube-system" podName="kube-apiserver-srv-5ya4d.gb1.brightbox.com"
Jul 2 06:55:29.492661 kubelet[2658]: I0702 06:55:29.491768 2658 topology_manager.go:215] "Topology Admit Handler" podUID="6c3775f991e3ad0277f51ff16709dc26" podNamespace="kube-system" podName="kube-controller-manager-srv-5ya4d.gb1.brightbox.com"
Jul 2 06:55:29.492661 kubelet[2658]: I0702 06:55:29.491839 2658 topology_manager.go:215] "Topology Admit Handler" podUID="3b8131b8885ebab3d650e1c591761a82" podNamespace="kube-system" podName="kube-scheduler-srv-5ya4d.gb1.brightbox.com"
Jul 2 06:55:29.509572 kubelet[2658]: W0702 06:55:29.509534 2658 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 2 06:55:29.509779 kubelet[2658]: W0702 06:55:29.509751 2658 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 2 06:55:29.511965 kubelet[2658]: W0702 06:55:29.511473 2658 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 2 06:55:29.568297 kubelet[2658]: I0702 06:55:29.567755 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0bf379c4fae37c46e2f648b0a21ed5b1-ca-certs\") pod \"kube-apiserver-srv-5ya4d.gb1.brightbox.com\" (UID: \"0bf379c4fae37c46e2f648b0a21ed5b1\") " pod="kube-system/kube-apiserver-srv-5ya4d.gb1.brightbox.com"
Jul 2 06:55:29.568297 kubelet[2658]: I0702 06:55:29.567826 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0bf379c4fae37c46e2f648b0a21ed5b1-k8s-certs\") pod \"kube-apiserver-srv-5ya4d.gb1.brightbox.com\" (UID: \"0bf379c4fae37c46e2f648b0a21ed5b1\") " pod="kube-system/kube-apiserver-srv-5ya4d.gb1.brightbox.com"
Jul 2 06:55:29.668639 kubelet[2658]: I0702 06:55:29.668567 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3b8131b8885ebab3d650e1c591761a82-kubeconfig\") pod \"kube-scheduler-srv-5ya4d.gb1.brightbox.com\" (UID: \"3b8131b8885ebab3d650e1c591761a82\") " pod="kube-system/kube-scheduler-srv-5ya4d.gb1.brightbox.com"
Jul 2 06:55:29.668827 kubelet[2658]: I0702 06:55:29.668693 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0bf379c4fae37c46e2f648b0a21ed5b1-usr-share-ca-certificates\") pod \"kube-apiserver-srv-5ya4d.gb1.brightbox.com\" (UID: \"0bf379c4fae37c46e2f648b0a21ed5b1\") " pod="kube-system/kube-apiserver-srv-5ya4d.gb1.brightbox.com"
Jul 2 06:55:29.668827 kubelet[2658]: I0702 06:55:29.668736 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6c3775f991e3ad0277f51ff16709dc26-flexvolume-dir\") pod \"kube-controller-manager-srv-5ya4d.gb1.brightbox.com\" (UID: \"6c3775f991e3ad0277f51ff16709dc26\") " pod="kube-system/kube-controller-manager-srv-5ya4d.gb1.brightbox.com"
Jul 2 06:55:29.668827 kubelet[2658]: I0702 06:55:29.668774 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6c3775f991e3ad0277f51ff16709dc26-kubeconfig\") pod \"kube-controller-manager-srv-5ya4d.gb1.brightbox.com\" (UID: \"6c3775f991e3ad0277f51ff16709dc26\") " pod="kube-system/kube-controller-manager-srv-5ya4d.gb1.brightbox.com"
Jul 2 06:55:29.668827 kubelet[2658]: I0702 06:55:29.668811 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6c3775f991e3ad0277f51ff16709dc26-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-5ya4d.gb1.brightbox.com\" (UID: \"6c3775f991e3ad0277f51ff16709dc26\") " pod="kube-system/kube-controller-manager-srv-5ya4d.gb1.brightbox.com"
Jul 2 06:55:29.669037 kubelet[2658]: I0702 06:55:29.668858 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6c3775f991e3ad0277f51ff16709dc26-ca-certs\") pod \"kube-controller-manager-srv-5ya4d.gb1.brightbox.com\" (UID: \"6c3775f991e3ad0277f51ff16709dc26\") " pod="kube-system/kube-controller-manager-srv-5ya4d.gb1.brightbox.com"
Jul 2 06:55:29.669037 kubelet[2658]: I0702 06:55:29.668893 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6c3775f991e3ad0277f51ff16709dc26-k8s-certs\") pod \"kube-controller-manager-srv-5ya4d.gb1.brightbox.com\" (UID: \"6c3775f991e3ad0277f51ff16709dc26\") " pod="kube-system/kube-controller-manager-srv-5ya4d.gb1.brightbox.com"
Jul 2 06:55:29.986203 sudo[2671]: pam_unix(sudo:session): session closed for user root
Jul 2 06:55:30.122127 kubelet[2658]: I0702 06:55:30.122059 2658 apiserver.go:52] "Watching apiserver"
Jul 2 06:55:30.162503 kubelet[2658]: I0702 06:55:30.162451 2658 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jul 2 06:55:30.281871 kubelet[2658]: W0702 06:55:30.281736 2658 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 2 06:55:30.283251 kubelet[2658]: E0702 06:55:30.283220 2658 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-5ya4d.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-5ya4d.gb1.brightbox.com"
Jul 2 06:55:30.311887 kubelet[2658]: I0702 06:55:30.310548 2658 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-5ya4d.gb1.brightbox.com" podStartSLOduration=1.310468919 podCreationTimestamp="2024-07-02 06:55:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 06:55:30.308626756 +0000 UTC m=+1.347393385" watchObservedRunningTime="2024-07-02 06:55:30.310468919 +0000 UTC m=+1.349235535"
Jul 2 06:55:30.335840 kubelet[2658]: I0702 06:55:30.335714 2658 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-5ya4d.gb1.brightbox.com" podStartSLOduration=1.335667942 podCreationTimestamp="2024-07-02 06:55:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 06:55:30.322649714 +0000 UTC m=+1.361416355" watchObservedRunningTime="2024-07-02 06:55:30.335667942 +0000 UTC m=+1.374434574"
Jul 2 06:55:30.350204 kubelet[2658]: I0702 06:55:30.350090 2658 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-5ya4d.gb1.brightbox.com" podStartSLOduration=1.350047926 podCreationTimestamp="2024-07-02 06:55:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 06:55:30.336776231 +0000 UTC m=+1.375542862" watchObservedRunningTime="2024-07-02 06:55:30.350047926 +0000 UTC m=+1.388814551"
Jul 2 06:55:31.699004 sudo[1767]: pam_unix(sudo:session): session closed for user root
Jul 2 06:55:31.841944 sshd[1764]: pam_unix(sshd:session): session closed for user core
Jul 2 06:55:31.846965 systemd[1]: sshd@8-10.244.24.146:22-139.178.89.65:60850.service: Deactivated successfully.
Jul 2 06:55:31.849894 systemd[1]: session-11.scope: Deactivated successfully.
Jul 2 06:55:31.850392 systemd[1]: session-11.scope: Consumed 6.775s CPU time, 133.3M memory peak, 0B memory swap peak.
Jul 2 06:55:31.851984 systemd-logind[1479]: Session 11 logged out. Waiting for processes to exit.
Jul 2 06:55:31.854003 systemd-logind[1479]: Removed session 11.
Jul 2 06:55:40.118199 kubelet[2658]: I0702 06:55:40.118116 2658 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 2 06:55:40.120638 containerd[1497]: time="2024-07-02T06:55:40.119904684Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 2 06:55:40.121806 kubelet[2658]: I0702 06:55:40.121722 2658 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 2 06:55:40.892238 kubelet[2658]: I0702 06:55:40.892169 2658 topology_manager.go:215] "Topology Admit Handler" podUID="ce9d0655-cf4a-476a-bdc1-ca9236e47bc1" podNamespace="kube-system" podName="kube-proxy-c7bbj"
Jul 2 06:55:40.914307 systemd[1]: Created slice kubepods-besteffort-podce9d0655_cf4a_476a_bdc1_ca9236e47bc1.slice - libcontainer container kubepods-besteffort-podce9d0655_cf4a_476a_bdc1_ca9236e47bc1.slice.
Jul 2 06:55:40.946117 kubelet[2658]: I0702 06:55:40.945753 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce9d0655-cf4a-476a-bdc1-ca9236e47bc1-xtables-lock\") pod \"kube-proxy-c7bbj\" (UID: \"ce9d0655-cf4a-476a-bdc1-ca9236e47bc1\") " pod="kube-system/kube-proxy-c7bbj"
Jul 2 06:55:40.946117 kubelet[2658]: I0702 06:55:40.945851 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce9d0655-cf4a-476a-bdc1-ca9236e47bc1-lib-modules\") pod \"kube-proxy-c7bbj\" (UID: \"ce9d0655-cf4a-476a-bdc1-ca9236e47bc1\") " pod="kube-system/kube-proxy-c7bbj"
Jul 2 06:55:40.946117 kubelet[2658]: I0702 06:55:40.945917 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ce9d0655-cf4a-476a-bdc1-ca9236e47bc1-kube-proxy\") pod \"kube-proxy-c7bbj\" (UID: \"ce9d0655-cf4a-476a-bdc1-ca9236e47bc1\") " pod="kube-system/kube-proxy-c7bbj"
Jul 2 06:55:40.946117 kubelet[2658]: I0702 06:55:40.945963 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kj4d\" (UniqueName: \"kubernetes.io/projected/ce9d0655-cf4a-476a-bdc1-ca9236e47bc1-kube-api-access-2kj4d\") pod \"kube-proxy-c7bbj\" (UID: \"ce9d0655-cf4a-476a-bdc1-ca9236e47bc1\") " pod="kube-system/kube-proxy-c7bbj"
Jul 2 06:55:40.946561 kubelet[2658]: I0702 06:55:40.946337 2658 topology_manager.go:215] "Topology Admit Handler" podUID="44c8ea2a-e670-493e-8fd9-17e6469dd5c4" podNamespace="kube-system" podName="cilium-7xplc"
Jul 2 06:55:40.966738 systemd[1]: Created slice kubepods-burstable-pod44c8ea2a_e670_493e_8fd9_17e6469dd5c4.slice - libcontainer container kubepods-burstable-pod44c8ea2a_e670_493e_8fd9_17e6469dd5c4.slice.
Jul 2 06:55:41.047889 kubelet[2658]: I0702 06:55:41.047785 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-xtables-lock\") pod \"cilium-7xplc\" (UID: \"44c8ea2a-e670-493e-8fd9-17e6469dd5c4\") " pod="kube-system/cilium-7xplc"
Jul 2 06:55:41.048177 kubelet[2658]: I0702 06:55:41.047930 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmnxx\" (UniqueName: \"kubernetes.io/projected/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-kube-api-access-bmnxx\") pod \"cilium-7xplc\" (UID: \"44c8ea2a-e670-493e-8fd9-17e6469dd5c4\") " pod="kube-system/cilium-7xplc"
Jul 2 06:55:41.048278 kubelet[2658]: I0702 06:55:41.048253 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-cni-path\") pod \"cilium-7xplc\" (UID: \"44c8ea2a-e670-493e-8fd9-17e6469dd5c4\") " pod="kube-system/cilium-7xplc"
Jul 2 06:55:41.048558 kubelet[2658]: I0702 06:55:41.048530 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-clustermesh-secrets\") pod \"cilium-7xplc\" (UID: \"44c8ea2a-e670-493e-8fd9-17e6469dd5c4\") " pod="kube-system/cilium-7xplc"
Jul 2 06:55:41.049058 kubelet[2658]: I0702 06:55:41.049013 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-hostproc\") pod \"cilium-7xplc\" (UID: \"44c8ea2a-e670-493e-8fd9-17e6469dd5c4\") " pod="kube-system/cilium-7xplc"
Jul 2 06:55:41.049299 kubelet[2658]: I0702 06:55:41.049269 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-bpf-maps\") pod \"cilium-7xplc\" (UID: \"44c8ea2a-e670-493e-8fd9-17e6469dd5c4\") " pod="kube-system/cilium-7xplc"
Jul 2 06:55:41.050447 kubelet[2658]: I0702 06:55:41.049321 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-cilium-cgroup\") pod \"cilium-7xplc\" (UID: \"44c8ea2a-e670-493e-8fd9-17e6469dd5c4\") " pod="kube-system/cilium-7xplc"
Jul 2 06:55:41.050447 kubelet[2658]: I0702 06:55:41.049847 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-etc-cni-netd\") pod \"cilium-7xplc\" (UID: \"44c8ea2a-e670-493e-8fd9-17e6469dd5c4\") " pod="kube-system/cilium-7xplc"
Jul 2 06:55:41.050447 kubelet[2658]: I0702 06:55:41.049894 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-cilium-config-path\") pod \"cilium-7xplc\" (UID: \"44c8ea2a-e670-493e-8fd9-17e6469dd5c4\") " pod="kube-system/cilium-7xplc"
Jul 2 06:55:41.050447 kubelet[2658]: I0702 06:55:41.050296 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-cilium-run\") pod \"cilium-7xplc\" (UID: \"44c8ea2a-e670-493e-8fd9-17e6469dd5c4\") " pod="kube-system/cilium-7xplc"
Jul 2 06:55:41.050447 kubelet[2658]: I0702 06:55:41.050340 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-host-proc-sys-net\") pod \"cilium-7xplc\" (UID: \"44c8ea2a-e670-493e-8fd9-17e6469dd5c4\") " pod="kube-system/cilium-7xplc"
Jul 2 06:55:41.054528 kubelet[2658]: I0702 06:55:41.053796 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-host-proc-sys-kernel\") pod \"cilium-7xplc\" (UID: \"44c8ea2a-e670-493e-8fd9-17e6469dd5c4\") " pod="kube-system/cilium-7xplc"
Jul 2 06:55:41.054528 kubelet[2658]: I0702 06:55:41.054221 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-lib-modules\") pod \"cilium-7xplc\" (UID: \"44c8ea2a-e670-493e-8fd9-17e6469dd5c4\") " pod="kube-system/cilium-7xplc"
Jul 2 06:55:41.055445 kubelet[2658]: I0702 06:55:41.055058 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-hubble-tls\") pod \"cilium-7xplc\" (UID: \"44c8ea2a-e670-493e-8fd9-17e6469dd5c4\") " pod="kube-system/cilium-7xplc"
Jul 2 06:55:41.224198 kubelet[2658]: I0702 06:55:41.223880 2658 topology_manager.go:215] "Topology Admit Handler" podUID="f8b54788-00d3-4831-a3cd-bde068fc8a41" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-dsqhc"
Jul 2 06:55:41.231851 containerd[1497]: time="2024-07-02T06:55:41.231766760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c7bbj,Uid:ce9d0655-cf4a-476a-bdc1-ca9236e47bc1,Namespace:kube-system,Attempt:0,}"
Jul 2 06:55:41.248003 systemd[1]: Created slice kubepods-besteffort-podf8b54788_00d3_4831_a3cd_bde068fc8a41.slice - libcontainer container kubepods-besteffort-podf8b54788_00d3_4831_a3cd_bde068fc8a41.slice.
Jul 2 06:55:41.258516 kubelet[2658]: I0702 06:55:41.257349 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f8b54788-00d3-4831-a3cd-bde068fc8a41-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-dsqhc\" (UID: \"f8b54788-00d3-4831-a3cd-bde068fc8a41\") " pod="kube-system/cilium-operator-6bc8ccdb58-dsqhc"
Jul 2 06:55:41.258516 kubelet[2658]: I0702 06:55:41.257493 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bwcf\" (UniqueName: \"kubernetes.io/projected/f8b54788-00d3-4831-a3cd-bde068fc8a41-kube-api-access-7bwcf\") pod \"cilium-operator-6bc8ccdb58-dsqhc\" (UID: \"f8b54788-00d3-4831-a3cd-bde068fc8a41\") " pod="kube-system/cilium-operator-6bc8ccdb58-dsqhc"
Jul 2 06:55:41.292688 containerd[1497]: time="2024-07-02T06:55:41.289225909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7xplc,Uid:44c8ea2a-e670-493e-8fd9-17e6469dd5c4,Namespace:kube-system,Attempt:0,}"
Jul 2 06:55:41.369601 containerd[1497]: time="2024-07-02T06:55:41.369005263Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 06:55:41.369601 containerd[1497]: time="2024-07-02T06:55:41.369163860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 06:55:41.369601 containerd[1497]: time="2024-07-02T06:55:41.369209801Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 06:55:41.369601 containerd[1497]: time="2024-07-02T06:55:41.369233842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 06:55:41.387724 containerd[1497]: time="2024-07-02T06:55:41.385776332Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 06:55:41.387724 containerd[1497]: time="2024-07-02T06:55:41.385912302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 06:55:41.387724 containerd[1497]: time="2024-07-02T06:55:41.385946852Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 06:55:41.387724 containerd[1497]: time="2024-07-02T06:55:41.385968563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 06:55:41.449341 systemd[1]: Started cri-containerd-6b0107101cdcade1bcc7b9eeedff4072ff4bcaa7b569b7d9493edee957e8e11e.scope - libcontainer container 6b0107101cdcade1bcc7b9eeedff4072ff4bcaa7b569b7d9493edee957e8e11e.
Jul 2 06:55:41.467325 systemd[1]: Started cri-containerd-005c3c93a3c5947d267fc00b88cf75e850f3ddbf282f7ca12bc73b411cf0149a.scope - libcontainer container 005c3c93a3c5947d267fc00b88cf75e850f3ddbf282f7ca12bc73b411cf0149a.
Jul 2 06:55:41.557251 containerd[1497]: time="2024-07-02T06:55:41.556005379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-dsqhc,Uid:f8b54788-00d3-4831-a3cd-bde068fc8a41,Namespace:kube-system,Attempt:0,}"
Jul 2 06:55:41.580812 containerd[1497]: time="2024-07-02T06:55:41.580654668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7xplc,Uid:44c8ea2a-e670-493e-8fd9-17e6469dd5c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b0107101cdcade1bcc7b9eeedff4072ff4bcaa7b569b7d9493edee957e8e11e\""
Jul 2 06:55:41.592466 containerd[1497]: time="2024-07-02T06:55:41.592131051Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jul 2 06:55:41.602094 containerd[1497]: time="2024-07-02T06:55:41.602037168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c7bbj,Uid:ce9d0655-cf4a-476a-bdc1-ca9236e47bc1,Namespace:kube-system,Attempt:0,} returns sandbox id \"005c3c93a3c5947d267fc00b88cf75e850f3ddbf282f7ca12bc73b411cf0149a\""
Jul 2 06:55:41.613489 containerd[1497]: time="2024-07-02T06:55:41.613292159Z" level=info msg="CreateContainer within sandbox \"005c3c93a3c5947d267fc00b88cf75e850f3ddbf282f7ca12bc73b411cf0149a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 2 06:55:41.635005 containerd[1497]: time="2024-07-02T06:55:41.630067132Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 06:55:41.635005 containerd[1497]: time="2024-07-02T06:55:41.634732458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 06:55:41.635005 containerd[1497]: time="2024-07-02T06:55:41.634763163Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 06:55:41.635005 containerd[1497]: time="2024-07-02T06:55:41.634838978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 06:55:41.641179 containerd[1497]: time="2024-07-02T06:55:41.640965401Z" level=info msg="CreateContainer within sandbox \"005c3c93a3c5947d267fc00b88cf75e850f3ddbf282f7ca12bc73b411cf0149a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7f8e8673b118fae2a30dea858c940323e835780f1f185fb6c786849ff60bcf71\""
Jul 2 06:55:41.645060 containerd[1497]: time="2024-07-02T06:55:41.644703368Z" level=info msg="StartContainer for \"7f8e8673b118fae2a30dea858c940323e835780f1f185fb6c786849ff60bcf71\""
Jul 2 06:55:41.667743 systemd[1]: Started cri-containerd-0c46e5c1a1a402478f0d3c6f28118ee96353e5ea72ab38b4047c98321956b5a9.scope - libcontainer container 0c46e5c1a1a402478f0d3c6f28118ee96353e5ea72ab38b4047c98321956b5a9.
Jul 2 06:55:41.713722 systemd[1]: Started cri-containerd-7f8e8673b118fae2a30dea858c940323e835780f1f185fb6c786849ff60bcf71.scope - libcontainer container 7f8e8673b118fae2a30dea858c940323e835780f1f185fb6c786849ff60bcf71.
Jul 2 06:55:41.793360 containerd[1497]: time="2024-07-02T06:55:41.792929586Z" level=info msg="StartContainer for \"7f8e8673b118fae2a30dea858c940323e835780f1f185fb6c786849ff60bcf71\" returns successfully"
Jul 2 06:55:41.814371 containerd[1497]: time="2024-07-02T06:55:41.814188850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-dsqhc,Uid:f8b54788-00d3-4831-a3cd-bde068fc8a41,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c46e5c1a1a402478f0d3c6f28118ee96353e5ea72ab38b4047c98321956b5a9\""
Jul 2 06:55:48.955743 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2925469739.mount: Deactivated successfully.
Jul 2 06:55:52.424534 containerd[1497]: time="2024-07-02T06:55:52.424396103Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 06:55:52.426828 containerd[1497]: time="2024-07-02T06:55:52.426782655Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735295"
Jul 2 06:55:52.427655 containerd[1497]: time="2024-07-02T06:55:52.427410481Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 06:55:52.438084 containerd[1497]: time="2024-07-02T06:55:52.437870445Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.845666322s"
Jul 2 06:55:52.438084 containerd[1497]: time="2024-07-02T06:55:52.437942915Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jul 2 06:55:52.447901 containerd[1497]: time="2024-07-02T06:55:52.446564762Z" level=info msg="CreateContainer within sandbox \"6b0107101cdcade1bcc7b9eeedff4072ff4bcaa7b569b7d9493edee957e8e11e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 2 06:55:52.453875 containerd[1497]: time="2024-07-02T06:55:52.453813543Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jul 2 06:55:52.531782 containerd[1497]: time="2024-07-02T06:55:52.531705197Z" level=info msg="CreateContainer within sandbox \"6b0107101cdcade1bcc7b9eeedff4072ff4bcaa7b569b7d9493edee957e8e11e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d141cfab2a0a5cb3f06db93bd16da3b535e472c32492e7ace2b7e68bb460a575\""
Jul 2 06:55:52.532868 containerd[1497]: time="2024-07-02T06:55:52.532708895Z" level=info msg="StartContainer for \"d141cfab2a0a5cb3f06db93bd16da3b535e472c32492e7ace2b7e68bb460a575\""
Jul 2 06:55:52.669571 systemd[1]: Started cri-containerd-d141cfab2a0a5cb3f06db93bd16da3b535e472c32492e7ace2b7e68bb460a575.scope - libcontainer container d141cfab2a0a5cb3f06db93bd16da3b535e472c32492e7ace2b7e68bb460a575.
Jul 2 06:55:52.734091 containerd[1497]: time="2024-07-02T06:55:52.733878758Z" level=info msg="StartContainer for \"d141cfab2a0a5cb3f06db93bd16da3b535e472c32492e7ace2b7e68bb460a575\" returns successfully"
Jul 2 06:55:52.766707 systemd[1]: cri-containerd-d141cfab2a0a5cb3f06db93bd16da3b535e472c32492e7ace2b7e68bb460a575.scope: Deactivated successfully.
Jul 2 06:55:53.006560 containerd[1497]: time="2024-07-02T06:55:53.006296517Z" level=info msg="shim disconnected" id=d141cfab2a0a5cb3f06db93bd16da3b535e472c32492e7ace2b7e68bb460a575 namespace=k8s.io
Jul 2 06:55:53.006560 containerd[1497]: time="2024-07-02T06:55:53.006412179Z" level=warning msg="cleaning up after shim disconnected" id=d141cfab2a0a5cb3f06db93bd16da3b535e472c32492e7ace2b7e68bb460a575 namespace=k8s.io
Jul 2 06:55:53.006560 containerd[1497]: time="2024-07-02T06:55:53.006456019Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 06:55:53.377382 containerd[1497]: time="2024-07-02T06:55:53.375705661Z" level=info msg="CreateContainer within sandbox \"6b0107101cdcade1bcc7b9eeedff4072ff4bcaa7b569b7d9493edee957e8e11e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 2 06:55:53.400217 containerd[1497]: time="2024-07-02T06:55:53.400121493Z" level=info msg="CreateContainer within sandbox \"6b0107101cdcade1bcc7b9eeedff4072ff4bcaa7b569b7d9493edee957e8e11e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"50ab3fd901f5788e5206d74768126a72bed029b34f1229eef517e095498bc2e7\""
Jul 2 06:55:53.401554 containerd[1497]: time="2024-07-02T06:55:53.401518905Z" level=info msg="StartContainer for \"50ab3fd901f5788e5206d74768126a72bed029b34f1229eef517e095498bc2e7\""
Jul 2 06:55:53.411522 kubelet[2658]: I0702 06:55:53.411451 2658 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-c7bbj" podStartSLOduration=13.409304319 podCreationTimestamp="2024-07-02 06:55:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 06:55:42.348850828 +0000 UTC m=+13.387617457" watchObservedRunningTime="2024-07-02 06:55:53.409304319 +0000 UTC m=+24.448070944"
Jul 2 06:55:53.464774 systemd[1]: Started cri-containerd-50ab3fd901f5788e5206d74768126a72bed029b34f1229eef517e095498bc2e7.scope - libcontainer container 50ab3fd901f5788e5206d74768126a72bed029b34f1229eef517e095498bc2e7.
Jul 2 06:55:53.507266 containerd[1497]: time="2024-07-02T06:55:53.507187150Z" level=info msg="StartContainer for \"50ab3fd901f5788e5206d74768126a72bed029b34f1229eef517e095498bc2e7\" returns successfully"
Jul 2 06:55:53.524782 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d141cfab2a0a5cb3f06db93bd16da3b535e472c32492e7ace2b7e68bb460a575-rootfs.mount: Deactivated successfully.
Jul 2 06:55:53.542805 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 06:55:53.543208 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 2 06:55:53.543397 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jul 2 06:55:53.555566 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 06:55:53.560268 systemd[1]: cri-containerd-50ab3fd901f5788e5206d74768126a72bed029b34f1229eef517e095498bc2e7.scope: Deactivated successfully.
Jul 2 06:55:53.627746 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50ab3fd901f5788e5206d74768126a72bed029b34f1229eef517e095498bc2e7-rootfs.mount: Deactivated successfully.
Jul 2 06:55:53.631358 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 06:55:53.635473 containerd[1497]: time="2024-07-02T06:55:53.635176855Z" level=info msg="shim disconnected" id=50ab3fd901f5788e5206d74768126a72bed029b34f1229eef517e095498bc2e7 namespace=k8s.io
Jul 2 06:55:53.635473 containerd[1497]: time="2024-07-02T06:55:53.635263046Z" level=warning msg="cleaning up after shim disconnected" id=50ab3fd901f5788e5206d74768126a72bed029b34f1229eef517e095498bc2e7 namespace=k8s.io
Jul 2 06:55:53.635473 containerd[1497]: time="2024-07-02T06:55:53.635284664Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 06:55:53.666457 containerd[1497]: time="2024-07-02T06:55:53.665209106Z" level=warning msg="cleanup warnings time=\"2024-07-02T06:55:53Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jul 2 06:55:53.910882 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1101954103.mount: Deactivated successfully.
Jul 2 06:55:54.403596 containerd[1497]: time="2024-07-02T06:55:54.403540041Z" level=info msg="CreateContainer within sandbox \"6b0107101cdcade1bcc7b9eeedff4072ff4bcaa7b569b7d9493edee957e8e11e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 2 06:55:54.456441 containerd[1497]: time="2024-07-02T06:55:54.455495348Z" level=info msg="CreateContainer within sandbox \"6b0107101cdcade1bcc7b9eeedff4072ff4bcaa7b569b7d9493edee957e8e11e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8a77237f43ac4cb3e0b1509c648209a9e7685a9bab9e51528df0fde63d6f7b6b\""
Jul 2 06:55:54.456441 containerd[1497]: time="2024-07-02T06:55:54.456257600Z" level=info msg="StartContainer for \"8a77237f43ac4cb3e0b1509c648209a9e7685a9bab9e51528df0fde63d6f7b6b\""
Jul 2 06:55:54.524498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount807377544.mount: Deactivated successfully.
Jul 2 06:55:54.550729 systemd[1]: Started cri-containerd-8a77237f43ac4cb3e0b1509c648209a9e7685a9bab9e51528df0fde63d6f7b6b.scope - libcontainer container 8a77237f43ac4cb3e0b1509c648209a9e7685a9bab9e51528df0fde63d6f7b6b.
Jul 2 06:55:54.625067 containerd[1497]: time="2024-07-02T06:55:54.625007420Z" level=info msg="StartContainer for \"8a77237f43ac4cb3e0b1509c648209a9e7685a9bab9e51528df0fde63d6f7b6b\" returns successfully"
Jul 2 06:55:54.631477 systemd[1]: cri-containerd-8a77237f43ac4cb3e0b1509c648209a9e7685a9bab9e51528df0fde63d6f7b6b.scope: Deactivated successfully.
Jul 2 06:55:54.693837 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a77237f43ac4cb3e0b1509c648209a9e7685a9bab9e51528df0fde63d6f7b6b-rootfs.mount: Deactivated successfully.
Jul 2 06:55:54.746103 containerd[1497]: time="2024-07-02T06:55:54.745998716Z" level=info msg="shim disconnected" id=8a77237f43ac4cb3e0b1509c648209a9e7685a9bab9e51528df0fde63d6f7b6b namespace=k8s.io
Jul 2 06:55:54.747020 containerd[1497]: time="2024-07-02T06:55:54.746989620Z" level=warning msg="cleaning up after shim disconnected" id=8a77237f43ac4cb3e0b1509c648209a9e7685a9bab9e51528df0fde63d6f7b6b namespace=k8s.io
Jul 2 06:55:54.747386 containerd[1497]: time="2024-07-02T06:55:54.747106218Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 06:55:54.790265 containerd[1497]: time="2024-07-02T06:55:54.790174649Z" level=warning msg="cleanup warnings time=\"2024-07-02T06:55:54Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jul 2 06:55:55.277047 containerd[1497]: time="2024-07-02T06:55:55.276981618Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 06:55:55.277917 containerd[1497]: time="2024-07-02T06:55:55.277842206Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907225"
Jul 2 06:55:55.279318 containerd[1497]: time="2024-07-02T06:55:55.279107250Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 06:55:55.281995 containerd[1497]: time="2024-07-02T06:55:55.281945738Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.827878013s"
Jul 2 06:55:55.282078 containerd[1497]: time="2024-07-02T06:55:55.282004854Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jul 2 06:55:55.285833 containerd[1497]: time="2024-07-02T06:55:55.285780783Z" level=info msg="CreateContainer within sandbox \"0c46e5c1a1a402478f0d3c6f28118ee96353e5ea72ab38b4047c98321956b5a9\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jul 2 06:55:55.304324 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount970055906.mount: Deactivated successfully.
Jul 2 06:55:55.307046 containerd[1497]: time="2024-07-02T06:55:55.306960972Z" level=info msg="CreateContainer within sandbox \"0c46e5c1a1a402478f0d3c6f28118ee96353e5ea72ab38b4047c98321956b5a9\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"4a9e79950ef8d0841b2133004e2dd0709e069d3104d655df6a3855356de8b92e\""
Jul 2 06:55:55.308908 containerd[1497]: time="2024-07-02T06:55:55.308854947Z" level=info msg="StartContainer for \"4a9e79950ef8d0841b2133004e2dd0709e069d3104d655df6a3855356de8b92e\""
Jul 2 06:55:55.351759 systemd[1]: Started cri-containerd-4a9e79950ef8d0841b2133004e2dd0709e069d3104d655df6a3855356de8b92e.scope - libcontainer container 4a9e79950ef8d0841b2133004e2dd0709e069d3104d655df6a3855356de8b92e.
Jul 2 06:55:55.414749 containerd[1497]: time="2024-07-02T06:55:55.414668280Z" level=info msg="StartContainer for \"4a9e79950ef8d0841b2133004e2dd0709e069d3104d655df6a3855356de8b92e\" returns successfully"
Jul 2 06:55:55.427916 containerd[1497]: time="2024-07-02T06:55:55.427862614Z" level=info msg="CreateContainer within sandbox \"6b0107101cdcade1bcc7b9eeedff4072ff4bcaa7b569b7d9493edee957e8e11e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 2 06:55:55.473808 containerd[1497]: time="2024-07-02T06:55:55.473679770Z" level=info msg="CreateContainer within sandbox \"6b0107101cdcade1bcc7b9eeedff4072ff4bcaa7b569b7d9493edee957e8e11e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0f0680845548bb495f61c4dd66e0d0dcd8dacfc25500f7938a2d73471fd17092\""
Jul 2 06:55:55.477515 containerd[1497]: time="2024-07-02T06:55:55.477474178Z" level=info msg="StartContainer for \"0f0680845548bb495f61c4dd66e0d0dcd8dacfc25500f7938a2d73471fd17092\""
Jul 2 06:55:55.546773 systemd[1]: Started cri-containerd-0f0680845548bb495f61c4dd66e0d0dcd8dacfc25500f7938a2d73471fd17092.scope - libcontainer container 0f0680845548bb495f61c4dd66e0d0dcd8dacfc25500f7938a2d73471fd17092.
Jul 2 06:55:55.620659 systemd[1]: cri-containerd-0f0680845548bb495f61c4dd66e0d0dcd8dacfc25500f7938a2d73471fd17092.scope: Deactivated successfully.
Jul 2 06:55:55.628480 containerd[1497]: time="2024-07-02T06:55:55.625851155Z" level=info msg="StartContainer for \"0f0680845548bb495f61c4dd66e0d0dcd8dacfc25500f7938a2d73471fd17092\" returns successfully"
Jul 2 06:55:55.675514 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f0680845548bb495f61c4dd66e0d0dcd8dacfc25500f7938a2d73471fd17092-rootfs.mount: Deactivated successfully.
Jul 2 06:55:55.742318 containerd[1497]: time="2024-07-02T06:55:55.741876452Z" level=info msg="shim disconnected" id=0f0680845548bb495f61c4dd66e0d0dcd8dacfc25500f7938a2d73471fd17092 namespace=k8s.io
Jul 2 06:55:55.742318 containerd[1497]: time="2024-07-02T06:55:55.741986191Z" level=warning msg="cleaning up after shim disconnected" id=0f0680845548bb495f61c4dd66e0d0dcd8dacfc25500f7938a2d73471fd17092 namespace=k8s.io
Jul 2 06:55:55.742318 containerd[1497]: time="2024-07-02T06:55:55.742005448Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 06:55:55.779106 containerd[1497]: time="2024-07-02T06:55:55.778450151Z" level=warning msg="cleanup warnings time=\"2024-07-02T06:55:55Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jul 2 06:55:56.439284 containerd[1497]: time="2024-07-02T06:55:56.439223552Z" level=info msg="CreateContainer within sandbox \"6b0107101cdcade1bcc7b9eeedff4072ff4bcaa7b569b7d9493edee957e8e11e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 2 06:55:56.501991 containerd[1497]: time="2024-07-02T06:55:56.501558977Z" level=info msg="CreateContainer within sandbox \"6b0107101cdcade1bcc7b9eeedff4072ff4bcaa7b569b7d9493edee957e8e11e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"df13861250e11cf10c39dc6305996cedee477fc20a065930b88d6a3d695bcec1\""
Jul 2 06:55:56.505445 containerd[1497]: time="2024-07-02T06:55:56.502978615Z" level=info msg="StartContainer for \"df13861250e11cf10c39dc6305996cedee477fc20a065930b88d6a3d695bcec1\""
Jul 2 06:55:56.590676 systemd[1]: Started cri-containerd-df13861250e11cf10c39dc6305996cedee477fc20a065930b88d6a3d695bcec1.scope - libcontainer container df13861250e11cf10c39dc6305996cedee477fc20a065930b88d6a3d695bcec1.
Jul 2 06:55:56.708212 containerd[1497]: time="2024-07-02T06:55:56.708061375Z" level=info msg="StartContainer for \"df13861250e11cf10c39dc6305996cedee477fc20a065930b88d6a3d695bcec1\" returns successfully"
Jul 2 06:55:57.087639 kubelet[2658]: I0702 06:55:57.087513 2658 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Jul 2 06:55:57.151129 kubelet[2658]: I0702 06:55:57.151050 2658 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-dsqhc" podStartSLOduration=2.687607158 podCreationTimestamp="2024-07-02 06:55:41 +0000 UTC" firstStartedPulling="2024-07-02 06:55:41.819316581 +0000 UTC m=+12.858083193" lastFinishedPulling="2024-07-02 06:55:55.282687136 +0000 UTC m=+26.321453750" observedRunningTime="2024-07-02 06:55:56.776559541 +0000 UTC m=+27.815326191" watchObservedRunningTime="2024-07-02 06:55:57.150977715 +0000 UTC m=+28.189744354"
Jul 2 06:55:57.152293 kubelet[2658]: I0702 06:55:57.152258 2658 topology_manager.go:215] "Topology Admit Handler" podUID="8a5ef03e-1909-4289-82f3-8715ffdf63bc" podNamespace="kube-system" podName="coredns-5dd5756b68-pzwzl"
Jul 2 06:55:57.156682 kubelet[2658]: I0702 06:55:57.156629 2658 topology_manager.go:215] "Topology Admit Handler" podUID="fcb82fa3-f268-4cf2-ab40-74f4404bceac" podNamespace="kube-system" podName="coredns-5dd5756b68-wbc45"
Jul 2 06:55:57.168071 systemd[1]: Created slice kubepods-burstable-pod8a5ef03e_1909_4289_82f3_8715ffdf63bc.slice - libcontainer container kubepods-burstable-pod8a5ef03e_1909_4289_82f3_8715ffdf63bc.slice.
Jul 2 06:55:57.180887 systemd[1]: Created slice kubepods-burstable-podfcb82fa3_f268_4cf2_ab40_74f4404bceac.slice - libcontainer container kubepods-burstable-podfcb82fa3_f268_4cf2_ab40_74f4404bceac.slice.
Jul 2 06:55:57.189184 kubelet[2658]: I0702 06:55:57.189005 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fcb82fa3-f268-4cf2-ab40-74f4404bceac-config-volume\") pod \"coredns-5dd5756b68-wbc45\" (UID: \"fcb82fa3-f268-4cf2-ab40-74f4404bceac\") " pod="kube-system/coredns-5dd5756b68-wbc45"
Jul 2 06:55:57.189184 kubelet[2658]: I0702 06:55:57.189063 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a5ef03e-1909-4289-82f3-8715ffdf63bc-config-volume\") pod \"coredns-5dd5756b68-pzwzl\" (UID: \"8a5ef03e-1909-4289-82f3-8715ffdf63bc\") " pod="kube-system/coredns-5dd5756b68-pzwzl"
Jul 2 06:55:57.189184 kubelet[2658]: I0702 06:55:57.189100 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxmcb\" (UniqueName: \"kubernetes.io/projected/8a5ef03e-1909-4289-82f3-8715ffdf63bc-kube-api-access-lxmcb\") pod \"coredns-5dd5756b68-pzwzl\" (UID: \"8a5ef03e-1909-4289-82f3-8715ffdf63bc\") " pod="kube-system/coredns-5dd5756b68-pzwzl"
Jul 2 06:55:57.189184 kubelet[2658]: I0702 06:55:57.189139 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vcq7\" (UniqueName: \"kubernetes.io/projected/fcb82fa3-f268-4cf2-ab40-74f4404bceac-kube-api-access-8vcq7\") pod \"coredns-5dd5756b68-wbc45\" (UID: \"fcb82fa3-f268-4cf2-ab40-74f4404bceac\") " pod="kube-system/coredns-5dd5756b68-wbc45"
Jul 2 06:55:57.472630 kubelet[2658]: I0702 06:55:57.472500 2658 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-7xplc" podStartSLOduration=6.624147197 podCreationTimestamp="2024-07-02 06:55:40 +0000 UTC" firstStartedPulling="2024-07-02 06:55:41.591076639 +0000 UTC m=+12.629843252" lastFinishedPulling="2024-07-02 06:55:52.439377822 +0000 UTC m=+23.478144435" observedRunningTime="2024-07-02 06:55:57.469353796 +0000 UTC m=+28.508120431" watchObservedRunningTime="2024-07-02 06:55:57.47244838 +0000 UTC m=+28.511215003"
Jul 2 06:55:57.476860 containerd[1497]: time="2024-07-02T06:55:57.475741001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-pzwzl,Uid:8a5ef03e-1909-4289-82f3-8715ffdf63bc,Namespace:kube-system,Attempt:0,}"
Jul 2 06:55:57.489230 containerd[1497]: time="2024-07-02T06:55:57.489193636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-wbc45,Uid:fcb82fa3-f268-4cf2-ab40-74f4404bceac,Namespace:kube-system,Attempt:0,}"
Jul 2 06:55:59.646068 systemd-networkd[1415]: cilium_host: Link UP
Jul 2 06:55:59.649310 systemd-networkd[1415]: cilium_net: Link UP
Jul 2 06:55:59.651021 systemd-networkd[1415]: cilium_net: Gained carrier
Jul 2 06:55:59.651358 systemd-networkd[1415]: cilium_host: Gained carrier
Jul 2 06:55:59.651649 systemd-networkd[1415]: cilium_net: Gained IPv6LL
Jul 2 06:55:59.651946 systemd-networkd[1415]: cilium_host: Gained IPv6LL
Jul 2 06:55:59.815741 systemd-networkd[1415]: cilium_vxlan: Link UP
Jul 2 06:55:59.815754 systemd-networkd[1415]: cilium_vxlan: Gained carrier
Jul 2 06:56:00.364849 kernel: NET: Registered PF_ALG protocol family
Jul 2 06:56:01.127709 systemd-networkd[1415]: cilium_vxlan: Gained IPv6LL
Jul 2 06:56:01.510692 systemd-networkd[1415]: lxc_health: Link UP
Jul 2 06:56:01.517658 systemd-networkd[1415]: lxc_health: Gained carrier
Jul 2 06:56:02.089567 systemd-networkd[1415]: lxc340785c37e48: Link UP
Jul 2 06:56:02.096306 kernel: eth0: renamed from tmpb4781
Jul 2 06:56:02.105558 systemd-networkd[1415]: lxc340785c37e48: Gained carrier
Jul 2 06:56:02.155612 systemd-networkd[1415]: lxcc8e5dad2b036: Link UP
Jul 2 06:56:02.161744 kernel: eth0: renamed from tmpdc3e6
Jul 2 06:56:02.170547 systemd-networkd[1415]: lxcc8e5dad2b036: Gained carrier
Jul 2 06:56:03.175774 systemd-networkd[1415]: lxc_health: Gained IPv6LL
Jul 2 06:56:03.303595 systemd-networkd[1415]: lxc340785c37e48: Gained IPv6LL
Jul 2 06:56:03.815683 systemd-networkd[1415]: lxcc8e5dad2b036: Gained IPv6LL
Jul 2 06:56:07.824084 containerd[1497]: time="2024-07-02T06:56:07.823844687Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 06:56:07.824084 containerd[1497]: time="2024-07-02T06:56:07.823968724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 06:56:07.824084 containerd[1497]: time="2024-07-02T06:56:07.824004258Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 06:56:07.824084 containerd[1497]: time="2024-07-02T06:56:07.824035757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 06:56:07.906716 systemd[1]: Started cri-containerd-dc3e67e6e7d10f5696f7b7c9c9820b20d9d6f2b5d1f641d024b34f7a39e35d4a.scope - libcontainer container dc3e67e6e7d10f5696f7b7c9c9820b20d9d6f2b5d1f641d024b34f7a39e35d4a.
Jul 2 06:56:07.941942 containerd[1497]: time="2024-07-02T06:56:07.941784195Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 06:56:07.942267 containerd[1497]: time="2024-07-02T06:56:07.942218429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 06:56:07.942502 containerd[1497]: time="2024-07-02T06:56:07.942425804Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 06:56:07.942686 containerd[1497]: time="2024-07-02T06:56:07.942625900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 06:56:08.026817 systemd[1]: Started cri-containerd-b4781a05e757db20078fdcae2256d1fdfe5f8ad1315f2fc0024a7ec69560dbd9.scope - libcontainer container b4781a05e757db20078fdcae2256d1fdfe5f8ad1315f2fc0024a7ec69560dbd9.
Jul 2 06:56:08.048418 containerd[1497]: time="2024-07-02T06:56:08.048361630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-wbc45,Uid:fcb82fa3-f268-4cf2-ab40-74f4404bceac,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc3e67e6e7d10f5696f7b7c9c9820b20d9d6f2b5d1f641d024b34f7a39e35d4a\""
Jul 2 06:56:08.059998 containerd[1497]: time="2024-07-02T06:56:08.059755954Z" level=info msg="CreateContainer within sandbox \"dc3e67e6e7d10f5696f7b7c9c9820b20d9d6f2b5d1f641d024b34f7a39e35d4a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 2 06:56:08.086943 containerd[1497]: time="2024-07-02T06:56:08.086894619Z" level=info msg="CreateContainer within sandbox \"dc3e67e6e7d10f5696f7b7c9c9820b20d9d6f2b5d1f641d024b34f7a39e35d4a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3753798e31cb7cf35088b94af2d77dd0a24319e6f5b156a644ff1d0e99f095f5\""
Jul 2 06:56:08.088349 containerd[1497]: time="2024-07-02T06:56:08.088267730Z" level=info msg="StartContainer for \"3753798e31cb7cf35088b94af2d77dd0a24319e6f5b156a644ff1d0e99f095f5\""
Jul 2 06:56:08.152651 systemd[1]: Started cri-containerd-3753798e31cb7cf35088b94af2d77dd0a24319e6f5b156a644ff1d0e99f095f5.scope - libcontainer container 3753798e31cb7cf35088b94af2d77dd0a24319e6f5b156a644ff1d0e99f095f5.
Jul 2 06:56:08.167816 containerd[1497]: time="2024-07-02T06:56:08.167752582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-pzwzl,Uid:8a5ef03e-1909-4289-82f3-8715ffdf63bc,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4781a05e757db20078fdcae2256d1fdfe5f8ad1315f2fc0024a7ec69560dbd9\""
Jul 2 06:56:08.175144 containerd[1497]: time="2024-07-02T06:56:08.175078371Z" level=info msg="CreateContainer within sandbox \"b4781a05e757db20078fdcae2256d1fdfe5f8ad1315f2fc0024a7ec69560dbd9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 2 06:56:08.208351 containerd[1497]: time="2024-07-02T06:56:08.208239040Z" level=info msg="CreateContainer within sandbox \"b4781a05e757db20078fdcae2256d1fdfe5f8ad1315f2fc0024a7ec69560dbd9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8be4da2d3edfbed445a70e9588c80438c083ee383fbb8f256009b90fe1b3ee8a\""
Jul 2 06:56:08.210169 containerd[1497]: time="2024-07-02T06:56:08.210112555Z" level=info msg="StartContainer for \"8be4da2d3edfbed445a70e9588c80438c083ee383fbb8f256009b90fe1b3ee8a\""
Jul 2 06:56:08.219011 containerd[1497]: time="2024-07-02T06:56:08.218814651Z" level=info msg="StartContainer for \"3753798e31cb7cf35088b94af2d77dd0a24319e6f5b156a644ff1d0e99f095f5\" returns successfully"
Jul 2 06:56:08.259826 systemd[1]: Started cri-containerd-8be4da2d3edfbed445a70e9588c80438c083ee383fbb8f256009b90fe1b3ee8a.scope - libcontainer container 8be4da2d3edfbed445a70e9588c80438c083ee383fbb8f256009b90fe1b3ee8a.
Jul 2 06:56:08.310700 containerd[1497]: time="2024-07-02T06:56:08.310276515Z" level=info msg="StartContainer for \"8be4da2d3edfbed445a70e9588c80438c083ee383fbb8f256009b90fe1b3ee8a\" returns successfully"
Jul 2 06:56:08.499584 kubelet[2658]: I0702 06:56:08.499203 2658 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-wbc45" podStartSLOduration=27.499152115 podCreationTimestamp="2024-07-02 06:55:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 06:56:08.497268973 +0000 UTC m=+39.536035606" watchObservedRunningTime="2024-07-02 06:56:08.499152115 +0000 UTC m=+39.537918740"
Jul 2 06:56:08.516116 kubelet[2658]: I0702 06:56:08.515715 2658 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-pzwzl" podStartSLOduration=27.515650848 podCreationTimestamp="2024-07-02 06:55:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 06:56:08.514244421 +0000 UTC m=+39.553011062" watchObservedRunningTime="2024-07-02 06:56:08.515650848 +0000 UTC m=+39.554417473"
Jul 2 06:56:08.845891 systemd[1]: run-containerd-runc-k8s.io-b4781a05e757db20078fdcae2256d1fdfe5f8ad1315f2fc0024a7ec69560dbd9-runc.V4k1yK.mount: Deactivated successfully.
Jul 2 06:56:40.394772 systemd[1]: Started sshd@9-10.244.24.146:22-139.178.89.65:48710.service - OpenSSH per-connection server daemon (139.178.89.65:48710).
Jul 2 06:56:41.298704 sshd[4048]: Accepted publickey for core from 139.178.89.65 port 48710 ssh2: RSA SHA256:UZYYGxahQSuaJ4Go9BMFXc5O2kGoWTMSkKIILUYSRzM
Jul 2 06:56:41.302863 sshd[4048]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 06:56:41.312257 systemd-logind[1479]: New session 12 of user core.
Jul 2 06:56:41.321816 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 2 06:56:42.427847 sshd[4048]: pam_unix(sshd:session): session closed for user core
Jul 2 06:56:42.433237 systemd[1]: sshd@9-10.244.24.146:22-139.178.89.65:48710.service: Deactivated successfully.
Jul 2 06:56:42.437146 systemd[1]: session-12.scope: Deactivated successfully.
Jul 2 06:56:42.439517 systemd-logind[1479]: Session 12 logged out. Waiting for processes to exit.
Jul 2 06:56:42.440981 systemd-logind[1479]: Removed session 12.
Jul 2 06:56:47.591929 systemd[1]: Started sshd@10-10.244.24.146:22-139.178.89.65:48716.service - OpenSSH per-connection server daemon (139.178.89.65:48716).
Jul 2 06:56:48.481507 sshd[4064]: Accepted publickey for core from 139.178.89.65 port 48716 ssh2: RSA SHA256:UZYYGxahQSuaJ4Go9BMFXc5O2kGoWTMSkKIILUYSRzM
Jul 2 06:56:48.483872 sshd[4064]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 06:56:48.492748 systemd-logind[1479]: New session 13 of user core.
Jul 2 06:56:48.500711 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 2 06:56:49.206292 sshd[4064]: pam_unix(sshd:session): session closed for user core
Jul 2 06:56:49.211061 systemd-logind[1479]: Session 13 logged out. Waiting for processes to exit.
Jul 2 06:56:49.211902 systemd[1]: sshd@10-10.244.24.146:22-139.178.89.65:48716.service: Deactivated successfully.
Jul 2 06:56:49.216823 systemd[1]: session-13.scope: Deactivated successfully.
Jul 2 06:56:49.220377 systemd-logind[1479]: Removed session 13.
Jul 2 06:56:54.365021 systemd[1]: Started sshd@11-10.244.24.146:22-139.178.89.65:52396.service - OpenSSH per-connection server daemon (139.178.89.65:52396).
Jul 2 06:56:55.245085 sshd[4078]: Accepted publickey for core from 139.178.89.65 port 52396 ssh2: RSA SHA256:UZYYGxahQSuaJ4Go9BMFXc5O2kGoWTMSkKIILUYSRzM
Jul 2 06:56:55.247391 sshd[4078]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 06:56:55.258830 systemd-logind[1479]: New session 14 of user core.
Jul 2 06:56:55.265683 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 2 06:56:55.946355 sshd[4078]: pam_unix(sshd:session): session closed for user core
Jul 2 06:56:55.952027 systemd[1]: sshd@11-10.244.24.146:22-139.178.89.65:52396.service: Deactivated successfully.
Jul 2 06:56:55.954997 systemd[1]: session-14.scope: Deactivated successfully.
Jul 2 06:56:55.956090 systemd-logind[1479]: Session 14 logged out. Waiting for processes to exit.
Jul 2 06:56:55.957672 systemd-logind[1479]: Removed session 14.
Jul 2 06:57:01.107485 systemd[1]: Started sshd@12-10.244.24.146:22-139.178.89.65:57168.service - OpenSSH per-connection server daemon (139.178.89.65:57168).
Jul 2 06:57:01.990485 sshd[4092]: Accepted publickey for core from 139.178.89.65 port 57168 ssh2: RSA SHA256:UZYYGxahQSuaJ4Go9BMFXc5O2kGoWTMSkKIILUYSRzM
Jul 2 06:57:01.991929 sshd[4092]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 06:57:02.001713 systemd-logind[1479]: New session 15 of user core.
Jul 2 06:57:02.004653 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 2 06:57:02.743595 sshd[4092]: pam_unix(sshd:session): session closed for user core
Jul 2 06:57:02.749078 systemd-logind[1479]: Session 15 logged out. Waiting for processes to exit.
Jul 2 06:57:02.750199 systemd[1]: sshd@12-10.244.24.146:22-139.178.89.65:57168.service: Deactivated successfully.
Jul 2 06:57:02.754009 systemd[1]: session-15.scope: Deactivated successfully.
Jul 2 06:57:02.757599 systemd-logind[1479]: Removed session 15.
Jul 2 06:57:02.900863 systemd[1]: Started sshd@13-10.244.24.146:22-139.178.89.65:57174.service - OpenSSH per-connection server daemon (139.178.89.65:57174).
Jul 2 06:57:03.785110 sshd[4108]: Accepted publickey for core from 139.178.89.65 port 57174 ssh2: RSA SHA256:UZYYGxahQSuaJ4Go9BMFXc5O2kGoWTMSkKIILUYSRzM
Jul 2 06:57:03.787391 sshd[4108]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 06:57:03.796019 systemd-logind[1479]: New session 16 of user core.
Jul 2 06:57:03.804790 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 2 06:57:05.614975 sshd[4108]: pam_unix(sshd:session): session closed for user core
Jul 2 06:57:05.623247 systemd[1]: sshd@13-10.244.24.146:22-139.178.89.65:57174.service: Deactivated successfully.
Jul 2 06:57:05.624157 systemd-logind[1479]: Session 16 logged out. Waiting for processes to exit.
Jul 2 06:57:05.627343 systemd[1]: session-16.scope: Deactivated successfully.
Jul 2 06:57:05.631931 systemd-logind[1479]: Removed session 16.
Jul 2 06:57:05.766844 systemd[1]: Started sshd@14-10.244.24.146:22-139.178.89.65:57178.service - OpenSSH per-connection server daemon (139.178.89.65:57178).
Jul 2 06:57:06.664625 sshd[4119]: Accepted publickey for core from 139.178.89.65 port 57178 ssh2: RSA SHA256:UZYYGxahQSuaJ4Go9BMFXc5O2kGoWTMSkKIILUYSRzM
Jul 2 06:57:06.666796 sshd[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 06:57:06.674660 systemd-logind[1479]: New session 17 of user core.
Jul 2 06:57:06.681816 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 2 06:57:07.379063 sshd[4119]: pam_unix(sshd:session): session closed for user core
Jul 2 06:57:07.385363 systemd[1]: sshd@14-10.244.24.146:22-139.178.89.65:57178.service: Deactivated successfully.
Jul 2 06:57:07.388944 systemd[1]: session-17.scope: Deactivated successfully.
Jul 2 06:57:07.390074 systemd-logind[1479]: Session 17 logged out. Waiting for processes to exit.
Jul 2 06:57:07.392192 systemd-logind[1479]: Removed session 17.
Jul 2 06:57:12.567885 systemd[1]: Started sshd@15-10.244.24.146:22-139.178.89.65:40508.service - OpenSSH per-connection server daemon (139.178.89.65:40508).
Jul 2 06:57:13.462685 sshd[4134]: Accepted publickey for core from 139.178.89.65 port 40508 ssh2: RSA SHA256:UZYYGxahQSuaJ4Go9BMFXc5O2kGoWTMSkKIILUYSRzM
Jul 2 06:57:13.465368 sshd[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 06:57:13.473327 systemd-logind[1479]: New session 18 of user core.
Jul 2 06:57:13.480923 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 2 06:57:14.162721 sshd[4134]: pam_unix(sshd:session): session closed for user core
Jul 2 06:57:14.168123 systemd[1]: sshd@15-10.244.24.146:22-139.178.89.65:40508.service: Deactivated successfully.
Jul 2 06:57:14.172158 systemd[1]: session-18.scope: Deactivated successfully.
Jul 2 06:57:14.173669 systemd-logind[1479]: Session 18 logged out. Waiting for processes to exit.
Jul 2 06:57:14.175724 systemd-logind[1479]: Removed session 18.
Jul 2 06:57:19.315759 systemd[1]: Started sshd@16-10.244.24.146:22-139.178.89.65:46408.service - OpenSSH per-connection server daemon (139.178.89.65:46408).
Jul 2 06:57:20.183547 sshd[4147]: Accepted publickey for core from 139.178.89.65 port 46408 ssh2: RSA SHA256:UZYYGxahQSuaJ4Go9BMFXc5O2kGoWTMSkKIILUYSRzM
Jul 2 06:57:20.185867 sshd[4147]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 06:57:20.193171 systemd-logind[1479]: New session 19 of user core.
Jul 2 06:57:20.202777 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 2 06:57:20.878688 sshd[4147]: pam_unix(sshd:session): session closed for user core
Jul 2 06:57:20.884685 systemd-logind[1479]: Session 19 logged out. Waiting for processes to exit.
Jul 2 06:57:20.885184 systemd[1]: sshd@16-10.244.24.146:22-139.178.89.65:46408.service: Deactivated successfully.
Jul 2 06:57:20.888115 systemd[1]: session-19.scope: Deactivated successfully.
Jul 2 06:57:20.889586 systemd-logind[1479]: Removed session 19.
Jul 2 06:57:21.030636 systemd[1]: Started sshd@17-10.244.24.146:22-139.178.89.65:46416.service - OpenSSH per-connection server daemon (139.178.89.65:46416).
Jul 2 06:57:21.913475 sshd[4160]: Accepted publickey for core from 139.178.89.65 port 46416 ssh2: RSA SHA256:UZYYGxahQSuaJ4Go9BMFXc5O2kGoWTMSkKIILUYSRzM
Jul 2 06:57:21.915698 sshd[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 06:57:21.926613 systemd-logind[1479]: New session 20 of user core.
Jul 2 06:57:21.933782 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 2 06:57:23.076113 sshd[4160]: pam_unix(sshd:session): session closed for user core
Jul 2 06:57:23.082193 systemd[1]: sshd@17-10.244.24.146:22-139.178.89.65:46416.service: Deactivated successfully.
Jul 2 06:57:23.086408 systemd[1]: session-20.scope: Deactivated successfully.
Jul 2 06:57:23.088521 systemd-logind[1479]: Session 20 logged out. Waiting for processes to exit.
Jul 2 06:57:23.090931 systemd-logind[1479]: Removed session 20.
Jul 2 06:57:23.229845 systemd[1]: Started sshd@18-10.244.24.146:22-139.178.89.65:46424.service - OpenSSH per-connection server daemon (139.178.89.65:46424).
Jul 2 06:57:24.117344 sshd[4171]: Accepted publickey for core from 139.178.89.65 port 46424 ssh2: RSA SHA256:UZYYGxahQSuaJ4Go9BMFXc5O2kGoWTMSkKIILUYSRzM
Jul 2 06:57:24.119690 sshd[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 06:57:24.129070 systemd-logind[1479]: New session 21 of user core.
Jul 2 06:57:24.135838 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 2 06:57:26.009739 sshd[4171]: pam_unix(sshd:session): session closed for user core
Jul 2 06:57:26.013955 systemd-logind[1479]: Session 21 logged out. Waiting for processes to exit.
Jul 2 06:57:26.014548 systemd[1]: sshd@18-10.244.24.146:22-139.178.89.65:46424.service: Deactivated successfully.
Jul 2 06:57:26.017417 systemd[1]: session-21.scope: Deactivated successfully.
Jul 2 06:57:26.020292 systemd-logind[1479]: Removed session 21.
Jul 2 06:57:26.171764 systemd[1]: Started sshd@19-10.244.24.146:22-139.178.89.65:46430.service - OpenSSH per-connection server daemon (139.178.89.65:46430).
Jul 2 06:57:27.056178 sshd[4189]: Accepted publickey for core from 139.178.89.65 port 46430 ssh2: RSA SHA256:UZYYGxahQSuaJ4Go9BMFXc5O2kGoWTMSkKIILUYSRzM
Jul 2 06:57:27.058844 sshd[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 06:57:27.066054 systemd-logind[1479]: New session 22 of user core.
Jul 2 06:57:27.074699 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 2 06:57:28.047739 sshd[4189]: pam_unix(sshd:session): session closed for user core
Jul 2 06:57:28.054016 systemd[1]: sshd@19-10.244.24.146:22-139.178.89.65:46430.service: Deactivated successfully.
Jul 2 06:57:28.057211 systemd[1]: session-22.scope: Deactivated successfully.
Jul 2 06:57:28.058492 systemd-logind[1479]: Session 22 logged out. Waiting for processes to exit.
Jul 2 06:57:28.061389 systemd-logind[1479]: Removed session 22.
Jul 2 06:57:28.208602 systemd[1]: Started sshd@20-10.244.24.146:22-139.178.89.65:58762.service - OpenSSH per-connection server daemon (139.178.89.65:58762).
Jul 2 06:57:29.094495 sshd[4201]: Accepted publickey for core from 139.178.89.65 port 58762 ssh2: RSA SHA256:UZYYGxahQSuaJ4Go9BMFXc5O2kGoWTMSkKIILUYSRzM
Jul 2 06:57:29.095650 sshd[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 06:57:29.103991 systemd-logind[1479]: New session 23 of user core.
Jul 2 06:57:29.110674 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 2 06:57:29.807786 sshd[4201]: pam_unix(sshd:session): session closed for user core
Jul 2 06:57:29.815106 systemd[1]: sshd@20-10.244.24.146:22-139.178.89.65:58762.service: Deactivated successfully.
Jul 2 06:57:29.818294 systemd[1]: session-23.scope: Deactivated successfully.
Jul 2 06:57:29.820899 systemd-logind[1479]: Session 23 logged out. Waiting for processes to exit.
Jul 2 06:57:29.822712 systemd-logind[1479]: Removed session 23.
Jul 2 06:57:34.970079 systemd[1]: Started sshd@21-10.244.24.146:22-139.178.89.65:58766.service - OpenSSH per-connection server daemon (139.178.89.65:58766).
Jul 2 06:57:35.838534 sshd[4219]: Accepted publickey for core from 139.178.89.65 port 58766 ssh2: RSA SHA256:UZYYGxahQSuaJ4Go9BMFXc5O2kGoWTMSkKIILUYSRzM
Jul 2 06:57:35.840843 sshd[4219]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 06:57:35.848560 systemd-logind[1479]: New session 24 of user core.
Jul 2 06:57:35.856690 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 2 06:57:36.526098 sshd[4219]: pam_unix(sshd:session): session closed for user core
Jul 2 06:57:36.532228 systemd[1]: sshd@21-10.244.24.146:22-139.178.89.65:58766.service: Deactivated successfully.
Jul 2 06:57:36.535250 systemd[1]: session-24.scope: Deactivated successfully.
Jul 2 06:57:36.536517 systemd-logind[1479]: Session 24 logged out. Waiting for processes to exit.
Jul 2 06:57:36.538301 systemd-logind[1479]: Removed session 24.
Jul 2 06:57:41.682011 systemd[1]: Started sshd@22-10.244.24.146:22-139.178.89.65:38490.service - OpenSSH per-connection server daemon (139.178.89.65:38490).
Jul 2 06:57:42.557391 sshd[4232]: Accepted publickey for core from 139.178.89.65 port 38490 ssh2: RSA SHA256:UZYYGxahQSuaJ4Go9BMFXc5O2kGoWTMSkKIILUYSRzM
Jul 2 06:57:42.558994 sshd[4232]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 06:57:42.567875 systemd-logind[1479]: New session 25 of user core.
Jul 2 06:57:42.578699 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 2 06:57:43.251096 sshd[4232]: pam_unix(sshd:session): session closed for user core
Jul 2 06:57:43.257327 systemd[1]: sshd@22-10.244.24.146:22-139.178.89.65:38490.service: Deactivated successfully.
Jul 2 06:57:43.261047 systemd[1]: session-25.scope: Deactivated successfully.
Jul 2 06:57:43.262322 systemd-logind[1479]: Session 25 logged out. Waiting for processes to exit.
Jul 2 06:57:43.264314 systemd-logind[1479]: Removed session 25.
Jul 2 06:57:48.406810 systemd[1]: Started sshd@23-10.244.24.146:22-139.178.89.65:56488.service - OpenSSH per-connection server daemon (139.178.89.65:56488).
Jul 2 06:57:49.278570 sshd[4247]: Accepted publickey for core from 139.178.89.65 port 56488 ssh2: RSA SHA256:UZYYGxahQSuaJ4Go9BMFXc5O2kGoWTMSkKIILUYSRzM
Jul 2 06:57:49.280773 sshd[4247]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 06:57:49.287130 systemd-logind[1479]: New session 26 of user core.
Jul 2 06:57:49.291640 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 2 06:57:49.965204 sshd[4247]: pam_unix(sshd:session): session closed for user core
Jul 2 06:57:49.970208 systemd[1]: sshd@23-10.244.24.146:22-139.178.89.65:56488.service: Deactivated successfully.
Jul 2 06:57:49.972443 systemd[1]: session-26.scope: Deactivated successfully.
Jul 2 06:57:49.973473 systemd-logind[1479]: Session 26 logged out. Waiting for processes to exit.
Jul 2 06:57:49.974768 systemd-logind[1479]: Removed session 26.
Jul 2 06:57:50.122779 systemd[1]: Started sshd@24-10.244.24.146:22-139.178.89.65:56502.service - OpenSSH per-connection server daemon (139.178.89.65:56502).
Jul 2 06:57:50.988484 sshd[4259]: Accepted publickey for core from 139.178.89.65 port 56502 ssh2: RSA SHA256:UZYYGxahQSuaJ4Go9BMFXc5O2kGoWTMSkKIILUYSRzM
Jul 2 06:57:50.991222 sshd[4259]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 06:57:50.999535 systemd-logind[1479]: New session 27 of user core.
Jul 2 06:57:51.005637 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 2 06:57:52.920996 containerd[1497]: time="2024-07-02T06:57:52.920789527Z" level=info msg="StopContainer for \"4a9e79950ef8d0841b2133004e2dd0709e069d3104d655df6a3855356de8b92e\" with timeout 30 (s)"
Jul 2 06:57:52.939439 systemd[1]: run-containerd-runc-k8s.io-df13861250e11cf10c39dc6305996cedee477fc20a065930b88d6a3d695bcec1-runc.IWwKw0.mount: Deactivated successfully.
Jul 2 06:57:52.941449 containerd[1497]: time="2024-07-02T06:57:52.940817335Z" level=info msg="Stop container \"4a9e79950ef8d0841b2133004e2dd0709e069d3104d655df6a3855356de8b92e\" with signal terminated"
Jul 2 06:57:52.973395 containerd[1497]: time="2024-07-02T06:57:52.973332096Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 06:57:52.982704 containerd[1497]: time="2024-07-02T06:57:52.982652854Z" level=info msg="StopContainer for \"df13861250e11cf10c39dc6305996cedee477fc20a065930b88d6a3d695bcec1\" with timeout 2 (s)"
Jul 2 06:57:52.983589 containerd[1497]: time="2024-07-02T06:57:52.983553630Z" level=info msg="Stop container \"df13861250e11cf10c39dc6305996cedee477fc20a065930b88d6a3d695bcec1\" with signal terminated"
Jul 2 06:57:52.984251 systemd[1]: cri-containerd-4a9e79950ef8d0841b2133004e2dd0709e069d3104d655df6a3855356de8b92e.scope: Deactivated successfully.
Jul 2 06:57:53.004371 systemd-networkd[1415]: lxc_health: Link DOWN
Jul 2 06:57:53.004387 systemd-networkd[1415]: lxc_health: Lost carrier
Jul 2 06:57:53.023915 systemd[1]: cri-containerd-df13861250e11cf10c39dc6305996cedee477fc20a065930b88d6a3d695bcec1.scope: Deactivated successfully.
Jul 2 06:57:53.025548 systemd[1]: cri-containerd-df13861250e11cf10c39dc6305996cedee477fc20a065930b88d6a3d695bcec1.scope: Consumed 10.283s CPU time.
Jul 2 06:57:53.047449 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a9e79950ef8d0841b2133004e2dd0709e069d3104d655df6a3855356de8b92e-rootfs.mount: Deactivated successfully.
Jul 2 06:57:53.067768 containerd[1497]: time="2024-07-02T06:57:53.067447030Z" level=info msg="shim disconnected" id=4a9e79950ef8d0841b2133004e2dd0709e069d3104d655df6a3855356de8b92e namespace=k8s.io
Jul 2 06:57:53.067768 containerd[1497]: time="2024-07-02T06:57:53.067544830Z" level=warning msg="cleaning up after shim disconnected" id=4a9e79950ef8d0841b2133004e2dd0709e069d3104d655df6a3855356de8b92e namespace=k8s.io
Jul 2 06:57:53.067768 containerd[1497]: time="2024-07-02T06:57:53.067562501Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 06:57:53.074440 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df13861250e11cf10c39dc6305996cedee477fc20a065930b88d6a3d695bcec1-rootfs.mount: Deactivated successfully.
Jul 2 06:57:53.091143 containerd[1497]: time="2024-07-02T06:57:53.090826447Z" level=info msg="shim disconnected" id=df13861250e11cf10c39dc6305996cedee477fc20a065930b88d6a3d695bcec1 namespace=k8s.io
Jul 2 06:57:53.091687 containerd[1497]: time="2024-07-02T06:57:53.091022479Z" level=warning msg="cleaning up after shim disconnected" id=df13861250e11cf10c39dc6305996cedee477fc20a065930b88d6a3d695bcec1 namespace=k8s.io
Jul 2 06:57:53.091687 containerd[1497]: time="2024-07-02T06:57:53.091354880Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 06:57:53.121621 containerd[1497]: time="2024-07-02T06:57:53.121547402Z" level=info msg="StopContainer for \"4a9e79950ef8d0841b2133004e2dd0709e069d3104d655df6a3855356de8b92e\" returns successfully"
Jul 2 06:57:53.122397 containerd[1497]: time="2024-07-02T06:57:53.122363692Z" level=info msg="StopPodSandbox for \"0c46e5c1a1a402478f0d3c6f28118ee96353e5ea72ab38b4047c98321956b5a9\""
Jul 2 06:57:53.124784 containerd[1497]: time="2024-07-02T06:57:53.124749637Z" level=info msg="StopContainer for \"df13861250e11cf10c39dc6305996cedee477fc20a065930b88d6a3d695bcec1\" returns successfully"
Jul 2 06:57:53.125986 containerd[1497]: time="2024-07-02T06:57:53.125823379Z" level=info msg="StopPodSandbox for \"6b0107101cdcade1bcc7b9eeedff4072ff4bcaa7b569b7d9493edee957e8e11e\""
Jul 2 06:57:53.127586 containerd[1497]: time="2024-07-02T06:57:53.125866812Z" level=info msg="Container to stop \"50ab3fd901f5788e5206d74768126a72bed029b34f1229eef517e095498bc2e7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 06:57:53.127954 containerd[1497]: time="2024-07-02T06:57:53.127739274Z" level=info msg="Container to stop \"8a77237f43ac4cb3e0b1509c648209a9e7685a9bab9e51528df0fde63d6f7b6b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 06:57:53.127954 containerd[1497]: time="2024-07-02T06:57:53.127775120Z" level=info msg="Container to stop \"df13861250e11cf10c39dc6305996cedee477fc20a065930b88d6a3d695bcec1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 06:57:53.127954 containerd[1497]: time="2024-07-02T06:57:53.127795126Z" level=info msg="Container to stop \"d141cfab2a0a5cb3f06db93bd16da3b535e472c32492e7ace2b7e68bb460a575\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 06:57:53.127954 containerd[1497]: time="2024-07-02T06:57:53.127811243Z" level=info msg="Container to stop \"0f0680845548bb495f61c4dd66e0d0dcd8dacfc25500f7938a2d73471fd17092\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 06:57:53.127954 containerd[1497]: time="2024-07-02T06:57:53.122513600Z" level=info msg="Container to stop \"4a9e79950ef8d0841b2133004e2dd0709e069d3104d655df6a3855356de8b92e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 06:57:53.131032 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0c46e5c1a1a402478f0d3c6f28118ee96353e5ea72ab38b4047c98321956b5a9-shm.mount: Deactivated successfully.
Jul 2 06:57:53.133525 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6b0107101cdcade1bcc7b9eeedff4072ff4bcaa7b569b7d9493edee957e8e11e-shm.mount: Deactivated successfully.
Jul 2 06:57:53.145660 systemd[1]: cri-containerd-6b0107101cdcade1bcc7b9eeedff4072ff4bcaa7b569b7d9493edee957e8e11e.scope: Deactivated successfully.
Jul 2 06:57:53.164906 systemd[1]: cri-containerd-0c46e5c1a1a402478f0d3c6f28118ee96353e5ea72ab38b4047c98321956b5a9.scope: Deactivated successfully.
Jul 2 06:57:53.199979 containerd[1497]: time="2024-07-02T06:57:53.199826838Z" level=info msg="shim disconnected" id=6b0107101cdcade1bcc7b9eeedff4072ff4bcaa7b569b7d9493edee957e8e11e namespace=k8s.io
Jul 2 06:57:53.199979 containerd[1497]: time="2024-07-02T06:57:53.199894204Z" level=warning msg="cleaning up after shim disconnected" id=6b0107101cdcade1bcc7b9eeedff4072ff4bcaa7b569b7d9493edee957e8e11e namespace=k8s.io
Jul 2 06:57:53.199979 containerd[1497]: time="2024-07-02T06:57:53.199909967Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 06:57:53.215763 containerd[1497]: time="2024-07-02T06:57:53.215379057Z" level=info msg="shim disconnected" id=0c46e5c1a1a402478f0d3c6f28118ee96353e5ea72ab38b4047c98321956b5a9 namespace=k8s.io
Jul 2 06:57:53.215763 containerd[1497]: time="2024-07-02T06:57:53.215560465Z" level=warning msg="cleaning up after shim disconnected" id=0c46e5c1a1a402478f0d3c6f28118ee96353e5ea72ab38b4047c98321956b5a9 namespace=k8s.io
Jul 2 06:57:53.215763 containerd[1497]: time="2024-07-02T06:57:53.215577904Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 06:57:53.234238 containerd[1497]: time="2024-07-02T06:57:53.233482813Z" level=info msg="TearDown network for sandbox \"6b0107101cdcade1bcc7b9eeedff4072ff4bcaa7b569b7d9493edee957e8e11e\" successfully"
Jul 2 06:57:53.234238 containerd[1497]: time="2024-07-02T06:57:53.233544598Z" level=info msg="StopPodSandbox for \"6b0107101cdcade1bcc7b9eeedff4072ff4bcaa7b569b7d9493edee957e8e11e\" returns successfully"
Jul 2 06:57:53.261038 containerd[1497]: time="2024-07-02T06:57:53.260959075Z" level=info msg="TearDown network for sandbox \"0c46e5c1a1a402478f0d3c6f28118ee96353e5ea72ab38b4047c98321956b5a9\" successfully"
Jul 2 06:57:53.261038 containerd[1497]: time="2024-07-02T06:57:53.261026001Z" level=info msg="StopPodSandbox for \"0c46e5c1a1a402478f0d3c6f28118ee96353e5ea72ab38b4047c98321956b5a9\" returns successfully"
Jul 2 06:57:53.433290 kubelet[2658]: I0702 06:57:53.433188 2658 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmnxx\" (UniqueName: \"kubernetes.io/projected/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-kube-api-access-bmnxx\") pod \"44c8ea2a-e670-493e-8fd9-17e6469dd5c4\" (UID: \"44c8ea2a-e670-493e-8fd9-17e6469dd5c4\") "
Jul 2 06:57:53.434148 kubelet[2658]: I0702 06:57:53.433307 2658 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-bpf-maps\") pod \"44c8ea2a-e670-493e-8fd9-17e6469dd5c4\" (UID: \"44c8ea2a-e670-493e-8fd9-17e6469dd5c4\") "
Jul 2 06:57:53.434148 kubelet[2658]: I0702 06:57:53.433356 2658 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-host-proc-sys-kernel\") pod \"44c8ea2a-e670-493e-8fd9-17e6469dd5c4\" (UID: \"44c8ea2a-e670-493e-8fd9-17e6469dd5c4\") "
Jul 2 06:57:53.434148 kubelet[2658]: I0702 06:57:53.433389 2658 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-cilium-cgroup\") pod \"44c8ea2a-e670-493e-8fd9-17e6469dd5c4\" (UID: \"44c8ea2a-e670-493e-8fd9-17e6469dd5c4\") "
Jul 2 06:57:53.434148 kubelet[2658]: I0702 06:57:53.433417 2658 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-cilium-run\") pod \"44c8ea2a-e670-493e-8fd9-17e6469dd5c4\" (UID: \"44c8ea2a-e670-493e-8fd9-17e6469dd5c4\") "
Jul 2 06:57:53.434148 kubelet[2658]: I0702 06:57:53.433472 2658 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-host-proc-sys-net\") pod \"44c8ea2a-e670-493e-8fd9-17e6469dd5c4\" (UID: \"44c8ea2a-e670-493e-8fd9-17e6469dd5c4\") "
Jul 2 06:57:53.434148 kubelet[2658]: I0702 06:57:53.433513 2658 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bwcf\" (UniqueName: \"kubernetes.io/projected/f8b54788-00d3-4831-a3cd-bde068fc8a41-kube-api-access-7bwcf\") pod \"f8b54788-00d3-4831-a3cd-bde068fc8a41\" (UID: \"f8b54788-00d3-4831-a3cd-bde068fc8a41\") "
Jul 2 06:57:53.434567 kubelet[2658]: I0702 06:57:53.433549 2658 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-clustermesh-secrets\") pod \"44c8ea2a-e670-493e-8fd9-17e6469dd5c4\" (UID: \"44c8ea2a-e670-493e-8fd9-17e6469dd5c4\") "
Jul 2 06:57:53.434567 kubelet[2658]: I0702 06:57:53.433576 2658 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-lib-modules\") pod \"44c8ea2a-e670-493e-8fd9-17e6469dd5c4\" (UID: \"44c8ea2a-e670-493e-8fd9-17e6469dd5c4\") "
Jul 2 06:57:53.434567 kubelet[2658]: I0702 06:57:53.433607 2658 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f8b54788-00d3-4831-a3cd-bde068fc8a41-cilium-config-path\") pod \"f8b54788-00d3-4831-a3cd-bde068fc8a41\" (UID: \"f8b54788-00d3-4831-a3cd-bde068fc8a41\") "
Jul 2 06:57:53.434567 kubelet[2658]: I0702 06:57:53.433642 2658 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-etc-cni-netd\") pod \"44c8ea2a-e670-493e-8fd9-17e6469dd5c4\" (UID: \"44c8ea2a-e670-493e-8fd9-17e6469dd5c4\") "
Jul 2 06:57:53.434567 kubelet[2658]: I0702 06:57:53.433678 2658 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-cilium-config-path\") pod \"44c8ea2a-e670-493e-8fd9-17e6469dd5c4\" (UID: \"44c8ea2a-e670-493e-8fd9-17e6469dd5c4\") "
Jul 2 06:57:53.434567 kubelet[2658]: I0702 06:57:53.433916 2658 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-hubble-tls\") pod \"44c8ea2a-e670-493e-8fd9-17e6469dd5c4\" (UID: \"44c8ea2a-e670-493e-8fd9-17e6469dd5c4\") "
Jul 2 06:57:53.436289 kubelet[2658]: I0702 06:57:53.433956 2658 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-xtables-lock\") pod \"44c8ea2a-e670-493e-8fd9-17e6469dd5c4\" (UID: \"44c8ea2a-e670-493e-8fd9-17e6469dd5c4\") "
Jul 2 06:57:53.436289 kubelet[2658]: I0702 06:57:53.434020 2658 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-cni-path\") pod \"44c8ea2a-e670-493e-8fd9-17e6469dd5c4\" (UID: \"44c8ea2a-e670-493e-8fd9-17e6469dd5c4\") "
Jul 2 06:57:53.436289 kubelet[2658]: I0702 06:57:53.434087 2658 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-hostproc\") pod \"44c8ea2a-e670-493e-8fd9-17e6469dd5c4\" (UID: \"44c8ea2a-e670-493e-8fd9-17e6469dd5c4\") "
Jul 2 06:57:53.441937 kubelet[2658]: I0702 06:57:53.441143 2658 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-hostproc" (OuterVolumeSpecName: "hostproc") pod "44c8ea2a-e670-493e-8fd9-17e6469dd5c4" (UID: "44c8ea2a-e670-493e-8fd9-17e6469dd5c4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 06:57:53.442975 kubelet[2658]: I0702 06:57:53.439977 2658 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "44c8ea2a-e670-493e-8fd9-17e6469dd5c4" (UID: "44c8ea2a-e670-493e-8fd9-17e6469dd5c4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 06:57:53.454107 kubelet[2658]: I0702 06:57:53.453783 2658 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "44c8ea2a-e670-493e-8fd9-17e6469dd5c4" (UID: "44c8ea2a-e670-493e-8fd9-17e6469dd5c4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 06:57:53.454107 kubelet[2658]: I0702 06:57:53.453859 2658 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "44c8ea2a-e670-493e-8fd9-17e6469dd5c4" (UID: "44c8ea2a-e670-493e-8fd9-17e6469dd5c4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 06:57:53.454107 kubelet[2658]: I0702 06:57:53.453876 2658 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "44c8ea2a-e670-493e-8fd9-17e6469dd5c4" (UID: "44c8ea2a-e670-493e-8fd9-17e6469dd5c4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 06:57:53.455895 kubelet[2658]: I0702 06:57:53.454009 2658 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "44c8ea2a-e670-493e-8fd9-17e6469dd5c4" (UID: "44c8ea2a-e670-493e-8fd9-17e6469dd5c4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 06:57:53.455895 kubelet[2658]: I0702 06:57:53.454227 2658 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "44c8ea2a-e670-493e-8fd9-17e6469dd5c4" (UID: "44c8ea2a-e670-493e-8fd9-17e6469dd5c4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 06:57:53.455895 kubelet[2658]: I0702 06:57:53.454283 2658 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "44c8ea2a-e670-493e-8fd9-17e6469dd5c4" (UID: "44c8ea2a-e670-493e-8fd9-17e6469dd5c4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 06:57:53.459298 kubelet[2658]: I0702 06:57:53.459251 2658 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "44c8ea2a-e670-493e-8fd9-17e6469dd5c4" (UID: "44c8ea2a-e670-493e-8fd9-17e6469dd5c4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 06:57:53.459398 kubelet[2658]: I0702 06:57:53.459315 2658 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-cni-path" (OuterVolumeSpecName: "cni-path") pod "44c8ea2a-e670-493e-8fd9-17e6469dd5c4" (UID: "44c8ea2a-e670-493e-8fd9-17e6469dd5c4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 06:57:53.462952 kubelet[2658]: I0702 06:57:53.462593 2658 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "44c8ea2a-e670-493e-8fd9-17e6469dd5c4" (UID: "44c8ea2a-e670-493e-8fd9-17e6469dd5c4"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 2 06:57:53.467348 kubelet[2658]: I0702 06:57:53.467314 2658 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "44c8ea2a-e670-493e-8fd9-17e6469dd5c4" (UID: "44c8ea2a-e670-493e-8fd9-17e6469dd5c4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 2 06:57:53.468747 kubelet[2658]: I0702 06:57:53.468656 2658 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "44c8ea2a-e670-493e-8fd9-17e6469dd5c4" (UID: "44c8ea2a-e670-493e-8fd9-17e6469dd5c4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 06:57:53.468747 kubelet[2658]: I0702 06:57:53.468688 2658 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-kube-api-access-bmnxx" (OuterVolumeSpecName: "kube-api-access-bmnxx") pod "44c8ea2a-e670-493e-8fd9-17e6469dd5c4" (UID: "44c8ea2a-e670-493e-8fd9-17e6469dd5c4"). InnerVolumeSpecName "kube-api-access-bmnxx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 06:57:53.469934 kubelet[2658]: I0702 06:57:53.469904 2658 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8b54788-00d3-4831-a3cd-bde068fc8a41-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f8b54788-00d3-4831-a3cd-bde068fc8a41" (UID: "f8b54788-00d3-4831-a3cd-bde068fc8a41"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 2 06:57:53.470025 kubelet[2658]: I0702 06:57:53.469984 2658 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8b54788-00d3-4831-a3cd-bde068fc8a41-kube-api-access-7bwcf" (OuterVolumeSpecName: "kube-api-access-7bwcf") pod "f8b54788-00d3-4831-a3cd-bde068fc8a41" (UID: "f8b54788-00d3-4831-a3cd-bde068fc8a41"). InnerVolumeSpecName "kube-api-access-7bwcf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 06:57:53.535634 kubelet[2658]: I0702 06:57:53.535530 2658 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-etc-cni-netd\") on node \"srv-5ya4d.gb1.brightbox.com\" DevicePath \"\""
Jul 2 06:57:53.535634 kubelet[2658]: I0702 06:57:53.535606 2658 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-cilium-config-path\") on node \"srv-5ya4d.gb1.brightbox.com\" DevicePath \"\""
Jul 2 06:57:53.535634 kubelet[2658]: I0702 06:57:53.535628 2658 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-xtables-lock\") on node \"srv-5ya4d.gb1.brightbox.com\" DevicePath \"\""
Jul 2 06:57:53.535634 kubelet[2658]: I0702 06:57:53.535646 2658 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-cni-path\") on node \"srv-5ya4d.gb1.brightbox.com\" DevicePath \"\""
Jul 2 06:57:53.536013 kubelet[2658]: I0702 06:57:53.535667 2658 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-hostproc\") on node \"srv-5ya4d.gb1.brightbox.com\" DevicePath \"\""
Jul 2 06:57:53.536013 kubelet[2658]: I0702 06:57:53.535684 2658 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-hubble-tls\") on node \"srv-5ya4d.gb1.brightbox.com\" DevicePath \"\""
Jul 2 06:57:53.536013 kubelet[2658]: I0702 06:57:53.535702 2658 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-bmnxx\" (UniqueName: \"kubernetes.io/projected/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-kube-api-access-bmnxx\") on node \"srv-5ya4d.gb1.brightbox.com\" DevicePath \"\""
Jul 2 06:57:53.536013 kubelet[2658]: I0702 06:57:53.535731 2658 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-cilium-cgroup\") on node \"srv-5ya4d.gb1.brightbox.com\" DevicePath \"\""
Jul 2 06:57:53.536013 kubelet[2658]: I0702 06:57:53.535748 2658 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-bpf-maps\") on node \"srv-5ya4d.gb1.brightbox.com\" DevicePath \"\""
Jul 2 06:57:53.536013 kubelet[2658]: I0702 06:57:53.535765 2658 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-host-proc-sys-kernel\") on node \"srv-5ya4d.gb1.brightbox.com\" DevicePath \"\""
Jul 2 06:57:53.536013 kubelet[2658]: I0702 06:57:53.535783 2658 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-host-proc-sys-net\") on node \"srv-5ya4d.gb1.brightbox.com\" DevicePath \"\""
Jul 2 06:57:53.536013 kubelet[2658]: I0702 06:57:53.535803 2658 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-cilium-run\") on node \"srv-5ya4d.gb1.brightbox.com\" DevicePath \"\""
Jul 2 06:57:53.536411 kubelet[2658]: I0702 06:57:53.535821 2658 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-7bwcf\" (UniqueName: \"kubernetes.io/projected/f8b54788-00d3-4831-a3cd-bde068fc8a41-kube-api-access-7bwcf\") on node \"srv-5ya4d.gb1.brightbox.com\" DevicePath \"\""
Jul 2 06:57:53.536411 kubelet[2658]: I0702 06:57:53.535841 2658 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-clustermesh-secrets\") on node \"srv-5ya4d.gb1.brightbox.com\" DevicePath \"\""
Jul 2 06:57:53.536411 kubelet[2658]: I0702 06:57:53.535858 2658 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44c8ea2a-e670-493e-8fd9-17e6469dd5c4-lib-modules\") on node \"srv-5ya4d.gb1.brightbox.com\" DevicePath \"\""
Jul 2 06:57:53.536411 kubelet[2658]: I0702 06:57:53.535877 2658 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f8b54788-00d3-4831-a3cd-bde068fc8a41-cilium-config-path\") on node \"srv-5ya4d.gb1.brightbox.com\" DevicePath \"\""
Jul 2 06:57:53.789954 systemd[1]: Removed slice kubepods-besteffort-podf8b54788_00d3_4831_a3cd_bde068fc8a41.slice - libcontainer container kubepods-besteffort-podf8b54788_00d3_4831_a3cd_bde068fc8a41.slice.
Jul 2 06:57:53.794139 kubelet[2658]: I0702 06:57:53.794087 2658 scope.go:117] "RemoveContainer" containerID="4a9e79950ef8d0841b2133004e2dd0709e069d3104d655df6a3855356de8b92e"
Jul 2 06:57:53.801503 containerd[1497]: time="2024-07-02T06:57:53.801456770Z" level=info msg="RemoveContainer for \"4a9e79950ef8d0841b2133004e2dd0709e069d3104d655df6a3855356de8b92e\""
Jul 2 06:57:53.819460 containerd[1497]: time="2024-07-02T06:57:53.818713380Z" level=info msg="RemoveContainer for \"4a9e79950ef8d0841b2133004e2dd0709e069d3104d655df6a3855356de8b92e\" returns successfully"
Jul 2 06:57:53.821737 kubelet[2658]: I0702 06:57:53.821703 2658 scope.go:117] "RemoveContainer" containerID="4a9e79950ef8d0841b2133004e2dd0709e069d3104d655df6a3855356de8b92e"
Jul 2 06:57:53.822023 containerd[1497]: time="2024-07-02T06:57:53.821979632Z" level=error msg="ContainerStatus for \"4a9e79950ef8d0841b2133004e2dd0709e069d3104d655df6a3855356de8b92e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container
\"4a9e79950ef8d0841b2133004e2dd0709e069d3104d655df6a3855356de8b92e\": not found" Jul 2 06:57:53.826977 systemd[1]: Removed slice kubepods-burstable-pod44c8ea2a_e670_493e_8fd9_17e6469dd5c4.slice - libcontainer container kubepods-burstable-pod44c8ea2a_e670_493e_8fd9_17e6469dd5c4.slice. Jul 2 06:57:53.827332 systemd[1]: kubepods-burstable-pod44c8ea2a_e670_493e_8fd9_17e6469dd5c4.slice: Consumed 10.426s CPU time. Jul 2 06:57:53.841538 kubelet[2658]: E0702 06:57:53.841492 2658 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4a9e79950ef8d0841b2133004e2dd0709e069d3104d655df6a3855356de8b92e\": not found" containerID="4a9e79950ef8d0841b2133004e2dd0709e069d3104d655df6a3855356de8b92e" Jul 2 06:57:53.843280 kubelet[2658]: I0702 06:57:53.843211 2658 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4a9e79950ef8d0841b2133004e2dd0709e069d3104d655df6a3855356de8b92e"} err="failed to get container status \"4a9e79950ef8d0841b2133004e2dd0709e069d3104d655df6a3855356de8b92e\": rpc error: code = NotFound desc = an error occurred when try to find container \"4a9e79950ef8d0841b2133004e2dd0709e069d3104d655df6a3855356de8b92e\": not found" Jul 2 06:57:53.843280 kubelet[2658]: I0702 06:57:53.843276 2658 scope.go:117] "RemoveContainer" containerID="df13861250e11cf10c39dc6305996cedee477fc20a065930b88d6a3d695bcec1" Jul 2 06:57:53.848034 containerd[1497]: time="2024-07-02T06:57:53.847857819Z" level=info msg="RemoveContainer for \"df13861250e11cf10c39dc6305996cedee477fc20a065930b88d6a3d695bcec1\"" Jul 2 06:57:53.854286 containerd[1497]: time="2024-07-02T06:57:53.854160742Z" level=info msg="RemoveContainer for \"df13861250e11cf10c39dc6305996cedee477fc20a065930b88d6a3d695bcec1\" returns successfully" Jul 2 06:57:53.854640 kubelet[2658]: I0702 06:57:53.854545 2658 scope.go:117] "RemoveContainer" 
containerID="0f0680845548bb495f61c4dd66e0d0dcd8dacfc25500f7938a2d73471fd17092" Jul 2 06:57:53.857534 containerd[1497]: time="2024-07-02T06:57:53.857174664Z" level=info msg="RemoveContainer for \"0f0680845548bb495f61c4dd66e0d0dcd8dacfc25500f7938a2d73471fd17092\"" Jul 2 06:57:53.860368 containerd[1497]: time="2024-07-02T06:57:53.860334419Z" level=info msg="RemoveContainer for \"0f0680845548bb495f61c4dd66e0d0dcd8dacfc25500f7938a2d73471fd17092\" returns successfully" Jul 2 06:57:53.860958 kubelet[2658]: I0702 06:57:53.860693 2658 scope.go:117] "RemoveContainer" containerID="8a77237f43ac4cb3e0b1509c648209a9e7685a9bab9e51528df0fde63d6f7b6b" Jul 2 06:57:53.863231 containerd[1497]: time="2024-07-02T06:57:53.862861938Z" level=info msg="RemoveContainer for \"8a77237f43ac4cb3e0b1509c648209a9e7685a9bab9e51528df0fde63d6f7b6b\"" Jul 2 06:57:53.867583 containerd[1497]: time="2024-07-02T06:57:53.867551200Z" level=info msg="RemoveContainer for \"8a77237f43ac4cb3e0b1509c648209a9e7685a9bab9e51528df0fde63d6f7b6b\" returns successfully" Jul 2 06:57:53.868047 kubelet[2658]: I0702 06:57:53.868025 2658 scope.go:117] "RemoveContainer" containerID="50ab3fd901f5788e5206d74768126a72bed029b34f1229eef517e095498bc2e7" Jul 2 06:57:53.870605 containerd[1497]: time="2024-07-02T06:57:53.870144077Z" level=info msg="RemoveContainer for \"50ab3fd901f5788e5206d74768126a72bed029b34f1229eef517e095498bc2e7\"" Jul 2 06:57:53.874284 containerd[1497]: time="2024-07-02T06:57:53.874250481Z" level=info msg="RemoveContainer for \"50ab3fd901f5788e5206d74768126a72bed029b34f1229eef517e095498bc2e7\" returns successfully" Jul 2 06:57:53.874664 kubelet[2658]: I0702 06:57:53.874553 2658 scope.go:117] "RemoveContainer" containerID="d141cfab2a0a5cb3f06db93bd16da3b535e472c32492e7ace2b7e68bb460a575" Jul 2 06:57:53.876501 containerd[1497]: time="2024-07-02T06:57:53.876016059Z" level=info msg="RemoveContainer for \"d141cfab2a0a5cb3f06db93bd16da3b535e472c32492e7ace2b7e68bb460a575\"" Jul 2 06:57:53.878840 containerd[1497]: 
time="2024-07-02T06:57:53.878808330Z" level=info msg="RemoveContainer for \"d141cfab2a0a5cb3f06db93bd16da3b535e472c32492e7ace2b7e68bb460a575\" returns successfully" Jul 2 06:57:53.879413 kubelet[2658]: I0702 06:57:53.879125 2658 scope.go:117] "RemoveContainer" containerID="df13861250e11cf10c39dc6305996cedee477fc20a065930b88d6a3d695bcec1" Jul 2 06:57:53.879532 containerd[1497]: time="2024-07-02T06:57:53.879342603Z" level=error msg="ContainerStatus for \"df13861250e11cf10c39dc6305996cedee477fc20a065930b88d6a3d695bcec1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"df13861250e11cf10c39dc6305996cedee477fc20a065930b88d6a3d695bcec1\": not found" Jul 2 06:57:53.879973 kubelet[2658]: E0702 06:57:53.879849 2658 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"df13861250e11cf10c39dc6305996cedee477fc20a065930b88d6a3d695bcec1\": not found" containerID="df13861250e11cf10c39dc6305996cedee477fc20a065930b88d6a3d695bcec1" Jul 2 06:57:53.879973 kubelet[2658]: I0702 06:57:53.879894 2658 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"df13861250e11cf10c39dc6305996cedee477fc20a065930b88d6a3d695bcec1"} err="failed to get container status \"df13861250e11cf10c39dc6305996cedee477fc20a065930b88d6a3d695bcec1\": rpc error: code = NotFound desc = an error occurred when try to find container \"df13861250e11cf10c39dc6305996cedee477fc20a065930b88d6a3d695bcec1\": not found" Jul 2 06:57:53.879973 kubelet[2658]: I0702 06:57:53.879911 2658 scope.go:117] "RemoveContainer" containerID="0f0680845548bb495f61c4dd66e0d0dcd8dacfc25500f7938a2d73471fd17092" Jul 2 06:57:53.880793 containerd[1497]: time="2024-07-02T06:57:53.880362674Z" level=error msg="ContainerStatus for \"0f0680845548bb495f61c4dd66e0d0dcd8dacfc25500f7938a2d73471fd17092\" failed" error="rpc error: code = NotFound desc = an error occurred 
when try to find container \"0f0680845548bb495f61c4dd66e0d0dcd8dacfc25500f7938a2d73471fd17092\": not found" Jul 2 06:57:53.880876 kubelet[2658]: E0702 06:57:53.880526 2658 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0f0680845548bb495f61c4dd66e0d0dcd8dacfc25500f7938a2d73471fd17092\": not found" containerID="0f0680845548bb495f61c4dd66e0d0dcd8dacfc25500f7938a2d73471fd17092" Jul 2 06:57:53.880876 kubelet[2658]: I0702 06:57:53.880598 2658 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0f0680845548bb495f61c4dd66e0d0dcd8dacfc25500f7938a2d73471fd17092"} err="failed to get container status \"0f0680845548bb495f61c4dd66e0d0dcd8dacfc25500f7938a2d73471fd17092\": rpc error: code = NotFound desc = an error occurred when try to find container \"0f0680845548bb495f61c4dd66e0d0dcd8dacfc25500f7938a2d73471fd17092\": not found" Jul 2 06:57:53.880876 kubelet[2658]: I0702 06:57:53.880618 2658 scope.go:117] "RemoveContainer" containerID="8a77237f43ac4cb3e0b1509c648209a9e7685a9bab9e51528df0fde63d6f7b6b" Jul 2 06:57:53.881653 containerd[1497]: time="2024-07-02T06:57:53.881285896Z" level=error msg="ContainerStatus for \"8a77237f43ac4cb3e0b1509c648209a9e7685a9bab9e51528df0fde63d6f7b6b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8a77237f43ac4cb3e0b1509c648209a9e7685a9bab9e51528df0fde63d6f7b6b\": not found" Jul 2 06:57:53.881735 kubelet[2658]: E0702 06:57:53.881521 2658 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8a77237f43ac4cb3e0b1509c648209a9e7685a9bab9e51528df0fde63d6f7b6b\": not found" containerID="8a77237f43ac4cb3e0b1509c648209a9e7685a9bab9e51528df0fde63d6f7b6b" Jul 2 06:57:53.881735 kubelet[2658]: I0702 06:57:53.881555 2658 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"containerd","ID":"8a77237f43ac4cb3e0b1509c648209a9e7685a9bab9e51528df0fde63d6f7b6b"} err="failed to get container status \"8a77237f43ac4cb3e0b1509c648209a9e7685a9bab9e51528df0fde63d6f7b6b\": rpc error: code = NotFound desc = an error occurred when try to find container \"8a77237f43ac4cb3e0b1509c648209a9e7685a9bab9e51528df0fde63d6f7b6b\": not found" Jul 2 06:57:53.881735 kubelet[2658]: I0702 06:57:53.881570 2658 scope.go:117] "RemoveContainer" containerID="50ab3fd901f5788e5206d74768126a72bed029b34f1229eef517e095498bc2e7" Jul 2 06:57:53.882404 containerd[1497]: time="2024-07-02T06:57:53.882021818Z" level=error msg="ContainerStatus for \"50ab3fd901f5788e5206d74768126a72bed029b34f1229eef517e095498bc2e7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"50ab3fd901f5788e5206d74768126a72bed029b34f1229eef517e095498bc2e7\": not found" Jul 2 06:57:53.882514 kubelet[2658]: E0702 06:57:53.882347 2658 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"50ab3fd901f5788e5206d74768126a72bed029b34f1229eef517e095498bc2e7\": not found" containerID="50ab3fd901f5788e5206d74768126a72bed029b34f1229eef517e095498bc2e7" Jul 2 06:57:53.882863 kubelet[2658]: I0702 06:57:53.882590 2658 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"50ab3fd901f5788e5206d74768126a72bed029b34f1229eef517e095498bc2e7"} err="failed to get container status \"50ab3fd901f5788e5206d74768126a72bed029b34f1229eef517e095498bc2e7\": rpc error: code = NotFound desc = an error occurred when try to find container \"50ab3fd901f5788e5206d74768126a72bed029b34f1229eef517e095498bc2e7\": not found" Jul 2 06:57:53.882863 kubelet[2658]: I0702 06:57:53.882617 2658 scope.go:117] "RemoveContainer" containerID="d141cfab2a0a5cb3f06db93bd16da3b535e472c32492e7ace2b7e68bb460a575" Jul 2 
06:57:53.883363 containerd[1497]: time="2024-07-02T06:57:53.883261278Z" level=error msg="ContainerStatus for \"d141cfab2a0a5cb3f06db93bd16da3b535e472c32492e7ace2b7e68bb460a575\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d141cfab2a0a5cb3f06db93bd16da3b535e472c32492e7ace2b7e68bb460a575\": not found" Jul 2 06:57:53.883671 kubelet[2658]: E0702 06:57:53.883615 2658 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d141cfab2a0a5cb3f06db93bd16da3b535e472c32492e7ace2b7e68bb460a575\": not found" containerID="d141cfab2a0a5cb3f06db93bd16da3b535e472c32492e7ace2b7e68bb460a575" Jul 2 06:57:53.883671 kubelet[2658]: I0702 06:57:53.883652 2658 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d141cfab2a0a5cb3f06db93bd16da3b535e472c32492e7ace2b7e68bb460a575"} err="failed to get container status \"d141cfab2a0a5cb3f06db93bd16da3b535e472c32492e7ace2b7e68bb460a575\": rpc error: code = NotFound desc = an error occurred when try to find container \"d141cfab2a0a5cb3f06db93bd16da3b535e472c32492e7ace2b7e68bb460a575\": not found" Jul 2 06:57:53.926111 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c46e5c1a1a402478f0d3c6f28118ee96353e5ea72ab38b4047c98321956b5a9-rootfs.mount: Deactivated successfully. Jul 2 06:57:53.926273 systemd[1]: var-lib-kubelet-pods-f8b54788\x2d00d3\x2d4831\x2da3cd\x2dbde068fc8a41-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7bwcf.mount: Deactivated successfully. Jul 2 06:57:53.926395 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b0107101cdcade1bcc7b9eeedff4072ff4bcaa7b569b7d9493edee957e8e11e-rootfs.mount: Deactivated successfully. 
Jul 2 06:57:53.926545 systemd[1]: var-lib-kubelet-pods-44c8ea2a\x2de670\x2d493e\x2d8fd9\x2d17e6469dd5c4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbmnxx.mount: Deactivated successfully. Jul 2 06:57:53.926658 systemd[1]: var-lib-kubelet-pods-44c8ea2a\x2de670\x2d493e\x2d8fd9\x2d17e6469dd5c4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 06:57:53.926759 systemd[1]: var-lib-kubelet-pods-44c8ea2a\x2de670\x2d493e\x2d8fd9\x2d17e6469dd5c4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 06:57:54.469309 kubelet[2658]: E0702 06:57:54.469059 2658 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 06:57:54.891861 sshd[4259]: pam_unix(sshd:session): session closed for user core Jul 2 06:57:54.896190 systemd[1]: sshd@24-10.244.24.146:22-139.178.89.65:56502.service: Deactivated successfully. Jul 2 06:57:54.898770 systemd[1]: session-27.scope: Deactivated successfully. Jul 2 06:57:54.900899 systemd-logind[1479]: Session 27 logged out. Waiting for processes to exit. Jul 2 06:57:54.902520 systemd-logind[1479]: Removed session 27. Jul 2 06:57:55.049761 systemd[1]: Started sshd@25-10.244.24.146:22-139.178.89.65:56516.service - OpenSSH per-connection server daemon (139.178.89.65:56516). 
Jul 2 06:57:55.195458 kubelet[2658]: I0702 06:57:55.194615 2658 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="44c8ea2a-e670-493e-8fd9-17e6469dd5c4" path="/var/lib/kubelet/pods/44c8ea2a-e670-493e-8fd9-17e6469dd5c4/volumes" Jul 2 06:57:55.196088 kubelet[2658]: I0702 06:57:55.196065 2658 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f8b54788-00d3-4831-a3cd-bde068fc8a41" path="/var/lib/kubelet/pods/f8b54788-00d3-4831-a3cd-bde068fc8a41/volumes" Jul 2 06:57:55.948305 sshd[4424]: Accepted publickey for core from 139.178.89.65 port 56516 ssh2: RSA SHA256:UZYYGxahQSuaJ4Go9BMFXc5O2kGoWTMSkKIILUYSRzM Jul 2 06:57:55.950394 sshd[4424]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:57:55.957478 systemd-logind[1479]: New session 28 of user core. Jul 2 06:57:55.964665 systemd[1]: Started session-28.scope - Session 28 of User core. Jul 2 06:57:56.941442 kubelet[2658]: I0702 06:57:56.941370 2658 topology_manager.go:215] "Topology Admit Handler" podUID="c810f355-61d5-473c-9e6e-501f5111b174" podNamespace="kube-system" podName="cilium-wqvs8" Jul 2 06:57:56.942215 kubelet[2658]: E0702 06:57:56.942187 2658 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="44c8ea2a-e670-493e-8fd9-17e6469dd5c4" containerName="apply-sysctl-overwrites" Jul 2 06:57:56.942327 kubelet[2658]: E0702 06:57:56.942227 2658 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="44c8ea2a-e670-493e-8fd9-17e6469dd5c4" containerName="mount-bpf-fs" Jul 2 06:57:56.942327 kubelet[2658]: E0702 06:57:56.942243 2658 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="44c8ea2a-e670-493e-8fd9-17e6469dd5c4" containerName="clean-cilium-state" Jul 2 06:57:56.942327 kubelet[2658]: E0702 06:57:56.942256 2658 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="44c8ea2a-e670-493e-8fd9-17e6469dd5c4" containerName="cilium-agent" Jul 2 06:57:56.942327 kubelet[2658]: E0702 06:57:56.942286 2658 
cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f8b54788-00d3-4831-a3cd-bde068fc8a41" containerName="cilium-operator" Jul 2 06:57:56.942327 kubelet[2658]: E0702 06:57:56.942302 2658 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="44c8ea2a-e670-493e-8fd9-17e6469dd5c4" containerName="mount-cgroup" Jul 2 06:57:56.942576 kubelet[2658]: I0702 06:57:56.942382 2658 memory_manager.go:346] "RemoveStaleState removing state" podUID="44c8ea2a-e670-493e-8fd9-17e6469dd5c4" containerName="cilium-agent" Jul 2 06:57:56.942576 kubelet[2658]: I0702 06:57:56.942399 2658 memory_manager.go:346] "RemoveStaleState removing state" podUID="f8b54788-00d3-4831-a3cd-bde068fc8a41" containerName="cilium-operator" Jul 2 06:57:56.982541 systemd[1]: Created slice kubepods-burstable-podc810f355_61d5_473c_9e6e_501f5111b174.slice - libcontainer container kubepods-burstable-podc810f355_61d5_473c_9e6e_501f5111b174.slice. Jul 2 06:57:57.034218 sshd[4424]: pam_unix(sshd:session): session closed for user core Jul 2 06:57:57.039532 systemd[1]: sshd@25-10.244.24.146:22-139.178.89.65:56516.service: Deactivated successfully. Jul 2 06:57:57.043352 systemd[1]: session-28.scope: Deactivated successfully. Jul 2 06:57:57.047506 systemd-logind[1479]: Session 28 logged out. Waiting for processes to exit. Jul 2 06:57:57.049318 systemd-logind[1479]: Removed session 28. 
Jul 2 06:57:57.070695 kubelet[2658]: I0702 06:57:57.070601 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c810f355-61d5-473c-9e6e-501f5111b174-bpf-maps\") pod \"cilium-wqvs8\" (UID: \"c810f355-61d5-473c-9e6e-501f5111b174\") " pod="kube-system/cilium-wqvs8" Jul 2 06:57:57.074525 kubelet[2658]: I0702 06:57:57.073731 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c810f355-61d5-473c-9e6e-501f5111b174-lib-modules\") pod \"cilium-wqvs8\" (UID: \"c810f355-61d5-473c-9e6e-501f5111b174\") " pod="kube-system/cilium-wqvs8" Jul 2 06:57:57.074525 kubelet[2658]: I0702 06:57:57.073802 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c810f355-61d5-473c-9e6e-501f5111b174-host-proc-sys-net\") pod \"cilium-wqvs8\" (UID: \"c810f355-61d5-473c-9e6e-501f5111b174\") " pod="kube-system/cilium-wqvs8" Jul 2 06:57:57.074525 kubelet[2658]: I0702 06:57:57.073890 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c810f355-61d5-473c-9e6e-501f5111b174-clustermesh-secrets\") pod \"cilium-wqvs8\" (UID: \"c810f355-61d5-473c-9e6e-501f5111b174\") " pod="kube-system/cilium-wqvs8" Jul 2 06:57:57.074525 kubelet[2658]: I0702 06:57:57.073936 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c810f355-61d5-473c-9e6e-501f5111b174-hubble-tls\") pod \"cilium-wqvs8\" (UID: \"c810f355-61d5-473c-9e6e-501f5111b174\") " pod="kube-system/cilium-wqvs8" Jul 2 06:57:57.074525 kubelet[2658]: I0702 06:57:57.074008 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c810f355-61d5-473c-9e6e-501f5111b174-cilium-cgroup\") pod \"cilium-wqvs8\" (UID: \"c810f355-61d5-473c-9e6e-501f5111b174\") " pod="kube-system/cilium-wqvs8" Jul 2 06:57:57.074525 kubelet[2658]: I0702 06:57:57.074048 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c810f355-61d5-473c-9e6e-501f5111b174-cilium-ipsec-secrets\") pod \"cilium-wqvs8\" (UID: \"c810f355-61d5-473c-9e6e-501f5111b174\") " pod="kube-system/cilium-wqvs8" Jul 2 06:57:57.074809 kubelet[2658]: I0702 06:57:57.074085 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c810f355-61d5-473c-9e6e-501f5111b174-cilium-run\") pod \"cilium-wqvs8\" (UID: \"c810f355-61d5-473c-9e6e-501f5111b174\") " pod="kube-system/cilium-wqvs8" Jul 2 06:57:57.074809 kubelet[2658]: I0702 06:57:57.074116 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c810f355-61d5-473c-9e6e-501f5111b174-xtables-lock\") pod \"cilium-wqvs8\" (UID: \"c810f355-61d5-473c-9e6e-501f5111b174\") " pod="kube-system/cilium-wqvs8" Jul 2 06:57:57.074809 kubelet[2658]: I0702 06:57:57.074152 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c810f355-61d5-473c-9e6e-501f5111b174-cilium-config-path\") pod \"cilium-wqvs8\" (UID: \"c810f355-61d5-473c-9e6e-501f5111b174\") " pod="kube-system/cilium-wqvs8" Jul 2 06:57:57.074809 kubelet[2658]: I0702 06:57:57.074185 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c810f355-61d5-473c-9e6e-501f5111b174-cni-path\") pod 
\"cilium-wqvs8\" (UID: \"c810f355-61d5-473c-9e6e-501f5111b174\") " pod="kube-system/cilium-wqvs8" Jul 2 06:57:57.074809 kubelet[2658]: I0702 06:57:57.074217 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c810f355-61d5-473c-9e6e-501f5111b174-etc-cni-netd\") pod \"cilium-wqvs8\" (UID: \"c810f355-61d5-473c-9e6e-501f5111b174\") " pod="kube-system/cilium-wqvs8" Jul 2 06:57:57.074809 kubelet[2658]: I0702 06:57:57.074253 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c810f355-61d5-473c-9e6e-501f5111b174-host-proc-sys-kernel\") pod \"cilium-wqvs8\" (UID: \"c810f355-61d5-473c-9e6e-501f5111b174\") " pod="kube-system/cilium-wqvs8" Jul 2 06:57:57.075109 kubelet[2658]: I0702 06:57:57.074299 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp7l6\" (UniqueName: \"kubernetes.io/projected/c810f355-61d5-473c-9e6e-501f5111b174-kube-api-access-qp7l6\") pod \"cilium-wqvs8\" (UID: \"c810f355-61d5-473c-9e6e-501f5111b174\") " pod="kube-system/cilium-wqvs8" Jul 2 06:57:57.075109 kubelet[2658]: I0702 06:57:57.074336 2658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c810f355-61d5-473c-9e6e-501f5111b174-hostproc\") pod \"cilium-wqvs8\" (UID: \"c810f355-61d5-473c-9e6e-501f5111b174\") " pod="kube-system/cilium-wqvs8" Jul 2 06:57:57.208953 systemd[1]: Started sshd@26-10.244.24.146:22-139.178.89.65:56532.service - OpenSSH per-connection server daemon (139.178.89.65:56532). 
Jul 2 06:57:57.293970 containerd[1497]: time="2024-07-02T06:57:57.293899307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wqvs8,Uid:c810f355-61d5-473c-9e6e-501f5111b174,Namespace:kube-system,Attempt:0,}" Jul 2 06:57:57.325110 containerd[1497]: time="2024-07-02T06:57:57.324953191Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 06:57:57.325494 containerd[1497]: time="2024-07-02T06:57:57.325045577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:57:57.325494 containerd[1497]: time="2024-07-02T06:57:57.325229025Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 06:57:57.325494 containerd[1497]: time="2024-07-02T06:57:57.325254636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:57:57.357651 systemd[1]: Started cri-containerd-409565854037960c632daa18e9de5fa22a7d1f4add5c3f174388e853cf05e076.scope - libcontainer container 409565854037960c632daa18e9de5fa22a7d1f4add5c3f174388e853cf05e076. 
Jul 2 06:57:57.390407 containerd[1497]: time="2024-07-02T06:57:57.390242449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wqvs8,Uid:c810f355-61d5-473c-9e6e-501f5111b174,Namespace:kube-system,Attempt:0,} returns sandbox id \"409565854037960c632daa18e9de5fa22a7d1f4add5c3f174388e853cf05e076\"" Jul 2 06:57:57.398470 containerd[1497]: time="2024-07-02T06:57:57.397987179Z" level=info msg="CreateContainer within sandbox \"409565854037960c632daa18e9de5fa22a7d1f4add5c3f174388e853cf05e076\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 06:57:57.412895 containerd[1497]: time="2024-07-02T06:57:57.412809667Z" level=info msg="CreateContainer within sandbox \"409565854037960c632daa18e9de5fa22a7d1f4add5c3f174388e853cf05e076\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c06b0f703b2b188fa902ed6fdc69d1ba35e9e8f15a0069a18e87e20feac55137\"" Jul 2 06:57:57.415465 containerd[1497]: time="2024-07-02T06:57:57.414785818Z" level=info msg="StartContainer for \"c06b0f703b2b188fa902ed6fdc69d1ba35e9e8f15a0069a18e87e20feac55137\"" Jul 2 06:57:57.452637 systemd[1]: Started cri-containerd-c06b0f703b2b188fa902ed6fdc69d1ba35e9e8f15a0069a18e87e20feac55137.scope - libcontainer container c06b0f703b2b188fa902ed6fdc69d1ba35e9e8f15a0069a18e87e20feac55137. Jul 2 06:57:57.495335 containerd[1497]: time="2024-07-02T06:57:57.495158025Z" level=info msg="StartContainer for \"c06b0f703b2b188fa902ed6fdc69d1ba35e9e8f15a0069a18e87e20feac55137\" returns successfully" Jul 2 06:57:57.514923 systemd[1]: cri-containerd-c06b0f703b2b188fa902ed6fdc69d1ba35e9e8f15a0069a18e87e20feac55137.scope: Deactivated successfully. 
Jul 2 06:57:57.558526 containerd[1497]: time="2024-07-02T06:57:57.558190890Z" level=info msg="shim disconnected" id=c06b0f703b2b188fa902ed6fdc69d1ba35e9e8f15a0069a18e87e20feac55137 namespace=k8s.io
Jul 2 06:57:57.558526 containerd[1497]: time="2024-07-02T06:57:57.558263506Z" level=warning msg="cleaning up after shim disconnected" id=c06b0f703b2b188fa902ed6fdc69d1ba35e9e8f15a0069a18e87e20feac55137 namespace=k8s.io
Jul 2 06:57:57.558526 containerd[1497]: time="2024-07-02T06:57:57.558295003Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 06:57:57.845346 containerd[1497]: time="2024-07-02T06:57:57.845037842Z" level=info msg="CreateContainer within sandbox \"409565854037960c632daa18e9de5fa22a7d1f4add5c3f174388e853cf05e076\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 2 06:57:57.857467 containerd[1497]: time="2024-07-02T06:57:57.857155575Z" level=info msg="CreateContainer within sandbox \"409565854037960c632daa18e9de5fa22a7d1f4add5c3f174388e853cf05e076\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"81724c58022affefedce30e36e81e11708e8073020d48a6bb4f16c2b85ac3c8b\""
Jul 2 06:57:57.858378 containerd[1497]: time="2024-07-02T06:57:57.858341774Z" level=info msg="StartContainer for \"81724c58022affefedce30e36e81e11708e8073020d48a6bb4f16c2b85ac3c8b\""
Jul 2 06:57:57.899642 systemd[1]: Started cri-containerd-81724c58022affefedce30e36e81e11708e8073020d48a6bb4f16c2b85ac3c8b.scope - libcontainer container 81724c58022affefedce30e36e81e11708e8073020d48a6bb4f16c2b85ac3c8b.
Jul 2 06:57:57.935977 containerd[1497]: time="2024-07-02T06:57:57.935920765Z" level=info msg="StartContainer for \"81724c58022affefedce30e36e81e11708e8073020d48a6bb4f16c2b85ac3c8b\" returns successfully"
Jul 2 06:57:57.949948 systemd[1]: cri-containerd-81724c58022affefedce30e36e81e11708e8073020d48a6bb4f16c2b85ac3c8b.scope: Deactivated successfully.
Jul 2 06:57:57.978849 containerd[1497]: time="2024-07-02T06:57:57.978755331Z" level=info msg="shim disconnected" id=81724c58022affefedce30e36e81e11708e8073020d48a6bb4f16c2b85ac3c8b namespace=k8s.io
Jul 2 06:57:57.978849 containerd[1497]: time="2024-07-02T06:57:57.978848328Z" level=warning msg="cleaning up after shim disconnected" id=81724c58022affefedce30e36e81e11708e8073020d48a6bb4f16c2b85ac3c8b namespace=k8s.io
Jul 2 06:57:57.978849 containerd[1497]: time="2024-07-02T06:57:57.978865379Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 06:57:58.091897 sshd[4440]: Accepted publickey for core from 139.178.89.65 port 56532 ssh2: RSA SHA256:UZYYGxahQSuaJ4Go9BMFXc5O2kGoWTMSkKIILUYSRzM
Jul 2 06:57:58.093999 sshd[4440]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 06:57:58.101275 systemd-logind[1479]: New session 29 of user core.
Jul 2 06:57:58.106885 systemd[1]: Started session-29.scope - Session 29 of User core.
Jul 2 06:57:58.694893 sshd[4440]: pam_unix(sshd:session): session closed for user core
Jul 2 06:57:58.699233 systemd-logind[1479]: Session 29 logged out. Waiting for processes to exit.
Jul 2 06:57:58.700387 systemd[1]: sshd@26-10.244.24.146:22-139.178.89.65:56532.service: Deactivated successfully.
Jul 2 06:57:58.703852 systemd[1]: session-29.scope: Deactivated successfully.
Jul 2 06:57:58.706349 systemd-logind[1479]: Removed session 29.
Jul 2 06:57:58.857957 systemd[1]: Started sshd@27-10.244.24.146:22-139.178.89.65:38242.service - OpenSSH per-connection server daemon (139.178.89.65:38242).
Jul 2 06:57:58.869271 containerd[1497]: time="2024-07-02T06:57:58.869015032Z" level=info msg="CreateContainer within sandbox \"409565854037960c632daa18e9de5fa22a7d1f4add5c3f174388e853cf05e076\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 2 06:57:58.905953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2090086596.mount: Deactivated successfully.
Jul 2 06:57:58.912498 containerd[1497]: time="2024-07-02T06:57:58.912450655Z" level=info msg="CreateContainer within sandbox \"409565854037960c632daa18e9de5fa22a7d1f4add5c3f174388e853cf05e076\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4bda8b96c9f651e190fb2d98d2d3befe3b26d9a8618715f48899af66c00763e1\""
Jul 2 06:57:58.915466 containerd[1497]: time="2024-07-02T06:57:58.914413435Z" level=info msg="StartContainer for \"4bda8b96c9f651e190fb2d98d2d3befe3b26d9a8618715f48899af66c00763e1\""
Jul 2 06:57:58.955646 systemd[1]: Started cri-containerd-4bda8b96c9f651e190fb2d98d2d3befe3b26d9a8618715f48899af66c00763e1.scope - libcontainer container 4bda8b96c9f651e190fb2d98d2d3befe3b26d9a8618715f48899af66c00763e1.
Jul 2 06:57:58.998980 containerd[1497]: time="2024-07-02T06:57:58.998900411Z" level=info msg="StartContainer for \"4bda8b96c9f651e190fb2d98d2d3befe3b26d9a8618715f48899af66c00763e1\" returns successfully"
Jul 2 06:57:59.008651 systemd[1]: cri-containerd-4bda8b96c9f651e190fb2d98d2d3befe3b26d9a8618715f48899af66c00763e1.scope: Deactivated successfully.
Jul 2 06:57:59.042418 containerd[1497]: time="2024-07-02T06:57:59.042247652Z" level=info msg="shim disconnected" id=4bda8b96c9f651e190fb2d98d2d3befe3b26d9a8618715f48899af66c00763e1 namespace=k8s.io
Jul 2 06:57:59.042418 containerd[1497]: time="2024-07-02T06:57:59.042409161Z" level=warning msg="cleaning up after shim disconnected" id=4bda8b96c9f651e190fb2d98d2d3befe3b26d9a8618715f48899af66c00763e1 namespace=k8s.io
Jul 2 06:57:59.042729 containerd[1497]: time="2024-07-02T06:57:59.042469231Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 06:57:59.188981 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4bda8b96c9f651e190fb2d98d2d3befe3b26d9a8618715f48899af66c00763e1-rootfs.mount: Deactivated successfully.
Jul 2 06:57:59.471716 kubelet[2658]: E0702 06:57:59.471593 2658 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 2 06:57:59.762890 sshd[4613]: Accepted publickey for core from 139.178.89.65 port 38242 ssh2: RSA SHA256:UZYYGxahQSuaJ4Go9BMFXc5O2kGoWTMSkKIILUYSRzM
Jul 2 06:57:59.764905 sshd[4613]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 06:57:59.772368 systemd-logind[1479]: New session 30 of user core.
Jul 2 06:57:59.781626 systemd[1]: Started session-30.scope - Session 30 of User core.
Jul 2 06:57:59.859725 containerd[1497]: time="2024-07-02T06:57:59.859663232Z" level=info msg="CreateContainer within sandbox \"409565854037960c632daa18e9de5fa22a7d1f4add5c3f174388e853cf05e076\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 2 06:57:59.876237 containerd[1497]: time="2024-07-02T06:57:59.876120013Z" level=info msg="CreateContainer within sandbox \"409565854037960c632daa18e9de5fa22a7d1f4add5c3f174388e853cf05e076\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1886a38b75ac49390daef4fc9df68216a983adee0093ef37dab56f8937e78911\""
Jul 2 06:57:59.877921 containerd[1497]: time="2024-07-02T06:57:59.876876935Z" level=info msg="StartContainer for \"1886a38b75ac49390daef4fc9df68216a983adee0093ef37dab56f8937e78911\""
Jul 2 06:57:59.927642 systemd[1]: Started cri-containerd-1886a38b75ac49390daef4fc9df68216a983adee0093ef37dab56f8937e78911.scope - libcontainer container 1886a38b75ac49390daef4fc9df68216a983adee0093ef37dab56f8937e78911.
Jul 2 06:57:59.964131 systemd[1]: cri-containerd-1886a38b75ac49390daef4fc9df68216a983adee0093ef37dab56f8937e78911.scope: Deactivated successfully.
Jul 2 06:57:59.969242 containerd[1497]: time="2024-07-02T06:57:59.969028260Z" level=info msg="StartContainer for \"1886a38b75ac49390daef4fc9df68216a983adee0093ef37dab56f8937e78911\" returns successfully"
Jul 2 06:58:00.009497 containerd[1497]: time="2024-07-02T06:58:00.008715776Z" level=info msg="shim disconnected" id=1886a38b75ac49390daef4fc9df68216a983adee0093ef37dab56f8937e78911 namespace=k8s.io
Jul 2 06:58:00.009497 containerd[1497]: time="2024-07-02T06:58:00.008789912Z" level=warning msg="cleaning up after shim disconnected" id=1886a38b75ac49390daef4fc9df68216a983adee0093ef37dab56f8937e78911 namespace=k8s.io
Jul 2 06:58:00.009497 containerd[1497]: time="2024-07-02T06:58:00.008805907Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 06:58:00.188974 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1886a38b75ac49390daef4fc9df68216a983adee0093ef37dab56f8937e78911-rootfs.mount: Deactivated successfully.
Jul 2 06:58:00.858250 containerd[1497]: time="2024-07-02T06:58:00.857970021Z" level=info msg="CreateContainer within sandbox \"409565854037960c632daa18e9de5fa22a7d1f4add5c3f174388e853cf05e076\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 2 06:58:00.884723 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount224372492.mount: Deactivated successfully.
Jul 2 06:58:00.892318 containerd[1497]: time="2024-07-02T06:58:00.892238450Z" level=info msg="CreateContainer within sandbox \"409565854037960c632daa18e9de5fa22a7d1f4add5c3f174388e853cf05e076\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cd0bf6011e2636298b3fe8373a4b213b3ca8549f18f8224fa22c2186adfae63c\""
Jul 2 06:58:00.895019 containerd[1497]: time="2024-07-02T06:58:00.892921667Z" level=info msg="StartContainer for \"cd0bf6011e2636298b3fe8373a4b213b3ca8549f18f8224fa22c2186adfae63c\""
Jul 2 06:58:00.942942 systemd[1]: Started cri-containerd-cd0bf6011e2636298b3fe8373a4b213b3ca8549f18f8224fa22c2186adfae63c.scope - libcontainer container cd0bf6011e2636298b3fe8373a4b213b3ca8549f18f8224fa22c2186adfae63c.
Jul 2 06:58:00.985346 containerd[1497]: time="2024-07-02T06:58:00.985265400Z" level=info msg="StartContainer for \"cd0bf6011e2636298b3fe8373a4b213b3ca8549f18f8224fa22c2186adfae63c\" returns successfully"
Jul 2 06:58:01.715485 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jul 2 06:58:02.587132 kubelet[2658]: I0702 06:58:02.587093 2658 setters.go:552] "Node became not ready" node="srv-5ya4d.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-07-02T06:58:02Z","lastTransitionTime":"2024-07-02T06:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 2 06:58:04.903847 systemd[1]: run-containerd-runc-k8s.io-cd0bf6011e2636298b3fe8373a4b213b3ca8549f18f8224fa22c2186adfae63c-runc.bu4MeD.mount: Deactivated successfully.
Jul 2 06:58:05.401501 systemd-networkd[1415]: lxc_health: Link UP
Jul 2 06:58:05.410486 systemd-networkd[1415]: lxc_health: Gained carrier
Jul 2 06:58:07.207785 systemd-networkd[1415]: lxc_health: Gained IPv6LL
Jul 2 06:58:07.363013 kubelet[2658]: I0702 06:58:07.361791 2658 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-wqvs8" podStartSLOduration=11.361724541 podCreationTimestamp="2024-07-02 06:57:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 06:58:01.88937313 +0000 UTC m=+152.928139759" watchObservedRunningTime="2024-07-02 06:58:07.361724541 +0000 UTC m=+158.400491166"
Jul 2 06:58:12.156473 sshd[4613]: pam_unix(sshd:session): session closed for user core
Jul 2 06:58:12.166149 systemd[1]: sshd@27-10.244.24.146:22-139.178.89.65:38242.service: Deactivated successfully.
Jul 2 06:58:12.166306 systemd-logind[1479]: Session 30 logged out. Waiting for processes to exit.
Jul 2 06:58:12.172106 systemd[1]: session-30.scope: Deactivated successfully.
Jul 2 06:58:12.176343 systemd-logind[1479]: Removed session 30.