Jul 2 00:23:05.054176 kernel: Linux version 6.6.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Mon Jul 1 22:47:51 -00 2024
Jul 2 00:23:05.054201 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b
Jul 2 00:23:05.054215 kernel: BIOS-provided physical RAM map:
Jul 2 00:23:05.054242 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 2 00:23:05.054251 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 2 00:23:05.054259 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 2 00:23:05.054321 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Jul 2 00:23:05.054351 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Jul 2 00:23:05.054360 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 2 00:23:05.054371 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 2 00:23:05.054380 kernel: NX (Execute Disable) protection: active
Jul 2 00:23:05.054388 kernel: APIC: Static calls initialized
Jul 2 00:23:05.054396 kernel: SMBIOS 2.8 present.
Jul 2 00:23:05.054405 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jul 2 00:23:05.054416 kernel: Hypervisor detected: KVM
Jul 2 00:23:05.054429 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 2 00:23:05.054443 kernel: kvm-clock: using sched offset of 3971031958 cycles
Jul 2 00:23:05.054456 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 2 00:23:05.054470 kernel: tsc: Detected 1996.249 MHz processor
Jul 2 00:23:05.054480 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 2 00:23:05.054490 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 2 00:23:05.054499 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Jul 2 00:23:05.054509 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 2 00:23:05.054518 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 2 00:23:05.054530 kernel: ACPI: Early table checksum verification disabled
Jul 2 00:23:05.054539 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS )
Jul 2 00:23:05.054548 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:23:05.054558 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:23:05.054567 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:23:05.054576 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jul 2 00:23:05.054585 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:23:05.054594 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:23:05.054603 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f]
Jul 2 00:23:05.054615 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b]
Jul 2 00:23:05.054624 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jul 2 00:23:05.054633 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17a0-0x7ffe181f]
Jul 2 00:23:05.054642 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847]
Jul 2 00:23:05.054651 kernel: No NUMA configuration found
Jul 2 00:23:05.054660 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff]
Jul 2 00:23:05.054670 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff]
Jul 2 00:23:05.054683 kernel: Zone ranges:
Jul 2 00:23:05.054694 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 2 00:23:05.054704 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff]
Jul 2 00:23:05.054713 kernel: Normal empty
Jul 2 00:23:05.054723 kernel: Movable zone start for each node
Jul 2 00:23:05.054732 kernel: Early memory node ranges
Jul 2 00:23:05.054742 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 2 00:23:05.054752 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Jul 2 00:23:05.054763 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff]
Jul 2 00:23:05.054773 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 2 00:23:05.054782 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 2 00:23:05.054792 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges
Jul 2 00:23:05.054801 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 2 00:23:05.054811 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 2 00:23:05.054820 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 2 00:23:05.054830 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 2 00:23:05.054840 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 2 00:23:05.054851 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 2 00:23:05.054861 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 2 00:23:05.054871 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 2 00:23:05.054880 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 2 00:23:05.054890 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jul 2 00:23:05.054899 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 2 00:23:05.054909 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Jul 2 00:23:05.054918 kernel: Booting paravirtualized kernel on KVM
Jul 2 00:23:05.054928 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 2 00:23:05.054941 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 2 00:23:05.054967 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576
Jul 2 00:23:05.054980 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152
Jul 2 00:23:05.054991 kernel: pcpu-alloc: [0] 0 1
Jul 2 00:23:05.055001 kernel: kvm-guest: PV spinlocks disabled, no host support
Jul 2 00:23:05.055012 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b
Jul 2 00:23:05.055022 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 00:23:05.055032 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 2 00:23:05.055045 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 2 00:23:05.055199 kernel: Fallback order for Node 0: 0
Jul 2 00:23:05.055209 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515805
Jul 2 00:23:05.055218 kernel: Policy zone: DMA32
Jul 2 00:23:05.055228 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 00:23:05.055238 kernel: Memory: 1965068K/2096620K available (12288K kernel code, 2303K rwdata, 22640K rodata, 49328K init, 2016K bss, 131292K reserved, 0K cma-reserved)
Jul 2 00:23:05.055248 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 2 00:23:05.055258 kernel: ftrace: allocating 37658 entries in 148 pages
Jul 2 00:23:05.055267 kernel: ftrace: allocated 148 pages with 3 groups
Jul 2 00:23:05.055280 kernel: Dynamic Preempt: voluntary
Jul 2 00:23:05.055290 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 2 00:23:05.055302 kernel: rcu: RCU event tracing is enabled.
Jul 2 00:23:05.055312 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 2 00:23:05.055322 kernel: Trampoline variant of Tasks RCU enabled.
Jul 2 00:23:05.055332 kernel: Rude variant of Tasks RCU enabled.
Jul 2 00:23:05.055341 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 00:23:05.055350 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 00:23:05.055360 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 2 00:23:05.055372 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jul 2 00:23:05.055381 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 2 00:23:05.055391 kernel: Console: colour VGA+ 80x25
Jul 2 00:23:05.055400 kernel: printk: console [tty0] enabled
Jul 2 00:23:05.055410 kernel: printk: console [ttyS0] enabled
Jul 2 00:23:05.055419 kernel: ACPI: Core revision 20230628
Jul 2 00:23:05.055429 kernel: APIC: Switch to symmetric I/O mode setup
Jul 2 00:23:05.055438 kernel: x2apic enabled
Jul 2 00:23:05.055448 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 2 00:23:05.055460 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 2 00:23:05.055469 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 2 00:23:05.055479 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Jul 2 00:23:05.055488 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jul 2 00:23:05.055498 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jul 2 00:23:05.055509 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 2 00:23:05.055523 kernel: Spectre V2 : Mitigation: Retpolines
Jul 2 00:23:05.055537 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jul 2 00:23:05.055552 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jul 2 00:23:05.055566 kernel: Speculative Store Bypass: Vulnerable
Jul 2 00:23:05.055575 kernel: x86/fpu: x87 FPU will use FXSAVE
Jul 2 00:23:05.055585 kernel: Freeing SMP alternatives memory: 32K
Jul 2 00:23:05.055595 kernel: pid_max: default: 32768 minimum: 301
Jul 2 00:23:05.055604 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Jul 2 00:23:05.055614 kernel: SELinux: Initializing.
Jul 2 00:23:05.055623 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 2 00:23:05.055633 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 2 00:23:05.055651 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Jul 2 00:23:05.055662 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:23:05.055672 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:23:05.055682 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:23:05.055695 kernel: Performance Events: AMD PMU driver.
Jul 2 00:23:05.055705 kernel: ... version: 0
Jul 2 00:23:05.055715 kernel: ... bit width: 48
Jul 2 00:23:05.055725 kernel: ... generic registers: 4
Jul 2 00:23:05.055735 kernel: ... value mask: 0000ffffffffffff
Jul 2 00:23:05.055748 kernel: ... max period: 00007fffffffffff
Jul 2 00:23:05.055758 kernel: ... fixed-purpose events: 0
Jul 2 00:23:05.055768 kernel: ... event mask: 000000000000000f
Jul 2 00:23:05.055778 kernel: signal: max sigframe size: 1440
Jul 2 00:23:05.055788 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 00:23:05.055798 kernel: rcu: Max phase no-delay instances is 400.
Jul 2 00:23:05.055808 kernel: smp: Bringing up secondary CPUs ...
Jul 2 00:23:05.055830 kernel: smpboot: x86: Booting SMP configuration:
Jul 2 00:23:05.055863 kernel: .... node #0, CPUs: #1
Jul 2 00:23:05.055888 kernel: smp: Brought up 1 node, 2 CPUs
Jul 2 00:23:05.055925 kernel: smpboot: Max logical packages: 2
Jul 2 00:23:05.055958 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Jul 2 00:23:05.055990 kernel: devtmpfs: initialized
Jul 2 00:23:05.056005 kernel: x86/mm: Memory block size: 128MB
Jul 2 00:23:05.056015 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 00:23:05.056026 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 2 00:23:05.056041 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 00:23:05.056075 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 00:23:05.056090 kernel: audit: initializing netlink subsys (disabled)
Jul 2 00:23:05.056101 kernel: audit: type=2000 audit(1719879783.862:1): state=initialized audit_enabled=0 res=1
Jul 2 00:23:05.056111 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 00:23:05.056121 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 2 00:23:05.056131 kernel: cpuidle: using governor menu
Jul 2 00:23:05.056141 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 00:23:05.056151 kernel: dca service started, version 1.12.1
Jul 2 00:23:05.056162 kernel: PCI: Using configuration type 1 for base access
Jul 2 00:23:05.056172 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 2 00:23:05.056199 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 00:23:05.056222 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 2 00:23:05.056232 kernel: ACPI: Added _OSI(Module Device)
Jul 2 00:23:05.056243 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 00:23:05.056253 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 00:23:05.056263 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 00:23:05.056273 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 2 00:23:05.056283 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 2 00:23:05.056293 kernel: ACPI: Interpreter enabled
Jul 2 00:23:05.056303 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 2 00:23:05.056316 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 2 00:23:05.056326 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 2 00:23:05.056336 kernel: PCI: Using E820 reservations for host bridge windows
Jul 2 00:23:05.056346 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jul 2 00:23:05.056357 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 2 00:23:05.056500 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jul 2 00:23:05.056611 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jul 2 00:23:05.056729 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jul 2 00:23:05.056745 kernel: acpiphp: Slot [3] registered
Jul 2 00:23:05.056755 kernel: acpiphp: Slot [4] registered
Jul 2 00:23:05.056766 kernel: acpiphp: Slot [5] registered
Jul 2 00:23:05.056776 kernel: acpiphp: Slot [6] registered
Jul 2 00:23:05.056786 kernel: acpiphp: Slot [7] registered
Jul 2 00:23:05.056796 kernel: acpiphp: Slot [8] registered
Jul 2 00:23:05.056806 kernel: acpiphp: Slot [9] registered
Jul 2 00:23:05.056816 kernel: acpiphp: Slot [10] registered
Jul 2 00:23:05.056830 kernel: acpiphp: Slot [11] registered
Jul 2 00:23:05.056840 kernel: acpiphp: Slot [12] registered
Jul 2 00:23:05.056850 kernel: acpiphp: Slot [13] registered
Jul 2 00:23:05.056860 kernel: acpiphp: Slot [14] registered
Jul 2 00:23:05.056870 kernel: acpiphp: Slot [15] registered
Jul 2 00:23:05.056880 kernel: acpiphp: Slot [16] registered
Jul 2 00:23:05.056890 kernel: acpiphp: Slot [17] registered
Jul 2 00:23:05.056900 kernel: acpiphp: Slot [18] registered
Jul 2 00:23:05.056910 kernel: acpiphp: Slot [19] registered
Jul 2 00:23:05.056921 kernel: acpiphp: Slot [20] registered
Jul 2 00:23:05.056932 kernel: acpiphp: Slot [21] registered
Jul 2 00:23:05.056942 kernel: acpiphp: Slot [22] registered
Jul 2 00:23:05.056952 kernel: acpiphp: Slot [23] registered
Jul 2 00:23:05.056962 kernel: acpiphp: Slot [24] registered
Jul 2 00:23:05.056972 kernel: acpiphp: Slot [25] registered
Jul 2 00:23:05.056982 kernel: acpiphp: Slot [26] registered
Jul 2 00:23:05.056991 kernel: acpiphp: Slot [27] registered
Jul 2 00:23:05.057001 kernel: acpiphp: Slot [28] registered
Jul 2 00:23:05.057011 kernel: acpiphp: Slot [29] registered
Jul 2 00:23:05.057023 kernel: acpiphp: Slot [30] registered
Jul 2 00:23:05.057033 kernel: acpiphp: Slot [31] registered
Jul 2 00:23:05.057043 kernel: PCI host bridge to bus 0000:00
Jul 2 00:23:05.057170 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 2 00:23:05.057265 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 2 00:23:05.057353 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 2 00:23:05.057440 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jul 2 00:23:05.057533 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jul 2 00:23:05.057619 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 2 00:23:05.057732 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jul 2 00:23:05.057842 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jul 2 00:23:05.057948 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jul 2 00:23:05.058046 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Jul 2 00:23:05.060189 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jul 2 00:23:05.060295 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jul 2 00:23:05.060393 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jul 2 00:23:05.060514 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jul 2 00:23:05.060707 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jul 2 00:23:05.060824 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jul 2 00:23:05.060941 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jul 2 00:23:05.062173 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jul 2 00:23:05.062287 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jul 2 00:23:05.062383 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Jul 2 00:23:05.062480 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Jul 2 00:23:05.062575 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Jul 2 00:23:05.062672 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 2 00:23:05.062792 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jul 2 00:23:05.062916 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Jul 2 00:23:05.063034 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Jul 2 00:23:05.063198 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Jul 2 00:23:05.063355 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Jul 2 00:23:05.063463 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jul 2 00:23:05.063563 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jul 2 00:23:05.063660 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Jul 2 00:23:05.063762 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Jul 2 00:23:05.063871 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Jul 2 00:23:05.063983 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Jul 2 00:23:05.067205 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jul 2 00:23:05.067309 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Jul 2 00:23:05.067405 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Jul 2 00:23:05.067493 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Jul 2 00:23:05.067511 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 2 00:23:05.067520 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 2 00:23:05.067529 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 2 00:23:05.067538 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 2 00:23:05.067547 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jul 2 00:23:05.067555 kernel: iommu: Default domain type: Translated
Jul 2 00:23:05.067564 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 2 00:23:05.067573 kernel: PCI: Using ACPI for IRQ routing
Jul 2 00:23:05.067581 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 2 00:23:05.067592 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 2 00:23:05.067601 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Jul 2 00:23:05.067687 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jul 2 00:23:05.067774 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jul 2 00:23:05.067860 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 2 00:23:05.067874 kernel: vgaarb: loaded
Jul 2 00:23:05.067883 kernel: clocksource: Switched to clocksource kvm-clock
Jul 2 00:23:05.067892 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 00:23:05.067904 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 00:23:05.067912 kernel: pnp: PnP ACPI init
Jul 2 00:23:05.068000 kernel: pnp 00:03: [dma 2]
Jul 2 00:23:05.068013 kernel: pnp: PnP ACPI: found 5 devices
Jul 2 00:23:05.068023 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 2 00:23:05.068032 kernel: NET: Registered PF_INET protocol family
Jul 2 00:23:05.068041 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 2 00:23:05.068067 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jul 2 00:23:05.068076 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 00:23:05.068089 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 2 00:23:05.069079 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jul 2 00:23:05.069088 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jul 2 00:23:05.069097 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 2 00:23:05.069106 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 2 00:23:05.069115 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 00:23:05.069123 kernel: NET: Registered PF_XDP protocol family
Jul 2 00:23:05.069214 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 2 00:23:05.069298 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 2 00:23:05.069374 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 2 00:23:05.069450 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jul 2 00:23:05.069526 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jul 2 00:23:05.069617 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jul 2 00:23:05.069708 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jul 2 00:23:05.069722 kernel: PCI: CLS 0 bytes, default 64
Jul 2 00:23:05.069731 kernel: Initialise system trusted keyrings
Jul 2 00:23:05.069744 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jul 2 00:23:05.069752 kernel: Key type asymmetric registered
Jul 2 00:23:05.069761 kernel: Asymmetric key parser 'x509' registered
Jul 2 00:23:05.069770 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 2 00:23:05.069778 kernel: io scheduler mq-deadline registered
Jul 2 00:23:05.069787 kernel: io scheduler kyber registered
Jul 2 00:23:05.069796 kernel: io scheduler bfq registered
Jul 2 00:23:05.069804 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 2 00:23:05.069814 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jul 2 00:23:05.069825 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jul 2 00:23:05.069834 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jul 2 00:23:05.069843 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jul 2 00:23:05.069852 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 00:23:05.069861 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 2 00:23:05.069870 kernel: random: crng init done
Jul 2 00:23:05.069878 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 2 00:23:05.069887 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 2 00:23:05.069895 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 2 00:23:05.069994 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 2 00:23:05.070010 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 2 00:23:05.072150 kernel: rtc_cmos 00:04: registered as rtc0
Jul 2 00:23:05.072243 kernel: rtc_cmos 00:04: setting system clock to 2024-07-02T00:23:04 UTC (1719879784)
Jul 2 00:23:05.072329 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jul 2 00:23:05.072350 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 2 00:23:05.072365 kernel: NET: Registered PF_INET6 protocol family
Jul 2 00:23:05.072377 kernel: Segment Routing with IPv6
Jul 2 00:23:05.072397 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 00:23:05.072408 kernel: NET: Registered PF_PACKET protocol family
Jul 2 00:23:05.072418 kernel: Key type dns_resolver registered
Jul 2 00:23:05.072428 kernel: IPI shorthand broadcast: enabled
Jul 2 00:23:05.072439 kernel: sched_clock: Marking stable (976010337, 138507198)->(1118257612, -3740077)
Jul 2 00:23:05.072449 kernel: registered taskstats version 1
Jul 2 00:23:05.072459 kernel: Loading compiled-in X.509 certificates
Jul 2 00:23:05.072470 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.36-flatcar: be1ede902d88b56c26cc000ff22391c78349d771'
Jul 2 00:23:05.072480 kernel: Key type .fscrypt registered
Jul 2 00:23:05.072492 kernel: Key type fscrypt-provisioning registered
Jul 2 00:23:05.072503 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 2 00:23:05.072513 kernel: ima: Allocated hash algorithm: sha1
Jul 2 00:23:05.072524 kernel: ima: No architecture policies found
Jul 2 00:23:05.072534 kernel: clk: Disabling unused clocks
Jul 2 00:23:05.072544 kernel: Freeing unused kernel image (initmem) memory: 49328K
Jul 2 00:23:05.072554 kernel: Write protecting the kernel read-only data: 36864k
Jul 2 00:23:05.072565 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K
Jul 2 00:23:05.072575 kernel: Run /init as init process
Jul 2 00:23:05.072587 kernel: with arguments:
Jul 2 00:23:05.072597 kernel: /init
Jul 2 00:23:05.072607 kernel: with environment:
Jul 2 00:23:05.072617 kernel: HOME=/
Jul 2 00:23:05.072627 kernel: TERM=linux
Jul 2 00:23:05.072638 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 00:23:05.072651 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 00:23:05.072666 systemd[1]: Detected virtualization kvm.
Jul 2 00:23:05.072677 systemd[1]: Detected architecture x86-64.
Jul 2 00:23:05.072688 systemd[1]: Running in initrd.
Jul 2 00:23:05.072698 systemd[1]: No hostname configured, using default hostname.
Jul 2 00:23:05.072709 systemd[1]: Hostname set to .
Jul 2 00:23:05.072721 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 00:23:05.072732 systemd[1]: Queued start job for default target initrd.target.
Jul 2 00:23:05.072743 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:23:05.072756 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:23:05.072768 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 2 00:23:05.072779 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 00:23:05.072791 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 2 00:23:05.072802 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 2 00:23:05.072814 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 2 00:23:05.072826 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 2 00:23:05.072839 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:23:05.072850 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:23:05.072861 systemd[1]: Reached target paths.target - Path Units.
Jul 2 00:23:05.072872 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 00:23:05.072894 systemd[1]: Reached target swap.target - Swaps.
Jul 2 00:23:05.072907 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 00:23:05.072921 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 00:23:05.072932 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 00:23:05.072943 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 2 00:23:05.072955 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 2 00:23:05.072967 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:23:05.072978 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:23:05.072990 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:23:05.073001 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 00:23:05.073013 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 2 00:23:05.073026 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 00:23:05.073038 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 2 00:23:05.073069 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 00:23:05.073082 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 00:23:05.073093 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 00:23:05.073105 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:23:05.073116 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 2 00:23:05.073128 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:23:05.073142 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 00:23:05.073173 systemd-journald[184]: Collecting audit messages is disabled.
Jul 2 00:23:05.073203 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 2 00:23:05.073216 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 00:23:05.073228 systemd-journald[184]: Journal started
Jul 2 00:23:05.073253 systemd-journald[184]: Runtime Journal (/run/log/journal/a197d1f46e8246e6a6e6370a53924024) is 4.9M, max 39.3M, 34.4M free.
Jul 2 00:23:05.057099 systemd-modules-load[185]: Inserted module 'overlay'
Jul 2 00:23:05.115600 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 2 00:23:05.115630 kernel: Bridge firewalling registered
Jul 2 00:23:05.115643 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 00:23:05.096570 systemd-modules-load[185]: Inserted module 'br_netfilter'
Jul 2 00:23:05.118270 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:23:05.118994 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:23:05.126202 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 00:23:05.128169 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 00:23:05.145884 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 2 00:23:05.151234 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jul 2 00:23:05.156258 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 00:23:05.160237 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 00:23:05.169393 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 00:23:05.170277 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 00:23:05.176167 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 2 00:23:05.180196 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 00:23:05.190986 dracut-cmdline[217]: dracut-dracut-053 Jul 2 00:23:05.193662 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b Jul 2 00:23:05.215173 systemd-resolved[221]: Positive Trust Anchors: Jul 2 00:23:05.215189 systemd-resolved[221]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 00:23:05.215229 systemd-resolved[221]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jul 2 00:23:05.218101 systemd-resolved[221]: Defaulting to hostname 'linux'. Jul 2 00:23:05.219120 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 00:23:05.220881 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 00:23:05.276150 kernel: SCSI subsystem initialized Jul 2 00:23:05.289078 kernel: Loading iSCSI transport class v2.0-870. Jul 2 00:23:05.304117 kernel: iscsi: registered transport (tcp) Jul 2 00:23:05.333147 kernel: iscsi: registered transport (qla4xxx) Jul 2 00:23:05.333283 kernel: QLogic iSCSI HBA Driver Jul 2 00:23:05.393394 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 2 00:23:05.401186 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 2 00:23:05.450208 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jul 2 00:23:05.450338 kernel: device-mapper: uevent: version 1.0.3 Jul 2 00:23:05.451350 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 2 00:23:05.525238 kernel: raid6: sse2x4 gen() 5589 MB/s Jul 2 00:23:05.542151 kernel: raid6: sse2x2 gen() 14786 MB/s Jul 2 00:23:05.559257 kernel: raid6: sse2x1 gen() 9984 MB/s Jul 2 00:23:05.559328 kernel: raid6: using algorithm sse2x2 gen() 14786 MB/s Jul 2 00:23:05.577320 kernel: raid6: .... xor() 9386 MB/s, rmw enabled Jul 2 00:23:05.577386 kernel: raid6: using ssse3x2 recovery algorithm Jul 2 00:23:05.606496 kernel: xor: measuring software checksum speed Jul 2 00:23:05.606560 kernel: prefetch64-sse : 18634 MB/sec Jul 2 00:23:05.607104 kernel: generic_sse : 16934 MB/sec Jul 2 00:23:05.608345 kernel: xor: using function: prefetch64-sse (18634 MB/sec) Jul 2 00:23:05.818129 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 2 00:23:05.836892 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 2 00:23:05.846328 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 00:23:05.891660 systemd-udevd[404]: Using default interface naming scheme 'v255'. Jul 2 00:23:05.902595 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 00:23:05.917529 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 2 00:23:05.949763 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation Jul 2 00:23:05.998110 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 2 00:23:06.006366 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 2 00:23:06.066361 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 00:23:06.072246 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 2 00:23:06.090745 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Jul 2 00:23:06.092626 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 00:23:06.093958 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 00:23:06.095148 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 00:23:06.103209 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 2 00:23:06.135702 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 2 00:23:06.153084 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues Jul 2 00:23:06.188486 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB) Jul 2 00:23:06.188605 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 2 00:23:06.188619 kernel: GPT:17805311 != 41943039 Jul 2 00:23:06.188630 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 2 00:23:06.188641 kernel: GPT:17805311 != 41943039 Jul 2 00:23:06.188652 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 2 00:23:06.188666 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 00:23:06.194412 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 00:23:06.194940 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 00:23:06.197178 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 00:23:06.197677 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 00:23:06.197809 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:23:06.198318 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 00:23:06.211653 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 2 00:23:06.217852 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (451) Jul 2 00:23:06.230075 kernel: libata version 3.00 loaded. Jul 2 00:23:06.247078 kernel: BTRFS: device fsid 2fd636b8-f582-46f8-bde2-15e56e3958c1 devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (468) Jul 2 00:23:06.252274 kernel: ata_piix 0000:00:01.1: version 2.13 Jul 2 00:23:06.264216 kernel: scsi host0: ata_piix Jul 2 00:23:06.264354 kernel: scsi host1: ata_piix Jul 2 00:23:06.264465 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Jul 2 00:23:06.264479 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Jul 2 00:23:06.251146 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 2 00:23:06.279973 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:23:06.290313 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 2 00:23:06.294889 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 2 00:23:06.295476 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 2 00:23:06.301928 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 2 00:23:06.317266 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 2 00:23:06.323544 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 00:23:06.328693 disk-uuid[501]: Primary Header is updated. Jul 2 00:23:06.328693 disk-uuid[501]: Secondary Entries is updated. Jul 2 00:23:06.328693 disk-uuid[501]: Secondary Header is updated. 
Jul 2 00:23:06.340111 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 00:23:06.353088 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 00:23:06.355875 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 00:23:06.366347 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 00:23:07.376146 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 00:23:07.377158 disk-uuid[502]: The operation has completed successfully. Jul 2 00:23:07.457778 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 00:23:07.457923 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 2 00:23:07.481196 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 2 00:23:07.490522 sh[526]: Success Jul 2 00:23:07.513082 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Jul 2 00:23:07.593872 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 2 00:23:07.618347 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 2 00:23:07.622198 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 2 00:23:07.664538 kernel: BTRFS info (device dm-0): first mount of filesystem 2fd636b8-f582-46f8-bde2-15e56e3958c1 Jul 2 00:23:07.664612 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 2 00:23:07.676745 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 2 00:23:07.680160 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 2 00:23:07.682135 kernel: BTRFS info (device dm-0): using free space tree Jul 2 00:23:07.696812 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 2 00:23:07.697956 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Jul 2 00:23:07.710202 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 2 00:23:07.715240 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 2 00:23:07.735443 kernel: BTRFS info (device vda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346 Jul 2 00:23:07.735518 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 00:23:07.739328 kernel: BTRFS info (device vda6): using free space tree Jul 2 00:23:07.750083 kernel: BTRFS info (device vda6): auto enabling async discard Jul 2 00:23:07.769909 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 2 00:23:07.774090 kernel: BTRFS info (device vda6): last unmount of filesystem e2db191f-38b3-4d65-844a-7255916ec346 Jul 2 00:23:07.784938 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 2 00:23:07.794411 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 2 00:23:07.864382 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 2 00:23:07.873300 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 2 00:23:07.899376 systemd-networkd[709]: lo: Link UP Jul 2 00:23:07.899386 systemd-networkd[709]: lo: Gained carrier Jul 2 00:23:07.900639 systemd-networkd[709]: Enumeration completed Jul 2 00:23:07.900741 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 00:23:07.901164 systemd-networkd[709]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 00:23:07.901168 systemd-networkd[709]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 2 00:23:07.903123 systemd-networkd[709]: eth0: Link UP Jul 2 00:23:07.903127 systemd-networkd[709]: eth0: Gained carrier Jul 2 00:23:07.903134 systemd-networkd[709]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 00:23:07.908530 systemd[1]: Reached target network.target - Network. Jul 2 00:23:07.922556 systemd-networkd[709]: eth0: DHCPv4 address 172.24.4.162/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jul 2 00:23:07.944488 ignition[635]: Ignition 2.18.0 Jul 2 00:23:07.944507 ignition[635]: Stage: fetch-offline Jul 2 00:23:07.946808 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 00:23:07.944561 ignition[635]: no configs at "/usr/lib/ignition/base.d" Jul 2 00:23:07.944573 ignition[635]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 2 00:23:07.944776 ignition[635]: parsed url from cmdline: "" Jul 2 00:23:07.944781 ignition[635]: no config URL provided Jul 2 00:23:07.944788 ignition[635]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 00:23:07.944797 ignition[635]: no config at "/usr/lib/ignition/user.ign" Jul 2 00:23:07.944803 ignition[635]: failed to fetch config: resource requires networking Jul 2 00:23:07.945150 ignition[635]: Ignition finished successfully Jul 2 00:23:07.954329 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jul 2 00:23:07.967276 ignition[718]: Ignition 2.18.0 Jul 2 00:23:07.967291 ignition[718]: Stage: fetch Jul 2 00:23:07.967574 ignition[718]: no configs at "/usr/lib/ignition/base.d" Jul 2 00:23:07.967586 ignition[718]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 2 00:23:07.967694 ignition[718]: parsed url from cmdline: "" Jul 2 00:23:07.967698 ignition[718]: no config URL provided Jul 2 00:23:07.967705 ignition[718]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 00:23:07.967714 ignition[718]: no config at "/usr/lib/ignition/user.ign" Jul 2 00:23:07.967807 ignition[718]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jul 2 00:23:07.968280 ignition[718]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jul 2 00:23:07.968310 ignition[718]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jul 2 00:23:08.170407 ignition[718]: GET result: OK Jul 2 00:23:08.170589 ignition[718]: parsing config with SHA512: 2f2c2fe2870e6a211b24472a3300e6492f4be4de18b3a670c95bf612ac560c4bd9880c3d719f038121ad007126e6bfa2d5230041ecc0447c10f8638e5628d3d0 Jul 2 00:23:08.180476 unknown[718]: fetched base config from "system" Jul 2 00:23:08.180503 unknown[718]: fetched base config from "system" Jul 2 00:23:08.181457 ignition[718]: fetch: fetch complete Jul 2 00:23:08.180518 unknown[718]: fetched user config from "openstack" Jul 2 00:23:08.181470 ignition[718]: fetch: fetch passed Jul 2 00:23:08.185233 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 2 00:23:08.181557 ignition[718]: Ignition finished successfully Jul 2 00:23:08.199530 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jul 2 00:23:08.231905 ignition[725]: Ignition 2.18.0 Jul 2 00:23:08.231934 ignition[725]: Stage: kargs Jul 2 00:23:08.232378 ignition[725]: no configs at "/usr/lib/ignition/base.d" Jul 2 00:23:08.232406 ignition[725]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 2 00:23:08.234834 ignition[725]: kargs: kargs passed Jul 2 00:23:08.237104 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 2 00:23:08.234937 ignition[725]: Ignition finished successfully Jul 2 00:23:08.251465 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 2 00:23:08.275815 ignition[733]: Ignition 2.18.0 Jul 2 00:23:08.275830 ignition[733]: Stage: disks Jul 2 00:23:08.276036 ignition[733]: no configs at "/usr/lib/ignition/base.d" Jul 2 00:23:08.278132 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 2 00:23:08.276079 ignition[733]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 2 00:23:08.279733 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 2 00:23:08.277103 ignition[733]: disks: disks passed Jul 2 00:23:08.281114 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 2 00:23:08.277157 ignition[733]: Ignition finished successfully Jul 2 00:23:08.282915 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 2 00:23:08.284983 systemd[1]: Reached target sysinit.target - System Initialization. Jul 2 00:23:08.287092 systemd[1]: Reached target basic.target - Basic System. Jul 2 00:23:08.294223 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 2 00:23:08.319746 systemd-fsck[742]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jul 2 00:23:08.328389 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 2 00:23:08.338268 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jul 2 00:23:08.506112 kernel: EXT4-fs (vda9): mounted filesystem c5a17c06-b440-4aab-a0fa-5b60bb1d8586 r/w with ordered data mode. Quota mode: none. Jul 2 00:23:08.506325 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 2 00:23:08.507403 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 2 00:23:08.515288 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 2 00:23:08.519307 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 2 00:23:08.522700 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 2 00:23:08.526029 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jul 2 00:23:08.530851 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (750) Jul 2 00:23:08.530366 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 2 00:23:08.543574 kernel: BTRFS info (device vda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346 Jul 2 00:23:08.543617 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 00:23:08.543647 kernel: BTRFS info (device vda6): using free space tree Jul 2 00:23:08.543675 kernel: BTRFS info (device vda6): auto enabling async discard Jul 2 00:23:08.530394 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 2 00:23:08.532138 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 2 00:23:08.547352 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 2 00:23:08.556232 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jul 2 00:23:08.698334 initrd-setup-root[778]: cut: /sysroot/etc/passwd: No such file or directory Jul 2 00:23:08.704277 initrd-setup-root[785]: cut: /sysroot/etc/group: No such file or directory Jul 2 00:23:08.710337 initrd-setup-root[792]: cut: /sysroot/etc/shadow: No such file or directory Jul 2 00:23:08.715434 initrd-setup-root[799]: cut: /sysroot/etc/gshadow: No such file or directory Jul 2 00:23:08.825510 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 2 00:23:08.835148 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 2 00:23:08.838191 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 2 00:23:08.849836 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 2 00:23:08.853107 kernel: BTRFS info (device vda6): last unmount of filesystem e2db191f-38b3-4d65-844a-7255916ec346 Jul 2 00:23:08.876593 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 2 00:23:08.883333 ignition[867]: INFO : Ignition 2.18.0 Jul 2 00:23:08.883333 ignition[867]: INFO : Stage: mount Jul 2 00:23:08.884475 ignition[867]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 00:23:08.884475 ignition[867]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 2 00:23:08.887470 ignition[867]: INFO : mount: mount passed Jul 2 00:23:08.887470 ignition[867]: INFO : Ignition finished successfully Jul 2 00:23:08.886370 systemd[1]: Finished ignition-mount.service - Ignition (mount). 
Jul 2 00:23:09.759747 systemd-networkd[709]: eth0: Gained IPv6LL Jul 2 00:23:15.776338 coreos-metadata[752]: Jul 02 00:23:15.776 WARN failed to locate config-drive, using the metadata service API instead Jul 2 00:23:15.816426 coreos-metadata[752]: Jul 02 00:23:15.816 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jul 2 00:23:15.828873 coreos-metadata[752]: Jul 02 00:23:15.828 INFO Fetch successful Jul 2 00:23:15.828873 coreos-metadata[752]: Jul 02 00:23:15.828 INFO wrote hostname ci-3975-1-1-5-578c77618a.novalocal to /sysroot/etc/hostname Jul 2 00:23:15.833892 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jul 2 00:23:15.836120 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jul 2 00:23:15.845225 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 2 00:23:15.885393 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 2 00:23:15.914907 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (884) Jul 2 00:23:15.915009 kernel: BTRFS info (device vda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346 Jul 2 00:23:15.917225 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 00:23:15.920203 kernel: BTRFS info (device vda6): using free space tree Jul 2 00:23:15.929170 kernel: BTRFS info (device vda6): auto enabling async discard Jul 2 00:23:15.934344 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 2 00:23:15.977153 ignition[902]: INFO : Ignition 2.18.0 Jul 2 00:23:15.978036 ignition[902]: INFO : Stage: files Jul 2 00:23:15.980025 ignition[902]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 00:23:15.980025 ignition[902]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 2 00:23:15.980025 ignition[902]: DEBUG : files: compiled without relabeling support, skipping Jul 2 00:23:15.982104 ignition[902]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 2 00:23:15.982104 ignition[902]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 2 00:23:15.987669 ignition[902]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 2 00:23:15.988439 ignition[902]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 2 00:23:15.989159 ignition[902]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 2 00:23:15.988730 unknown[902]: wrote ssh authorized keys file for user: core Jul 2 00:23:15.992076 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 00:23:15.993031 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 2 00:23:16.077848 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 2 00:23:16.411141 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 00:23:16.411141 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 2 00:23:16.411141 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jul 2 00:23:16.949895 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 2 00:23:17.407002 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 2 00:23:17.407002 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 2 00:23:17.407002 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 2 00:23:17.407002 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 2 00:23:17.416041 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 2 00:23:17.416041 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 00:23:17.416041 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 00:23:17.416041 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 00:23:17.416041 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 00:23:17.416041 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 00:23:17.416041 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 00:23:17.416041 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing 
link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jul 2 00:23:17.416041 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jul 2 00:23:17.416041 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jul 2 00:23:17.416041 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jul 2 00:23:17.904023 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 2 00:23:19.604358 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jul 2 00:23:19.604358 ignition[902]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 2 00:23:19.616973 ignition[902]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 00:23:19.619395 ignition[902]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 00:23:19.619395 ignition[902]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jul 2 00:23:19.619395 ignition[902]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jul 2 00:23:19.619395 ignition[902]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jul 2 00:23:19.619395 ignition[902]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 2 00:23:19.619395 ignition[902]: INFO : 
files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 2 00:23:19.619395 ignition[902]: INFO : files: files passed Jul 2 00:23:19.619395 ignition[902]: INFO : Ignition finished successfully Jul 2 00:23:19.620766 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 2 00:23:19.632245 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 2 00:23:19.638235 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 2 00:23:19.647262 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 2 00:23:19.647406 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 2 00:23:19.662077 initrd-setup-root-after-ignition[931]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 00:23:19.662077 initrd-setup-root-after-ignition[931]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 2 00:23:19.664951 initrd-setup-root-after-ignition[935]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 00:23:19.666848 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 2 00:23:19.668120 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 2 00:23:19.674405 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 2 00:23:19.709033 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 2 00:23:19.710567 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 2 00:23:19.712607 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 2 00:23:19.714613 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 2 00:23:19.716674 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. 
Jul 2 00:23:19.725437 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 2 00:23:19.749307 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 2 00:23:19.754297 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 2 00:23:19.770107 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 2 00:23:19.771497 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 00:23:19.772910 systemd[1]: Stopped target timers.target - Timer Units. Jul 2 00:23:19.773472 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 2 00:23:19.773606 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 2 00:23:19.775019 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 2 00:23:19.775738 systemd[1]: Stopped target basic.target - Basic System. Jul 2 00:23:19.776930 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 2 00:23:19.778089 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 2 00:23:19.779190 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 2 00:23:19.780345 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 2 00:23:19.781518 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 00:23:19.782638 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 2 00:23:19.783768 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 2 00:23:19.784935 systemd[1]: Stopped target swap.target - Swaps. Jul 2 00:23:19.786012 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 2 00:23:19.786168 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 2 00:23:19.787529 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Jul 2 00:23:19.788342 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:23:19.789374 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 2 00:23:19.789496 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:23:19.790561 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 2 00:23:19.790687 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 2 00:23:19.792218 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 2 00:23:19.792348 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 00:23:19.793493 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 2 00:23:19.793607 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 2 00:23:19.811615 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 2 00:23:19.812223 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 2 00:23:19.812421 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:23:19.816460 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 2 00:23:19.817110 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 2 00:23:19.817333 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:23:19.818625 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 2 00:23:19.819269 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 00:23:19.825883 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 2 00:23:19.825987 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 2 00:23:19.832072 ignition[955]: INFO : Ignition 2.18.0
Jul 2 00:23:19.832760 ignition[955]: INFO : Stage: umount
Jul 2 00:23:19.833222 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:23:19.833222 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 2 00:23:19.836392 ignition[955]: INFO : umount: umount passed
Jul 2 00:23:19.836392 ignition[955]: INFO : Ignition finished successfully
Jul 2 00:23:19.838186 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 2 00:23:19.838593 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 2 00:23:19.839887 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 2 00:23:19.841043 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 2 00:23:19.842594 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 2 00:23:19.842667 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 2 00:23:19.844966 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 2 00:23:19.845031 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 2 00:23:19.845708 systemd[1]: Stopped target network.target - Network.
Jul 2 00:23:19.846358 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 2 00:23:19.846422 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 00:23:19.847135 systemd[1]: Stopped target paths.target - Path Units.
Jul 2 00:23:19.848265 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 2 00:23:19.852202 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:23:19.854136 systemd[1]: Stopped target slices.target - Slice Units.
Jul 2 00:23:19.855148 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 2 00:23:19.855684 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 2 00:23:19.855760 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 00:23:19.856301 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 2 00:23:19.856335 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 00:23:19.857267 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 2 00:23:19.857311 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 2 00:23:19.858203 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 2 00:23:19.858241 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 2 00:23:19.859281 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 2 00:23:19.860410 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 2 00:23:19.862333 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 2 00:23:19.863031 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 2 00:23:19.863152 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 2 00:23:19.863186 systemd-networkd[709]: eth0: DHCPv6 lease lost
Jul 2 00:23:19.865221 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 2 00:23:19.865290 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 2 00:23:19.867958 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 2 00:23:19.868073 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 2 00:23:19.870307 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 2 00:23:19.870420 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 2 00:23:19.871899 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 2 00:23:19.871947 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:23:19.878195 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 2 00:23:19.878714 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 2 00:23:19.878775 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 00:23:19.879426 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 00:23:19.879472 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:23:19.880080 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 2 00:23:19.880126 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:23:19.881118 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 2 00:23:19.881161 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:23:19.888868 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:23:19.902496 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 2 00:23:19.902654 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:23:19.904028 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 2 00:23:19.904205 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 2 00:23:19.905390 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 2 00:23:19.905451 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:23:19.906516 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 2 00:23:19.906548 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:23:19.907660 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 2 00:23:19.907711 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 00:23:19.909238 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 2 00:23:19.909279 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 2 00:23:19.910332 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 00:23:19.910374 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:23:19.920441 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 2 00:23:19.921030 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 2 00:23:19.921110 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:23:19.921727 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:23:19.921790 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:23:19.926734 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 2 00:23:19.926834 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 2 00:23:19.928570 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 2 00:23:19.936608 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 2 00:23:19.945762 systemd[1]: Switching root.
Jul 2 00:23:19.974854 systemd-journald[184]: Journal stopped
Jul 2 00:23:22.074702 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Jul 2 00:23:22.074751 kernel: SELinux: policy capability network_peer_controls=1
Jul 2 00:23:22.074766 kernel: SELinux: policy capability open_perms=1
Jul 2 00:23:22.074778 kernel: SELinux: policy capability extended_socket_class=1
Jul 2 00:23:22.074790 kernel: SELinux: policy capability always_check_network=0
Jul 2 00:23:22.074806 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 2 00:23:22.074818 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 2 00:23:22.074834 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 2 00:23:22.074850 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 2 00:23:22.074862 kernel: audit: type=1403 audit(1719879800.968:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 2 00:23:22.074877 systemd[1]: Successfully loaded SELinux policy in 71.266ms.
Jul 2 00:23:22.074899 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.509ms.
Jul 2 00:23:22.074913 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 00:23:22.074945 systemd[1]: Detected virtualization kvm.
Jul 2 00:23:22.074966 systemd[1]: Detected architecture x86-64.
Jul 2 00:23:22.074979 systemd[1]: Detected first boot.
Jul 2 00:23:22.074995 systemd[1]: Hostname set to .
Jul 2 00:23:22.075008 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 00:23:22.075021 zram_generator::config[998]: No configuration found.
Jul 2 00:23:22.075035 systemd[1]: Populated /etc with preset unit settings.
Jul 2 00:23:22.075064 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 2 00:23:22.075079 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 2 00:23:22.075092 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 2 00:23:22.075107 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 2 00:23:22.075123 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 2 00:23:22.075136 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 2 00:23:22.075149 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 2 00:23:22.075161 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 2 00:23:22.075174 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 2 00:23:22.075187 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 2 00:23:22.075200 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 2 00:23:22.075213 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:23:22.075226 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:23:22.075243 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 2 00:23:22.075256 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 2 00:23:22.075270 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 2 00:23:22.075282 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 00:23:22.075295 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 2 00:23:22.075308 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:23:22.075320 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 2 00:23:22.075335 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 2 00:23:22.075348 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 2 00:23:22.075361 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 2 00:23:22.075374 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:23:22.075388 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 00:23:22.075401 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 00:23:22.075413 systemd[1]: Reached target swap.target - Swaps.
Jul 2 00:23:22.075426 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 2 00:23:22.075441 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 2 00:23:22.075454 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:23:22.075467 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:23:22.075480 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:23:22.075492 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 2 00:23:22.075505 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 2 00:23:22.075517 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 2 00:23:22.075530 systemd[1]: Mounting media.mount - External Media Directory...
Jul 2 00:23:22.075543 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:23:22.075558 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 2 00:23:22.075588 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 2 00:23:22.075603 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 2 00:23:22.075617 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 2 00:23:22.075630 systemd[1]: Reached target machines.target - Containers.
Jul 2 00:23:22.075643 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 2 00:23:22.075656 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:23:22.075669 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 00:23:22.075683 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 2 00:23:22.075699 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:23:22.075711 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 00:23:22.075724 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:23:22.075737 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 2 00:23:22.075749 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:23:22.075762 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 2 00:23:22.075774 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 2 00:23:22.075787 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 2 00:23:22.075801 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 2 00:23:22.075814 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 2 00:23:22.075826 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 00:23:22.075839 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 00:23:22.075851 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 2 00:23:22.075864 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 2 00:23:22.075876 kernel: loop: module loaded
Jul 2 00:23:22.075888 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 00:23:22.075901 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 2 00:23:22.075917 systemd[1]: Stopped verity-setup.service.
Jul 2 00:23:22.075930 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:23:22.075942 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 2 00:23:22.075953 kernel: fuse: init (API version 7.39)
Jul 2 00:23:22.075964 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 2 00:23:22.075977 systemd[1]: Mounted media.mount - External Media Directory.
Jul 2 00:23:22.075990 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 2 00:23:22.076003 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 2 00:23:22.076015 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 2 00:23:22.076028 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:23:22.076041 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 2 00:23:22.076087 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 2 00:23:22.076101 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:23:22.076115 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:23:22.076127 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:23:22.076139 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:23:22.076153 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 2 00:23:22.076165 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 2 00:23:22.076177 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:23:22.076191 kernel: ACPI: bus type drm_connector registered
Jul 2 00:23:22.076203 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:23:22.076214 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 00:23:22.076226 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 00:23:22.076261 systemd-journald[1093]: Collecting audit messages is disabled.
Jul 2 00:23:22.076285 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 2 00:23:22.076298 systemd-journald[1093]: Journal started
Jul 2 00:23:22.076326 systemd-journald[1093]: Runtime Journal (/run/log/journal/a197d1f46e8246e6a6e6370a53924024) is 4.9M, max 39.3M, 34.4M free.
Jul 2 00:23:21.684203 systemd[1]: Queued start job for default target multi-user.target.
Jul 2 00:23:21.705203 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 2 00:23:21.705604 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 2 00:23:22.079099 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 00:23:22.079615 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:23:22.080500 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 2 00:23:22.081351 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 2 00:23:22.095161 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 2 00:23:22.103140 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 2 00:23:22.113148 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 2 00:23:22.113819 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 2 00:23:22.113856 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 00:23:22.116784 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 2 00:23:22.121230 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 2 00:23:22.127320 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 2 00:23:22.128005 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:23:22.132210 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 2 00:23:22.136491 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 2 00:23:22.139408 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 00:23:22.141116 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 2 00:23:22.141686 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 00:23:22.150692 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 00:23:22.154238 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 2 00:23:22.157179 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 2 00:23:22.160653 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 2 00:23:22.161333 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 2 00:23:22.162743 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 2 00:23:22.167120 systemd-journald[1093]: Time spent on flushing to /var/log/journal/a197d1f46e8246e6a6e6370a53924024 is 72.419ms for 940 entries.
Jul 2 00:23:22.167120 systemd-journald[1093]: System Journal (/var/log/journal/a197d1f46e8246e6a6e6370a53924024) is 8.0M, max 584.8M, 576.8M free.
Jul 2 00:23:22.269490 systemd-journald[1093]: Received client request to flush runtime journal.
Jul 2 00:23:22.269542 kernel: loop0: detected capacity change from 0 to 8
Jul 2 00:23:22.269574 kernel: block loop0: the capability attribute has been deprecated.
Jul 2 00:23:22.269779 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 2 00:23:22.191150 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:23:22.192538 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 2 00:23:22.194687 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 2 00:23:22.207411 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 2 00:23:22.214334 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 2 00:23:22.264137 udevadm[1140]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jul 2 00:23:22.272903 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 2 00:23:22.279834 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:23:22.299082 kernel: loop1: detected capacity change from 0 to 80568
Jul 2 00:23:22.340143 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 2 00:23:22.342981 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 2 00:23:22.346467 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 2 00:23:22.358281 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 00:23:22.387304 systemd-tmpfiles[1150]: ACLs are not supported, ignoring.
Jul 2 00:23:22.387717 systemd-tmpfiles[1150]: ACLs are not supported, ignoring.
Jul 2 00:23:22.394253 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:23:22.395072 kernel: loop2: detected capacity change from 0 to 139904
Jul 2 00:23:22.498819 kernel: loop3: detected capacity change from 0 to 210664
Jul 2 00:23:22.593691 kernel: loop4: detected capacity change from 0 to 8
Jul 2 00:23:22.599595 kernel: loop5: detected capacity change from 0 to 80568
Jul 2 00:23:22.644138 kernel: loop6: detected capacity change from 0 to 139904
Jul 2 00:23:22.732108 kernel: loop7: detected capacity change from 0 to 210664
Jul 2 00:23:22.771602 (sd-merge)[1156]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Jul 2 00:23:22.772160 (sd-merge)[1156]: Merged extensions into '/usr'.
Jul 2 00:23:22.784918 systemd[1]: Reloading requested from client PID 1130 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 2 00:23:22.784935 systemd[1]: Reloading...
Jul 2 00:23:22.895078 zram_generator::config[1178]: No configuration found.
Jul 2 00:23:23.118672 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:23:23.192086 systemd[1]: Reloading finished in 406 ms.
Jul 2 00:23:23.232241 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 2 00:23:23.234846 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 2 00:23:23.254473 systemd[1]: Starting ensure-sysext.service...
Jul 2 00:23:23.264396 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 00:23:23.275327 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:23:23.280636 systemd[1]: Reloading requested from client PID 1237 ('systemctl') (unit ensure-sysext.service)...
Jul 2 00:23:23.280659 systemd[1]: Reloading...
Jul 2 00:23:23.307184 ldconfig[1125]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 2 00:23:23.322684 systemd-tmpfiles[1238]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 2 00:23:23.323245 systemd-tmpfiles[1238]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 2 00:23:23.325344 systemd-udevd[1239]: Using default interface naming scheme 'v255'.
Jul 2 00:23:23.327231 systemd-tmpfiles[1238]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 2 00:23:23.327663 systemd-tmpfiles[1238]: ACLs are not supported, ignoring.
Jul 2 00:23:23.327749 systemd-tmpfiles[1238]: ACLs are not supported, ignoring.
Jul 2 00:23:23.335255 systemd-tmpfiles[1238]: Detected autofs mount point /boot during canonicalization of boot.
Jul 2 00:23:23.335270 systemd-tmpfiles[1238]: Skipping /boot
Jul 2 00:23:23.355983 systemd-tmpfiles[1238]: Detected autofs mount point /boot during canonicalization of boot.
Jul 2 00:23:23.356001 systemd-tmpfiles[1238]: Skipping /boot
Jul 2 00:23:23.371209 zram_generator::config[1263]: No configuration found.
Jul 2 00:23:23.434079 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1281)
Jul 2 00:23:23.478170 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1272)
Jul 2 00:23:23.554476 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jul 2 00:23:23.590311 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jul 2 00:23:23.599339 kernel: ACPI: button: Power Button [PWRF]
Jul 2 00:23:23.665627 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:23:23.674084 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jul 2 00:23:23.681084 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jul 2 00:23:23.683108 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jul 2 00:23:23.689092 kernel: Console: switching to colour dummy device 80x25
Jul 2 00:23:23.689142 kernel: mousedev: PS/2 mouse device common for all mice
Jul 2 00:23:23.693120 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jul 2 00:23:23.693167 kernel: [drm] features: -context_init
Jul 2 00:23:23.694351 kernel: [drm] number of scanouts: 1
Jul 2 00:23:23.694389 kernel: [drm] number of cap sets: 0
Jul 2 00:23:23.698084 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Jul 2 00:23:23.708902 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jul 2 00:23:23.709000 kernel: Console: switching to colour frame buffer device 128x48
Jul 2 00:23:23.715090 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jul 2 00:23:23.754219 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 2 00:23:23.754634 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 2 00:23:23.757711 systemd[1]: Reloading finished in 476 ms.
Jul 2 00:23:23.776668 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:23:23.777440 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 2 00:23:23.789825 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:23:23.825573 systemd[1]: Finished ensure-sysext.service.
Jul 2 00:23:23.844997 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:23:23.852513 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 2 00:23:23.866538 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 2 00:23:23.867164 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:23:23.878747 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:23:23.885422 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 00:23:23.896943 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:23:23.904316 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:23:23.904597 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:23:23.907277 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 2 00:23:23.911521 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 2 00:23:23.921674 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 00:23:23.931287 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 00:23:23.941997 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 2 00:23:23.946236 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 2 00:23:23.950385 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:23:23.950533 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:23:23.954147 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 2 00:23:23.956396 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:23:23.956734 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:23:23.958449 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 00:23:23.958639 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 00:23:23.959885 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:23:23.960042 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:23:23.966272 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 2 00:23:23.972808 augenrules[1376]: No rules
Jul 2 00:23:23.975309 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 2 00:23:23.986146 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:23:23.987371 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:23:24.005540 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 2 00:23:24.006308 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 00:23:24.006537 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 00:23:24.010201 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 2 00:23:24.016264 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 2 00:23:24.023975 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 2 00:23:24.035168 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 2 00:23:24.041835 lvm[1390]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 00:23:24.076229 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 2 00:23:24.079250 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 2 00:23:24.084287 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 2 00:23:24.096281 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 2 00:23:24.102341 lvm[1398]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 00:23:24.109320 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 2 00:23:24.125182 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 2 00:23:24.131595 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 00:23:24.133507 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
Jul 2 00:23:24.153705 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:23:24.214312 systemd-resolved[1372]: Positive Trust Anchors: Jul 2 00:23:24.214693 systemd-resolved[1372]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 00:23:24.214790 systemd-resolved[1372]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jul 2 00:23:24.220415 systemd-resolved[1372]: Using system hostname 'ci-3975-1-1-5-578c77618a.novalocal'. Jul 2 00:23:24.222594 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 00:23:24.223523 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 00:23:24.235209 systemd-networkd[1370]: lo: Link UP Jul 2 00:23:24.235480 systemd-networkd[1370]: lo: Gained carrier Jul 2 00:23:24.236862 systemd-networkd[1370]: Enumeration completed Jul 2 00:23:24.237476 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 00:23:24.238111 systemd[1]: Reached target network.target - Network. Jul 2 00:23:24.242266 systemd-networkd[1370]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 00:23:24.242396 systemd-networkd[1370]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 2 00:23:24.243669 systemd-networkd[1370]: eth0: Link UP Jul 2 00:23:24.244133 systemd-networkd[1370]: eth0: Gained carrier Jul 2 00:23:24.244154 systemd-networkd[1370]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 00:23:24.253001 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 2 00:23:24.254001 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 2 00:23:24.254678 systemd[1]: Reached target sysinit.target - System Initialization. Jul 2 00:23:24.257123 systemd-networkd[1370]: eth0: DHCPv4 address 172.24.4.162/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jul 2 00:23:24.257373 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 2 00:23:24.258226 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 2 00:23:24.258718 systemd-timesyncd[1373]: Network configuration changed, trying to establish connection. Jul 2 00:23:24.261685 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 2 00:23:24.262474 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 00:23:24.262517 systemd[1]: Reached target paths.target - Path Units. Jul 2 00:23:24.264631 systemd[1]: Reached target time-set.target - System Time Set. Jul 2 00:23:24.266768 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 2 00:23:24.268861 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 2 00:23:24.270845 systemd[1]: Reached target timers.target - Timer Units. Jul 2 00:23:24.273362 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
Jul 2 00:23:24.277165 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 2 00:23:24.282756 systemd-timesyncd[1373]: Contacted time server 95.81.173.74:123 (0.flatcar.pool.ntp.org). Jul 2 00:23:24.282813 systemd-timesyncd[1373]: Initial clock synchronization to Tue 2024-07-02 00:23:24.517197 UTC. Jul 2 00:23:24.287873 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 2 00:23:24.292212 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 2 00:23:24.297798 systemd[1]: Reached target sockets.target - Socket Units. Jul 2 00:23:24.301321 systemd[1]: Reached target basic.target - Basic System. Jul 2 00:23:24.302013 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 2 00:23:24.302076 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 2 00:23:24.317181 systemd[1]: Starting containerd.service - containerd container runtime... Jul 2 00:23:24.324216 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 2 00:23:24.330276 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 2 00:23:24.340326 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 2 00:23:24.349637 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 2 00:23:24.352974 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 2 00:23:24.356222 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 2 00:23:24.362030 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 2 00:23:24.373601 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Jul 2 00:23:24.377298 jq[1421]: false Jul 2 00:23:24.384351 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 2 00:23:24.399156 extend-filesystems[1422]: Found loop4 Jul 2 00:23:24.399156 extend-filesystems[1422]: Found loop5 Jul 2 00:23:24.399156 extend-filesystems[1422]: Found loop6 Jul 2 00:23:24.399156 extend-filesystems[1422]: Found loop7 Jul 2 00:23:24.399156 extend-filesystems[1422]: Found vda Jul 2 00:23:24.399156 extend-filesystems[1422]: Found vda1 Jul 2 00:23:24.399156 extend-filesystems[1422]: Found vda2 Jul 2 00:23:24.399156 extend-filesystems[1422]: Found vda3 Jul 2 00:23:24.399156 extend-filesystems[1422]: Found usr Jul 2 00:23:24.399156 extend-filesystems[1422]: Found vda4 Jul 2 00:23:24.399156 extend-filesystems[1422]: Found vda6 Jul 2 00:23:24.399156 extend-filesystems[1422]: Found vda7 Jul 2 00:23:24.399156 extend-filesystems[1422]: Found vda9 Jul 2 00:23:24.399156 extend-filesystems[1422]: Checking size of /dev/vda9 Jul 2 00:23:24.393517 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 2 00:23:24.399141 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 2 00:23:24.400025 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 2 00:23:24.414108 systemd[1]: Starting update-engine.service - Update Engine... Jul 2 00:23:24.478182 jq[1433]: true Jul 2 00:23:24.417542 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 2 00:23:24.445381 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 00:23:24.445578 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 2 00:23:24.450157 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Jul 2 00:23:24.450318 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 2 00:23:24.502987 update_engine[1432]: I0702 00:23:24.498653 1432 main.cc:92] Flatcar Update Engine starting Jul 2 00:23:24.513045 jq[1438]: true Jul 2 00:23:24.527106 extend-filesystems[1422]: Resized partition /dev/vda9 Jul 2 00:23:24.558674 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks Jul 2 00:23:24.540990 (ntainerd)[1446]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 2 00:23:24.560124 extend-filesystems[1457]: resize2fs 1.47.0 (5-Feb-2023) Jul 2 00:23:24.541126 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 00:23:24.571880 tar[1437]: linux-amd64/helm Jul 2 00:23:24.541349 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 2 00:23:24.591780 dbus-daemon[1418]: [system] SELinux support is enabled Jul 2 00:23:24.592078 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 2 00:23:24.597179 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 00:23:24.597217 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 2 00:23:24.602585 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 00:23:24.602633 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 2 00:23:24.610644 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1273) Jul 2 00:23:24.613156 systemd-logind[1429]: New seat seat0. 
Jul 2 00:23:24.617022 systemd-logind[1429]: Watching system buttons on /dev/input/event1 (Power Button) Jul 2 00:23:24.617093 systemd-logind[1429]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 2 00:23:24.618089 kernel: EXT4-fs (vda9): resized filesystem to 4635643 Jul 2 00:23:24.618609 systemd[1]: Started systemd-logind.service - User Login Management. Jul 2 00:23:24.630172 systemd[1]: Started update-engine.service - Update Engine. Jul 2 00:23:24.740772 bash[1474]: Updated "/home/core/.ssh/authorized_keys" Jul 2 00:23:24.740895 update_engine[1432]: I0702 00:23:24.629674 1432 update_check_scheduler.cc:74] Next update check in 8m34s Jul 2 00:23:24.650435 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 2 00:23:24.742094 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 2 00:23:24.753419 systemd[1]: Starting sshkeys.service... Jul 2 00:23:24.782902 extend-filesystems[1457]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 2 00:23:24.782902 extend-filesystems[1457]: old_desc_blocks = 1, new_desc_blocks = 3 Jul 2 00:23:24.782902 extend-filesystems[1457]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long. Jul 2 00:23:24.792684 extend-filesystems[1422]: Resized filesystem in /dev/vda9 Jul 2 00:23:24.797453 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 00:23:24.800687 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 2 00:23:24.828270 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 2 00:23:24.839539 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jul 2 00:23:25.048227 locksmithd[1466]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 00:23:25.109737 containerd[1446]: time="2024-07-02T00:23:25.109638563Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Jul 2 00:23:25.165303 containerd[1446]: time="2024-07-02T00:23:25.165194302Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 2 00:23:25.165405 containerd[1446]: time="2024-07-02T00:23:25.165318444Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:23:25.168114 containerd[1446]: time="2024-07-02T00:23:25.167279352Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.36-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:23:25.168114 containerd[1446]: time="2024-07-02T00:23:25.167322056Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:23:25.168114 containerd[1446]: time="2024-07-02T00:23:25.167670682Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:23:25.168114 containerd[1446]: time="2024-07-02T00:23:25.167691668Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 00:23:25.168114 containerd[1446]: time="2024-07-02T00:23:25.167855492Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Jul 2 00:23:25.168114 containerd[1446]: time="2024-07-02T00:23:25.167935456Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:23:25.168114 containerd[1446]: time="2024-07-02T00:23:25.167953627Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 00:23:25.168330 containerd[1446]: time="2024-07-02T00:23:25.168122690Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:23:25.170067 containerd[1446]: time="2024-07-02T00:23:25.168387001Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 00:23:25.170067 containerd[1446]: time="2024-07-02T00:23:25.168418702Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 00:23:25.170067 containerd[1446]: time="2024-07-02T00:23:25.168432293Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:23:25.170067 containerd[1446]: time="2024-07-02T00:23:25.168551497Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:23:25.170067 containerd[1446]: time="2024-07-02T00:23:25.168570420Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jul 2 00:23:25.170067 containerd[1446]: time="2024-07-02T00:23:25.168640225Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 00:23:25.170067 containerd[1446]: time="2024-07-02T00:23:25.168658808Z" level=info msg="metadata content store policy set" policy=shared Jul 2 00:23:25.179875 containerd[1446]: time="2024-07-02T00:23:25.179823614Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 00:23:25.179875 containerd[1446]: time="2024-07-02T00:23:25.179870165Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 00:23:25.179875 containerd[1446]: time="2024-07-02T00:23:25.179887594Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 00:23:25.180125 containerd[1446]: time="2024-07-02T00:23:25.179927008Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 2 00:23:25.180125 containerd[1446]: time="2024-07-02T00:23:25.179946468Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 2 00:23:25.180125 containerd[1446]: time="2024-07-02T00:23:25.179959760Z" level=info msg="NRI interface is disabled by configuration." Jul 2 00:23:25.180125 containerd[1446]: time="2024-07-02T00:23:25.179974879Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 00:23:25.180235 containerd[1446]: time="2024-07-02T00:23:25.180131898Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 2 00:23:25.180235 containerd[1446]: time="2024-07-02T00:23:25.180154163Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Jul 2 00:23:25.180235 containerd[1446]: time="2024-07-02T00:23:25.180170043Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 2 00:23:25.180235 containerd[1446]: time="2024-07-02T00:23:25.180186874Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 2 00:23:25.180235 containerd[1446]: time="2024-07-02T00:23:25.180204179Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 00:23:25.180235 containerd[1446]: time="2024-07-02T00:23:25.180226494Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 00:23:25.180387 containerd[1446]: time="2024-07-02T00:23:25.180247439Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 00:23:25.180387 containerd[1446]: time="2024-07-02T00:23:25.180264238Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 00:23:25.180387 containerd[1446]: time="2024-07-02T00:23:25.180281564Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 00:23:25.180387 containerd[1446]: time="2024-07-02T00:23:25.180299157Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 00:23:25.180387 containerd[1446]: time="2024-07-02T00:23:25.180315842Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 00:23:25.180387 containerd[1446]: time="2024-07-02T00:23:25.180330466Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Jul 2 00:23:25.180561 containerd[1446]: time="2024-07-02T00:23:25.180469633Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 00:23:25.183119 containerd[1446]: time="2024-07-02T00:23:25.180793962Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 00:23:25.183119 containerd[1446]: time="2024-07-02T00:23:25.180843297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 00:23:25.183119 containerd[1446]: time="2024-07-02T00:23:25.180861571Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 2 00:23:25.183119 containerd[1446]: time="2024-07-02T00:23:25.180891416Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 00:23:25.183119 containerd[1446]: time="2024-07-02T00:23:25.181008752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 00:23:25.183119 containerd[1446]: time="2024-07-02T00:23:25.181031058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 00:23:25.183119 containerd[1446]: time="2024-07-02T00:23:25.181046538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 00:23:25.183119 containerd[1446]: time="2024-07-02T00:23:25.181202979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 00:23:25.183119 containerd[1446]: time="2024-07-02T00:23:25.181222397Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 00:23:25.183119 containerd[1446]: time="2024-07-02T00:23:25.181241351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Jul 2 00:23:25.183119 containerd[1446]: time="2024-07-02T00:23:25.181259502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 00:23:25.183119 containerd[1446]: time="2024-07-02T00:23:25.181275372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 00:23:25.183119 containerd[1446]: time="2024-07-02T00:23:25.181291450Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 00:23:25.183119 containerd[1446]: time="2024-07-02T00:23:25.181443405Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 2 00:23:25.183678 containerd[1446]: time="2024-07-02T00:23:25.181464102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 2 00:23:25.183678 containerd[1446]: time="2024-07-02T00:23:25.181478941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 00:23:25.183678 containerd[1446]: time="2024-07-02T00:23:25.181494658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 2 00:23:25.183678 containerd[1446]: time="2024-07-02T00:23:25.181509271Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 00:23:25.183678 containerd[1446]: time="2024-07-02T00:23:25.181525018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 2 00:23:25.183678 containerd[1446]: time="2024-07-02T00:23:25.181539229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 00:23:25.183678 containerd[1446]: time="2024-07-02T00:23:25.181559998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 2 00:23:25.184248 systemd[1]: Started containerd.service - containerd container runtime. Jul 2 00:23:25.186458 containerd[1446]: time="2024-07-02T00:23:25.181881584Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false 
DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 00:23:25.186458 containerd[1446]: time="2024-07-02T00:23:25.181961641Z" level=info msg="Connect containerd service" Jul 2 00:23:25.186458 containerd[1446]: time="2024-07-02T00:23:25.181988587Z" level=info msg="using legacy CRI server" Jul 2 00:23:25.186458 containerd[1446]: time="2024-07-02T00:23:25.181999096Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 2 00:23:25.186458 containerd[1446]: time="2024-07-02T00:23:25.182130395Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 00:23:25.186458 containerd[1446]: time="2024-07-02T00:23:25.183038743Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 00:23:25.186458 containerd[1446]: time="2024-07-02T00:23:25.183167702Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 00:23:25.186458 containerd[1446]: time="2024-07-02T00:23:25.183194082Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 2 00:23:25.186458 containerd[1446]: time="2024-07-02T00:23:25.183272931Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 00:23:25.186458 containerd[1446]: time="2024-07-02T00:23:25.183291288Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 2 00:23:25.186458 containerd[1446]: time="2024-07-02T00:23:25.183236332Z" level=info msg="Start subscribing containerd event" Jul 2 00:23:25.186458 containerd[1446]: time="2024-07-02T00:23:25.183395980Z" level=info msg="Start recovering state" Jul 2 00:23:25.186458 containerd[1446]: time="2024-07-02T00:23:25.183459464Z" level=info msg="Start event monitor" Jul 2 00:23:25.186458 containerd[1446]: time="2024-07-02T00:23:25.183477635Z" level=info msg="Start snapshots syncer" Jul 2 00:23:25.186458 containerd[1446]: time="2024-07-02T00:23:25.183487504Z" level=info msg="Start cni network conf syncer for default" Jul 2 00:23:25.186458 containerd[1446]: time="2024-07-02T00:23:25.183496528Z" level=info msg="Start streaming server" Jul 2 00:23:25.186458 containerd[1446]: time="2024-07-02T00:23:25.183991705Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 00:23:25.186458 containerd[1446]: time="2024-07-02T00:23:25.184048413Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 00:23:25.195230 containerd[1446]: time="2024-07-02T00:23:25.194160165Z" level=info msg="containerd successfully booted in 0.086186s" Jul 2 00:23:25.375387 systemd-networkd[1370]: eth0: Gained IPv6LL Jul 2 00:23:25.381434 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 2 00:23:25.388171 systemd[1]: Reached target network-online.target - Network is Online. 
Jul 2 00:23:25.400980 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:23:25.410002 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 2 00:23:25.478634 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 2 00:23:25.485121 tar[1437]: linux-amd64/LICENSE Jul 2 00:23:25.485219 tar[1437]: linux-amd64/README.md Jul 2 00:23:25.501117 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 2 00:23:25.521982 sshd_keygen[1452]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 00:23:25.549622 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 2 00:23:25.561526 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 2 00:23:25.568739 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 00:23:25.569110 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 2 00:23:25.584103 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 2 00:23:25.598370 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 2 00:23:25.610578 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 2 00:23:25.620753 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 2 00:23:25.623459 systemd[1]: Reached target getty.target - Login Prompts. Jul 2 00:23:27.211802 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:23:27.226834 (kubelet)[1533]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:23:27.255494 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 2 00:23:27.270040 systemd[1]: Started sshd@0-172.24.4.162:22-172.24.4.1:55668.service - OpenSSH per-connection server daemon (172.24.4.1:55668). 
Jul 2 00:23:28.488791 sshd[1535]: Accepted publickey for core from 172.24.4.1 port 55668 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado
Jul 2 00:23:28.515736 sshd[1535]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:23:28.551719 systemd-logind[1429]: New session 1 of user core.
Jul 2 00:23:28.556038 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 2 00:23:28.571014 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 2 00:23:28.616047 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 2 00:23:28.630114 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 2 00:23:28.656944 (systemd)[1546]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:23:28.790953 systemd[1546]: Queued start job for default target default.target.
Jul 2 00:23:28.795411 systemd[1546]: Created slice app.slice - User Application Slice.
Jul 2 00:23:28.795444 systemd[1546]: Reached target paths.target - Paths.
Jul 2 00:23:28.795460 systemd[1546]: Reached target timers.target - Timers.
Jul 2 00:23:28.800236 systemd[1546]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 2 00:23:28.811521 systemd[1546]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 2 00:23:28.811767 systemd[1546]: Reached target sockets.target - Sockets.
Jul 2 00:23:28.811878 systemd[1546]: Reached target basic.target - Basic System.
Jul 2 00:23:28.811995 systemd[1546]: Reached target default.target - Main User Target.
Jul 2 00:23:28.812026 systemd[1546]: Startup finished in 148ms.
Jul 2 00:23:28.812289 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 2 00:23:28.821493 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 2 00:23:29.178756 systemd[1]: Started sshd@1-172.24.4.162:22-172.24.4.1:55682.service - OpenSSH per-connection server daemon (172.24.4.1:55682).
Jul 2 00:23:29.298971 kubelet[1533]: E0702 00:23:29.298827 1533 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:23:29.302413 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:23:29.303357 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:23:29.304625 systemd[1]: kubelet.service: Consumed 2.013s CPU time.
Jul 2 00:23:30.668213 login[1525]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jul 2 00:23:30.679753 systemd-logind[1429]: New session 2 of user core.
Jul 2 00:23:30.681637 login[1526]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jul 2 00:23:30.694554 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 2 00:23:30.703027 systemd-logind[1429]: New session 3 of user core.
Jul 2 00:23:30.721525 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 2 00:23:31.135916 sshd[1558]: Accepted publickey for core from 172.24.4.1 port 55682 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado
Jul 2 00:23:31.138701 sshd[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:23:31.150170 systemd-logind[1429]: New session 4 of user core.
Jul 2 00:23:31.157535 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 2 00:23:31.430623 coreos-metadata[1417]: Jul 02 00:23:31.430 WARN failed to locate config-drive, using the metadata service API instead
Jul 2 00:23:31.478392 coreos-metadata[1417]: Jul 02 00:23:31.478 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Jul 2 00:23:31.760945 sshd[1558]: pam_unix(sshd:session): session closed for user core
Jul 2 00:23:31.776000 systemd[1]: sshd@1-172.24.4.162:22-172.24.4.1:55682.service: Deactivated successfully.
Jul 2 00:23:31.779519 systemd[1]: session-4.scope: Deactivated successfully.
Jul 2 00:23:31.783448 systemd-logind[1429]: Session 4 logged out. Waiting for processes to exit.
Jul 2 00:23:31.791336 systemd[1]: Started sshd@2-172.24.4.162:22-172.24.4.1:55686.service - OpenSSH per-connection server daemon (172.24.4.1:55686).
Jul 2 00:23:31.794596 systemd-logind[1429]: Removed session 4.
Jul 2 00:23:31.827986 coreos-metadata[1417]: Jul 02 00:23:31.827 INFO Fetch successful
Jul 2 00:23:31.827986 coreos-metadata[1417]: Jul 02 00:23:31.827 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jul 2 00:23:31.845136 coreos-metadata[1417]: Jul 02 00:23:31.845 INFO Fetch successful
Jul 2 00:23:31.845136 coreos-metadata[1417]: Jul 02 00:23:31.845 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Jul 2 00:23:31.863818 coreos-metadata[1417]: Jul 02 00:23:31.863 INFO Fetch successful
Jul 2 00:23:31.863818 coreos-metadata[1417]: Jul 02 00:23:31.863 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Jul 2 00:23:31.880191 coreos-metadata[1417]: Jul 02 00:23:31.880 INFO Fetch successful
Jul 2 00:23:31.880191 coreos-metadata[1417]: Jul 02 00:23:31.880 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Jul 2 00:23:31.896575 coreos-metadata[1417]: Jul 02 00:23:31.896 INFO Fetch successful
Jul 2 00:23:31.896575 coreos-metadata[1417]: Jul 02 00:23:31.896 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Jul 2 00:23:31.912291 coreos-metadata[1417]: Jul 02 00:23:31.912 INFO Fetch successful
Jul 2 00:23:31.946873 coreos-metadata[1482]: Jul 02 00:23:31.946 WARN failed to locate config-drive, using the metadata service API instead
Jul 2 00:23:31.955916 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jul 2 00:23:31.960365 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 2 00:23:31.993060 coreos-metadata[1482]: Jul 02 00:23:31.992 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Jul 2 00:23:32.007042 coreos-metadata[1482]: Jul 02 00:23:32.006 INFO Fetch successful
Jul 2 00:23:32.007142 coreos-metadata[1482]: Jul 02 00:23:32.007 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Jul 2 00:23:32.019166 coreos-metadata[1482]: Jul 02 00:23:32.019 INFO Fetch successful
Jul 2 00:23:32.026140 unknown[1482]: wrote ssh authorized keys file for user: core
Jul 2 00:23:32.068827 update-ssh-keys[1595]: Updated "/home/core/.ssh/authorized_keys"
Jul 2 00:23:32.069765 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jul 2 00:23:32.073870 systemd[1]: Finished sshkeys.service.
Jul 2 00:23:32.075901 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 2 00:23:32.079272 systemd[1]: Startup finished in 1.184s (kernel) + 16.165s (initrd) + 11.181s (userspace) = 28.531s.
Jul 2 00:23:33.139396 sshd[1586]: Accepted publickey for core from 172.24.4.1 port 55686 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado
Jul 2 00:23:33.142435 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:23:33.152177 systemd-logind[1429]: New session 5 of user core.
Jul 2 00:23:33.163394 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 2 00:23:33.988475 sshd[1586]: pam_unix(sshd:session): session closed for user core
Jul 2 00:23:33.994609 systemd[1]: sshd@2-172.24.4.162:22-172.24.4.1:55686.service: Deactivated successfully.
Jul 2 00:23:33.997697 systemd[1]: session-5.scope: Deactivated successfully.
Jul 2 00:23:33.999651 systemd-logind[1429]: Session 5 logged out. Waiting for processes to exit.
Jul 2 00:23:34.001720 systemd-logind[1429]: Removed session 5.
Jul 2 00:23:39.387275 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 2 00:23:39.401502 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:23:39.839189 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:23:39.854602 (kubelet)[1611]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 00:23:40.312959 kubelet[1611]: E0702 00:23:40.312832 1611 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:23:40.321332 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:23:40.321853 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:23:44.068825 systemd[1]: Started sshd@3-172.24.4.162:22-172.24.4.1:48568.service - OpenSSH per-connection server daemon (172.24.4.1:48568).
Jul 2 00:23:45.396378 sshd[1620]: Accepted publickey for core from 172.24.4.1 port 48568 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado
Jul 2 00:23:45.399331 sshd[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:23:45.410159 systemd-logind[1429]: New session 6 of user core.
Jul 2 00:23:45.420791 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 2 00:23:46.253117 sshd[1620]: pam_unix(sshd:session): session closed for user core
Jul 2 00:23:46.261622 systemd[1]: sshd@3-172.24.4.162:22-172.24.4.1:48568.service: Deactivated successfully.
Jul 2 00:23:46.263699 systemd[1]: session-6.scope: Deactivated successfully.
Jul 2 00:23:46.266299 systemd-logind[1429]: Session 6 logged out. Waiting for processes to exit.
Jul 2 00:23:46.274512 systemd[1]: Started sshd@4-172.24.4.162:22-172.24.4.1:60668.service - OpenSSH per-connection server daemon (172.24.4.1:60668).
Jul 2 00:23:46.278146 systemd-logind[1429]: Removed session 6.
Jul 2 00:23:47.769220 sshd[1627]: Accepted publickey for core from 172.24.4.1 port 60668 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado
Jul 2 00:23:47.772238 sshd[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:23:47.781823 systemd-logind[1429]: New session 7 of user core.
Jul 2 00:23:47.796406 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 2 00:23:48.541963 sshd[1627]: pam_unix(sshd:session): session closed for user core
Jul 2 00:23:48.553889 systemd[1]: sshd@4-172.24.4.162:22-172.24.4.1:60668.service: Deactivated successfully.
Jul 2 00:23:48.558109 systemd[1]: session-7.scope: Deactivated successfully.
Jul 2 00:23:48.560448 systemd-logind[1429]: Session 7 logged out. Waiting for processes to exit.
Jul 2 00:23:48.572694 systemd[1]: Started sshd@5-172.24.4.162:22-172.24.4.1:60684.service - OpenSSH per-connection server daemon (172.24.4.1:60684).
Jul 2 00:23:48.576130 systemd-logind[1429]: Removed session 7.
Jul 2 00:23:50.295343 sshd[1634]: Accepted publickey for core from 172.24.4.1 port 60684 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado
Jul 2 00:23:50.297830 sshd[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:23:50.308484 systemd-logind[1429]: New session 8 of user core.
Jul 2 00:23:50.317377 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 2 00:23:50.322336 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 2 00:23:50.331496 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:23:50.678915 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:23:50.684151 (kubelet)[1645]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 00:23:50.856766 kubelet[1645]: E0702 00:23:50.856656 1645 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:23:50.861428 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:23:50.861775 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:23:51.057515 sshd[1634]: pam_unix(sshd:session): session closed for user core
Jul 2 00:23:51.071493 systemd[1]: sshd@5-172.24.4.162:22-172.24.4.1:60684.service: Deactivated successfully.
Jul 2 00:23:51.074942 systemd[1]: session-8.scope: Deactivated successfully.
Jul 2 00:23:51.079429 systemd-logind[1429]: Session 8 logged out. Waiting for processes to exit.
Jul 2 00:23:51.085597 systemd[1]: Started sshd@6-172.24.4.162:22-172.24.4.1:60688.service - OpenSSH per-connection server daemon (172.24.4.1:60688).
Jul 2 00:23:51.089020 systemd-logind[1429]: Removed session 8.
Jul 2 00:23:52.533977 sshd[1658]: Accepted publickey for core from 172.24.4.1 port 60688 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado
Jul 2 00:23:52.536909 sshd[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:23:52.548102 systemd-logind[1429]: New session 9 of user core.
Jul 2 00:23:52.554408 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 2 00:23:53.077323 sudo[1661]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 2 00:23:53.078100 sudo[1661]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 00:23:53.095751 sudo[1661]: pam_unix(sudo:session): session closed for user root
Jul 2 00:23:53.386184 sshd[1658]: pam_unix(sshd:session): session closed for user core
Jul 2 00:23:53.399605 systemd[1]: sshd@6-172.24.4.162:22-172.24.4.1:60688.service: Deactivated successfully.
Jul 2 00:23:53.402489 systemd[1]: session-9.scope: Deactivated successfully.
Jul 2 00:23:53.404464 systemd-logind[1429]: Session 9 logged out. Waiting for processes to exit.
Jul 2 00:23:53.415614 systemd[1]: Started sshd@7-172.24.4.162:22-172.24.4.1:60694.service - OpenSSH per-connection server daemon (172.24.4.1:60694).
Jul 2 00:23:53.418014 systemd-logind[1429]: Removed session 9.
Jul 2 00:23:54.954600 sshd[1666]: Accepted publickey for core from 172.24.4.1 port 60694 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado
Jul 2 00:23:54.957274 sshd[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:23:54.968670 systemd-logind[1429]: New session 10 of user core.
Jul 2 00:23:54.976386 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 2 00:23:55.467215 sudo[1670]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 2 00:23:55.467829 sudo[1670]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 00:23:55.475779 sudo[1670]: pam_unix(sudo:session): session closed for user root
Jul 2 00:23:55.487580 sudo[1669]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jul 2 00:23:55.488239 sudo[1669]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 00:23:55.514392 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jul 2 00:23:55.532615 auditctl[1673]: No rules
Jul 2 00:23:55.533751 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 2 00:23:55.534196 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jul 2 00:23:55.542766 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 2 00:23:55.607927 augenrules[1691]: No rules
Jul 2 00:23:55.609265 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 2 00:23:55.612583 sudo[1669]: pam_unix(sudo:session): session closed for user root
Jul 2 00:23:55.932289 sshd[1666]: pam_unix(sshd:session): session closed for user core
Jul 2 00:23:55.946333 systemd[1]: sshd@7-172.24.4.162:22-172.24.4.1:60694.service: Deactivated successfully.
Jul 2 00:23:55.950584 systemd[1]: session-10.scope: Deactivated successfully.
Jul 2 00:23:55.954447 systemd-logind[1429]: Session 10 logged out. Waiting for processes to exit.
Jul 2 00:23:55.963796 systemd[1]: Started sshd@8-172.24.4.162:22-172.24.4.1:49704.service - OpenSSH per-connection server daemon (172.24.4.1:49704).
Jul 2 00:23:55.967690 systemd-logind[1429]: Removed session 10.
Jul 2 00:23:57.394212 sshd[1699]: Accepted publickey for core from 172.24.4.1 port 49704 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado
Jul 2 00:23:57.397152 sshd[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:23:57.408480 systemd-logind[1429]: New session 11 of user core.
Jul 2 00:23:57.416386 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 2 00:23:57.824684 sudo[1702]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 2 00:23:57.826844 sudo[1702]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 00:23:58.144560 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 2 00:23:58.146158 (dockerd)[1712]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 2 00:23:58.596193 dockerd[1712]: time="2024-07-02T00:23:58.595417915Z" level=info msg="Starting up"
Jul 2 00:23:58.624088 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2183575875-merged.mount: Deactivated successfully.
Jul 2 00:23:58.671745 dockerd[1712]: time="2024-07-02T00:23:58.671696345Z" level=info msg="Loading containers: start."
Jul 2 00:23:58.897136 kernel: Initializing XFRM netlink socket
Jul 2 00:23:59.044467 systemd-networkd[1370]: docker0: Link UP
Jul 2 00:23:59.064110 dockerd[1712]: time="2024-07-02T00:23:59.063670689Z" level=info msg="Loading containers: done."
Jul 2 00:23:59.193934 dockerd[1712]: time="2024-07-02T00:23:59.193694239Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 2 00:23:59.194227 dockerd[1712]: time="2024-07-02T00:23:59.193956714Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Jul 2 00:23:59.194633 dockerd[1712]: time="2024-07-02T00:23:59.194303537Z" level=info msg="Daemon has completed initialization"
Jul 2 00:23:59.238623 dockerd[1712]: time="2024-07-02T00:23:59.237710357Z" level=info msg="API listen on /run/docker.sock"
Jul 2 00:23:59.238473 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 2 00:23:59.620667 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3013179676-merged.mount: Deactivated successfully.
Jul 2 00:24:00.887343 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul 2 00:24:00.894981 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:24:01.185575 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:24:01.186304 (kubelet)[1848]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 00:24:01.439206 kubelet[1848]: E0702 00:24:01.438872 1848 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:24:01.443660 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:24:01.444148 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:24:01.824261 containerd[1446]: time="2024-07-02T00:24:01.823627705Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\""
Jul 2 00:24:02.717210 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount8565181.mount: Deactivated successfully.
Jul 2 00:24:05.173997 containerd[1446]: time="2024-07-02T00:24:05.173865288Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:24:05.175827 containerd[1446]: time="2024-07-02T00:24:05.175769199Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.2: active requests=0, bytes read=32771809"
Jul 2 00:24:05.177162 containerd[1446]: time="2024-07-02T00:24:05.177108624Z" level=info msg="ImageCreate event name:\"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:24:05.182074 containerd[1446]: time="2024-07-02T00:24:05.180797485Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:24:05.182074 containerd[1446]: time="2024-07-02T00:24:05.181957661Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.2\" with image id \"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\", size \"32768601\" in 3.358282027s"
Jul 2 00:24:05.182074 containerd[1446]: time="2024-07-02T00:24:05.181993782Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\" returns image reference \"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\""
Jul 2 00:24:05.206611 containerd[1446]: time="2024-07-02T00:24:05.206537733Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\""
Jul 2 00:24:07.503886 containerd[1446]: time="2024-07-02T00:24:07.503780327Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:24:07.505552 containerd[1446]: time="2024-07-02T00:24:07.505290598Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.2: active requests=0, bytes read=29588682"
Jul 2 00:24:07.506803 containerd[1446]: time="2024-07-02T00:24:07.506730404Z" level=info msg="ImageCreate event name:\"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:24:07.510447 containerd[1446]: time="2024-07-02T00:24:07.510395911Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:24:07.511809 containerd[1446]: time="2024-07-02T00:24:07.511689960Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.2\" with image id \"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\", size \"31138657\" in 2.304911425s"
Jul 2 00:24:07.511809 containerd[1446]: time="2024-07-02T00:24:07.511725207Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\" returns image reference \"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\""
Jul 2 00:24:07.539144 containerd[1446]: time="2024-07-02T00:24:07.539031800Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\""
Jul 2 00:24:09.237607 containerd[1446]: time="2024-07-02T00:24:09.237427937Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:24:09.239085 containerd[1446]: time="2024-07-02T00:24:09.238926908Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.2: active requests=0, bytes read=17778128"
Jul 2 00:24:09.240750 containerd[1446]: time="2024-07-02T00:24:09.240691049Z" level=info msg="ImageCreate event name:\"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:24:09.246029 containerd[1446]: time="2024-07-02T00:24:09.245976067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:24:09.249480 containerd[1446]: time="2024-07-02T00:24:09.248801509Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.2\" with image id \"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\", size \"19328121\" in 1.709687981s"
Jul 2 00:24:09.249480 containerd[1446]: time="2024-07-02T00:24:09.248894740Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\" returns image reference \"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\""
Jul 2 00:24:09.273667 containerd[1446]: time="2024-07-02T00:24:09.273620437Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\""
Jul 2 00:24:09.560627 update_engine[1432]: I0702 00:24:09.559798 1432 update_attempter.cc:509] Updating boot flags...
Jul 2 00:24:09.627158 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1944)
Jul 2 00:24:09.702127 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1945)
Jul 2 00:24:09.769423 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1945)
Jul 2 00:24:11.003806 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount728896012.mount: Deactivated successfully.
Jul 2 00:24:11.636725 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jul 2 00:24:11.642536 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:24:12.701244 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:24:12.717997 (kubelet)[1968]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 00:24:12.917495 containerd[1446]: time="2024-07-02T00:24:12.917320177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:24:12.920904 containerd[1446]: time="2024-07-02T00:24:12.920152537Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.2: active requests=0, bytes read=29035446"
Jul 2 00:24:12.922249 containerd[1446]: time="2024-07-02T00:24:12.921963891Z" level=info msg="ImageCreate event name:\"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:24:12.930631 containerd[1446]: time="2024-07-02T00:24:12.930465489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:24:12.934699 containerd[1446]: time="2024-07-02T00:24:12.932583971Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.2\" with image id \"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\", repo tag \"registry.k8s.io/kube-proxy:v1.30.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\", size \"29034457\" in 3.658896833s"
Jul 2 00:24:12.934699 containerd[1446]: time="2024-07-02T00:24:12.932687551Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\" returns image reference \"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\""
Jul 2 00:24:12.980794 kubelet[1968]: E0702 00:24:12.980551 1968 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:24:12.986190 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:24:12.986336 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:24:12.996153 containerd[1446]: time="2024-07-02T00:24:12.995850830Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jul 2 00:24:13.745385 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1264984727.mount: Deactivated successfully.
Jul 2 00:24:15.149789 containerd[1446]: time="2024-07-02T00:24:15.149668768Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:24:15.154416 containerd[1446]: time="2024-07-02T00:24:15.154283687Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769"
Jul 2 00:24:15.157554 containerd[1446]: time="2024-07-02T00:24:15.157478723Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:24:15.165352 containerd[1446]: time="2024-07-02T00:24:15.165297907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:24:15.169550 containerd[1446]: time="2024-07-02T00:24:15.169015950Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.173101316s"
Jul 2 00:24:15.169550 containerd[1446]: time="2024-07-02T00:24:15.169195273Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jul 2 00:24:15.202800 containerd[1446]: time="2024-07-02T00:24:15.202736669Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jul 2 00:24:16.015028 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3035776894.mount: Deactivated successfully.
Jul 2 00:24:16.024907 containerd[1446]: time="2024-07-02T00:24:16.024693570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:24:16.027118 containerd[1446]: time="2024-07-02T00:24:16.026952519Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298"
Jul 2 00:24:16.028824 containerd[1446]: time="2024-07-02T00:24:16.028674545Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:24:16.035378 containerd[1446]: time="2024-07-02T00:24:16.035316599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:24:16.038142 containerd[1446]: time="2024-07-02T00:24:16.037713864Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 834.893622ms"
Jul 2 00:24:16.038142 containerd[1446]: time="2024-07-02T00:24:16.037794520Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jul 2 00:24:16.077926 containerd[1446]: time="2024-07-02T00:24:16.077852515Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Jul 2 00:24:16.714760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1250764536.mount: Deactivated successfully.
Jul 2 00:24:19.881836 containerd[1446]: time="2024-07-02T00:24:19.881607653Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:19.883229 containerd[1446]: time="2024-07-02T00:24:19.883189742Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579" Jul 2 00:24:19.883832 containerd[1446]: time="2024-07-02T00:24:19.883756271Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:19.888220 containerd[1446]: time="2024-07-02T00:24:19.888183713Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:19.893821 containerd[1446]: time="2024-07-02T00:24:19.893658108Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.815729466s" Jul 2 00:24:19.893821 containerd[1446]: time="2024-07-02T00:24:19.893705364Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jul 2 00:24:23.136764 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jul 2 00:24:23.146567 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:24:23.559574 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 2 00:24:23.568627 (kubelet)[2150]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:24:23.613344 kubelet[2150]: E0702 00:24:23.613191 2150 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:24:23.615249 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:24:23.615495 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:24:25.140566 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:24:25.149365 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:24:25.194667 systemd[1]: Reloading requested from client PID 2166 ('systemctl') (unit session-11.scope)... Jul 2 00:24:25.194722 systemd[1]: Reloading... Jul 2 00:24:25.282120 zram_generator::config[2200]: No configuration found. Jul 2 00:24:25.685678 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:24:25.785182 systemd[1]: Reloading finished in 589 ms. Jul 2 00:24:25.838863 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 2 00:24:25.839160 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 2 00:24:25.839543 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:24:25.845459 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:24:25.998127 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 2 00:24:26.018418 (kubelet)[2268]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 00:24:26.322811 kubelet[2268]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:24:26.322811 kubelet[2268]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 00:24:26.322811 kubelet[2268]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:24:26.326103 kubelet[2268]: I0702 00:24:26.325741 2268 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 00:24:26.768150 kubelet[2268]: I0702 00:24:26.768088 2268 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jul 2 00:24:26.768150 kubelet[2268]: I0702 00:24:26.768135 2268 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 00:24:26.768375 kubelet[2268]: I0702 00:24:26.768356 2268 server.go:927] "Client rotation is on, will bootstrap in background" Jul 2 00:24:27.606450 kubelet[2268]: I0702 00:24:27.605139 2268 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:24:27.606450 kubelet[2268]: E0702 00:24:27.606358 2268 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
"https://172.24.4.162:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.162:6443: connect: connection refused Jul 2 00:24:27.646116 kubelet[2268]: I0702 00:24:27.645015 2268 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 2 00:24:27.646116 kubelet[2268]: I0702 00:24:27.645540 2268 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 00:24:27.646116 kubelet[2268]: I0702 00:24:27.645602 2268 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3975-1-1-5-578c77618a.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReserved
Memory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 00:24:27.646116 kubelet[2268]: I0702 00:24:27.646006 2268 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 00:24:27.646585 kubelet[2268]: I0702 00:24:27.646031 2268 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 00:24:27.646915 kubelet[2268]: I0702 00:24:27.646886 2268 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:24:27.649205 kubelet[2268]: I0702 00:24:27.649175 2268 kubelet.go:400] "Attempting to sync node with API server" Jul 2 00:24:27.649380 kubelet[2268]: I0702 00:24:27.649358 2268 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 00:24:27.649536 kubelet[2268]: I0702 00:24:27.649516 2268 kubelet.go:312] "Adding apiserver pod source" Jul 2 00:24:27.649702 kubelet[2268]: I0702 00:24:27.649680 2268 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 00:24:27.656361 kubelet[2268]: W0702 00:24:27.656271 2268 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.162:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975-1-1-5-578c77618a.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.162:6443: connect: connection refused Jul 2 00:24:27.656683 kubelet[2268]: E0702 00:24:27.656622 2268 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.162:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975-1-1-5-578c77618a.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.162:6443: connect: connection refused Jul 2 00:24:27.657088 kubelet[2268]: W0702 00:24:27.656986 2268 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.162:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 
172.24.4.162:6443: connect: connection refused Jul 2 00:24:27.657881 kubelet[2268]: E0702 00:24:27.657287 2268 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.162:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.162:6443: connect: connection refused Jul 2 00:24:27.657881 kubelet[2268]: I0702 00:24:27.657475 2268 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 00:24:27.663133 kubelet[2268]: I0702 00:24:27.661860 2268 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 00:24:27.663133 kubelet[2268]: W0702 00:24:27.661976 2268 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 2 00:24:27.664039 kubelet[2268]: I0702 00:24:27.664006 2268 server.go:1264] "Started kubelet" Jul 2 00:24:27.677608 kubelet[2268]: I0702 00:24:27.676312 2268 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 00:24:27.678397 kubelet[2268]: I0702 00:24:27.678303 2268 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 00:24:27.679208 kubelet[2268]: I0702 00:24:27.679175 2268 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 00:24:27.679425 kubelet[2268]: I0702 00:24:27.678428 2268 server.go:455] "Adding debug handlers to kubelet server" Jul 2 00:24:27.682627 kubelet[2268]: E0702 00:24:27.682413 2268 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.162:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.162:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3975-1-1-5-578c77618a.novalocal.17de3da09874be0f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3975-1-1-5-578c77618a.novalocal,UID:ci-3975-1-1-5-578c77618a.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3975-1-1-5-578c77618a.novalocal,},FirstTimestamp:2024-07-02 00:24:27.663957519 +0000 UTC m=+1.641723188,LastTimestamp:2024-07-02 00:24:27.663957519 +0000 UTC m=+1.641723188,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3975-1-1-5-578c77618a.novalocal,}" Jul 2 00:24:27.683830 kubelet[2268]: I0702 00:24:27.683784 2268 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 00:24:27.684355 kubelet[2268]: I0702 00:24:27.684327 2268 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 00:24:27.690316 kubelet[2268]: I0702 00:24:27.690232 2268 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jul 2 00:24:27.690745 kubelet[2268]: I0702 00:24:27.690723 2268 reconciler.go:26] "Reconciler: start to sync state" Jul 2 00:24:27.692258 kubelet[2268]: W0702 00:24:27.692145 2268 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.162:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.162:6443: connect: connection refused Jul 2 00:24:27.692543 kubelet[2268]: E0702 00:24:27.692441 2268 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.162:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.162:6443: connect: connection refused Jul 2 00:24:27.694896 kubelet[2268]: E0702 00:24:27.694858 2268 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975-1-1-5-578c77618a.novalocal\" not found" Jul 2 00:24:27.695807 kubelet[2268]: E0702 
00:24:27.695550 2268 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.162:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975-1-1-5-578c77618a.novalocal?timeout=10s\": dial tcp 172.24.4.162:6443: connect: connection refused" interval="200ms" Jul 2 00:24:27.706133 kubelet[2268]: I0702 00:24:27.704624 2268 factory.go:221] Registration of the systemd container factory successfully Jul 2 00:24:27.706133 kubelet[2268]: I0702 00:24:27.704945 2268 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 00:24:27.708163 kubelet[2268]: E0702 00:24:27.708042 2268 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 00:24:27.708828 kubelet[2268]: I0702 00:24:27.708783 2268 factory.go:221] Registration of the containerd container factory successfully Jul 2 00:24:27.740145 kubelet[2268]: I0702 00:24:27.740124 2268 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 00:24:27.740245 kubelet[2268]: I0702 00:24:27.740236 2268 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 00:24:27.740328 kubelet[2268]: I0702 00:24:27.740320 2268 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:24:27.744793 kubelet[2268]: I0702 00:24:27.744764 2268 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 00:24:27.746022 kubelet[2268]: I0702 00:24:27.746002 2268 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 00:24:27.746179 kubelet[2268]: I0702 00:24:27.746166 2268 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 00:24:27.746253 kubelet[2268]: I0702 00:24:27.746245 2268 kubelet.go:2337] "Starting kubelet main sync loop" Jul 2 00:24:27.746350 kubelet[2268]: E0702 00:24:27.746332 2268 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 00:24:27.747252 kubelet[2268]: I0702 00:24:27.747239 2268 policy_none.go:49] "None policy: Start" Jul 2 00:24:27.748119 kubelet[2268]: I0702 00:24:27.748101 2268 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 00:24:27.748196 kubelet[2268]: I0702 00:24:27.748188 2268 state_mem.go:35] "Initializing new in-memory state store" Jul 2 00:24:27.761248 kubelet[2268]: W0702 00:24:27.761161 2268 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.162:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.162:6443: connect: connection refused Jul 2 00:24:27.761248 kubelet[2268]: E0702 00:24:27.761240 2268 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.162:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.162:6443: connect: connection refused Jul 2 00:24:27.764960 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 2 00:24:27.778323 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 2 00:24:27.794226 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jul 2 00:24:27.795609 kubelet[2268]: I0702 00:24:27.795575 2268 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 00:24:27.795825 kubelet[2268]: I0702 00:24:27.795780 2268 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 2 00:24:27.796006 kubelet[2268]: I0702 00:24:27.795928 2268 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 00:24:27.798670 kubelet[2268]: E0702 00:24:27.798613 2268 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3975-1-1-5-578c77618a.novalocal\" not found" Jul 2 00:24:27.799393 kubelet[2268]: I0702 00:24:27.798971 2268 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975-1-1-5-578c77618a.novalocal" Jul 2 00:24:27.799393 kubelet[2268]: E0702 00:24:27.799366 2268 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.162:6443/api/v1/nodes\": dial tcp 172.24.4.162:6443: connect: connection refused" node="ci-3975-1-1-5-578c77618a.novalocal" Jul 2 00:24:27.847690 kubelet[2268]: I0702 00:24:27.847403 2268 topology_manager.go:215] "Topology Admit Handler" podUID="f5ee9e7cb525f8b1a7d683090415beee" podNamespace="kube-system" podName="kube-scheduler-ci-3975-1-1-5-578c77618a.novalocal" Jul 2 00:24:27.849091 kubelet[2268]: I0702 00:24:27.849070 2268 topology_manager.go:215] "Topology Admit Handler" podUID="0342299205a246e9bd0dd39e89cc107e" podNamespace="kube-system" podName="kube-apiserver-ci-3975-1-1-5-578c77618a.novalocal" Jul 2 00:24:27.850640 kubelet[2268]: I0702 00:24:27.850557 2268 topology_manager.go:215] "Topology Admit Handler" podUID="8f1c9b39988cada6698504295a435395" podNamespace="kube-system" podName="kube-controller-manager-ci-3975-1-1-5-578c77618a.novalocal" Jul 2 00:24:27.859032 systemd[1]: Created slice kubepods-burstable-podf5ee9e7cb525f8b1a7d683090415beee.slice - 
libcontainer container kubepods-burstable-podf5ee9e7cb525f8b1a7d683090415beee.slice. Jul 2 00:24:27.875868 systemd[1]: Created slice kubepods-burstable-pod0342299205a246e9bd0dd39e89cc107e.slice - libcontainer container kubepods-burstable-pod0342299205a246e9bd0dd39e89cc107e.slice. Jul 2 00:24:27.880381 systemd[1]: Created slice kubepods-burstable-pod8f1c9b39988cada6698504295a435395.slice - libcontainer container kubepods-burstable-pod8f1c9b39988cada6698504295a435395.slice. Jul 2 00:24:27.892616 kubelet[2268]: I0702 00:24:27.892377 2268 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8f1c9b39988cada6698504295a435395-flexvolume-dir\") pod \"kube-controller-manager-ci-3975-1-1-5-578c77618a.novalocal\" (UID: \"8f1c9b39988cada6698504295a435395\") " pod="kube-system/kube-controller-manager-ci-3975-1-1-5-578c77618a.novalocal" Jul 2 00:24:27.892616 kubelet[2268]: I0702 00:24:27.892411 2268 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8f1c9b39988cada6698504295a435395-ca-certs\") pod \"kube-controller-manager-ci-3975-1-1-5-578c77618a.novalocal\" (UID: \"8f1c9b39988cada6698504295a435395\") " pod="kube-system/kube-controller-manager-ci-3975-1-1-5-578c77618a.novalocal" Jul 2 00:24:27.892616 kubelet[2268]: I0702 00:24:27.892435 2268 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8f1c9b39988cada6698504295a435395-k8s-certs\") pod \"kube-controller-manager-ci-3975-1-1-5-578c77618a.novalocal\" (UID: \"8f1c9b39988cada6698504295a435395\") " pod="kube-system/kube-controller-manager-ci-3975-1-1-5-578c77618a.novalocal" Jul 2 00:24:27.892616 kubelet[2268]: I0702 00:24:27.892454 2268 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8f1c9b39988cada6698504295a435395-kubeconfig\") pod \"kube-controller-manager-ci-3975-1-1-5-578c77618a.novalocal\" (UID: \"8f1c9b39988cada6698504295a435395\") " pod="kube-system/kube-controller-manager-ci-3975-1-1-5-578c77618a.novalocal" Jul 2 00:24:27.892761 kubelet[2268]: I0702 00:24:27.892475 2268 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8f1c9b39988cada6698504295a435395-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975-1-1-5-578c77618a.novalocal\" (UID: \"8f1c9b39988cada6698504295a435395\") " pod="kube-system/kube-controller-manager-ci-3975-1-1-5-578c77618a.novalocal" Jul 2 00:24:27.892761 kubelet[2268]: I0702 00:24:27.892494 2268 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f5ee9e7cb525f8b1a7d683090415beee-kubeconfig\") pod \"kube-scheduler-ci-3975-1-1-5-578c77618a.novalocal\" (UID: \"f5ee9e7cb525f8b1a7d683090415beee\") " pod="kube-system/kube-scheduler-ci-3975-1-1-5-578c77618a.novalocal" Jul 2 00:24:27.892761 kubelet[2268]: I0702 00:24:27.892511 2268 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0342299205a246e9bd0dd39e89cc107e-ca-certs\") pod \"kube-apiserver-ci-3975-1-1-5-578c77618a.novalocal\" (UID: \"0342299205a246e9bd0dd39e89cc107e\") " pod="kube-system/kube-apiserver-ci-3975-1-1-5-578c77618a.novalocal" Jul 2 00:24:27.892761 kubelet[2268]: I0702 00:24:27.892527 2268 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0342299205a246e9bd0dd39e89cc107e-k8s-certs\") pod \"kube-apiserver-ci-3975-1-1-5-578c77618a.novalocal\" (UID: \"0342299205a246e9bd0dd39e89cc107e\") " 
pod="kube-system/kube-apiserver-ci-3975-1-1-5-578c77618a.novalocal" Jul 2 00:24:27.892865 kubelet[2268]: I0702 00:24:27.892549 2268 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0342299205a246e9bd0dd39e89cc107e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975-1-1-5-578c77618a.novalocal\" (UID: \"0342299205a246e9bd0dd39e89cc107e\") " pod="kube-system/kube-apiserver-ci-3975-1-1-5-578c77618a.novalocal" Jul 2 00:24:27.896120 kubelet[2268]: E0702 00:24:27.896069 2268 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.162:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975-1-1-5-578c77618a.novalocal?timeout=10s\": dial tcp 172.24.4.162:6443: connect: connection refused" interval="400ms" Jul 2 00:24:28.002940 kubelet[2268]: I0702 00:24:28.002870 2268 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975-1-1-5-578c77618a.novalocal" Jul 2 00:24:28.003800 kubelet[2268]: E0702 00:24:28.003735 2268 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.162:6443/api/v1/nodes\": dial tcp 172.24.4.162:6443: connect: connection refused" node="ci-3975-1-1-5-578c77618a.novalocal" Jul 2 00:24:28.175613 containerd[1446]: time="2024-07-02T00:24:28.175424276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975-1-1-5-578c77618a.novalocal,Uid:f5ee9e7cb525f8b1a7d683090415beee,Namespace:kube-system,Attempt:0,}" Jul 2 00:24:28.194231 containerd[1446]: time="2024-07-02T00:24:28.194163860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975-1-1-5-578c77618a.novalocal,Uid:0342299205a246e9bd0dd39e89cc107e,Namespace:kube-system,Attempt:0,}" Jul 2 00:24:28.199429 containerd[1446]: time="2024-07-02T00:24:28.198458467Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-3975-1-1-5-578c77618a.novalocal,Uid:8f1c9b39988cada6698504295a435395,Namespace:kube-system,Attempt:0,}" Jul 2 00:24:28.297576 kubelet[2268]: E0702 00:24:28.297494 2268 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.162:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975-1-1-5-578c77618a.novalocal?timeout=10s\": dial tcp 172.24.4.162:6443: connect: connection refused" interval="800ms" Jul 2 00:24:28.406536 kubelet[2268]: I0702 00:24:28.406509 2268 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975-1-1-5-578c77618a.novalocal" Jul 2 00:24:28.407068 kubelet[2268]: E0702 00:24:28.407034 2268 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.162:6443/api/v1/nodes\": dial tcp 172.24.4.162:6443: connect: connection refused" node="ci-3975-1-1-5-578c77618a.novalocal" Jul 2 00:24:28.516291 kubelet[2268]: W0702 00:24:28.515982 2268 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.162:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975-1-1-5-578c77618a.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.162:6443: connect: connection refused Jul 2 00:24:28.516642 kubelet[2268]: E0702 00:24:28.516539 2268 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.162:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975-1-1-5-578c77618a.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.162:6443: connect: connection refused Jul 2 00:24:28.770642 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3975598170.mount: Deactivated successfully. 
Jul 2 00:24:28.783676 containerd[1446]: time="2024-07-02T00:24:28.783564264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:24:28.786081 containerd[1446]: time="2024-07-02T00:24:28.785958044Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 00:24:28.787960 containerd[1446]: time="2024-07-02T00:24:28.787807176Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:24:28.791015 containerd[1446]: time="2024-07-02T00:24:28.790713611Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jul 2 00:24:28.792706 containerd[1446]: time="2024-07-02T00:24:28.792298977Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 00:24:28.792706 containerd[1446]: time="2024-07-02T00:24:28.792534147Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:24:28.800145 containerd[1446]: time="2024-07-02T00:24:28.799932431Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:24:28.803455 containerd[1446]: time="2024-07-02T00:24:28.802358466Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 603.597335ms" Jul 2 00:24:28.811722 containerd[1446]: time="2024-07-02T00:24:28.808872464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:24:28.811974 kubelet[2268]: W0702 00:24:28.810965 2268 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.162:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.162:6443: connect: connection refused Jul 2 00:24:28.811974 kubelet[2268]: E0702 00:24:28.811251 2268 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.162:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.162:6443: connect: connection refused Jul 2 00:24:28.813268 containerd[1446]: time="2024-07-02T00:24:28.813145828Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 637.491352ms" Jul 2 00:24:28.813940 kubelet[2268]: W0702 00:24:28.813694 2268 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.162:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.162:6443: connect: connection refused Jul 2 00:24:28.813940 kubelet[2268]: E0702 00:24:28.813893 2268 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: 
Get "https://172.24.4.162:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.162:6443: connect: connection refused Jul 2 00:24:28.830628 containerd[1446]: time="2024-07-02T00:24:28.830511157Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 635.910836ms" Jul 2 00:24:29.109105 kubelet[2268]: E0702 00:24:29.098702 2268 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.162:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975-1-1-5-578c77618a.novalocal?timeout=10s\": dial tcp 172.24.4.162:6443: connect: connection refused" interval="1.6s" Jul 2 00:24:29.126398 containerd[1446]: time="2024-07-02T00:24:29.126281422Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:24:29.126588 containerd[1446]: time="2024-07-02T00:24:29.126555318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:29.126720 containerd[1446]: time="2024-07-02T00:24:29.126689716Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:24:29.126843 containerd[1446]: time="2024-07-02T00:24:29.126813632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:29.131750 containerd[1446]: time="2024-07-02T00:24:29.131523860Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:24:29.131750 containerd[1446]: time="2024-07-02T00:24:29.131577317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:29.131750 containerd[1446]: time="2024-07-02T00:24:29.131601024Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:24:29.131750 containerd[1446]: time="2024-07-02T00:24:29.131619010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:29.141689 containerd[1446]: time="2024-07-02T00:24:29.141595414Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:24:29.141925 containerd[1446]: time="2024-07-02T00:24:29.141876555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:29.142047 containerd[1446]: time="2024-07-02T00:24:29.142014088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:24:29.142322 containerd[1446]: time="2024-07-02T00:24:29.142280279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:29.178274 systemd[1]: Started cri-containerd-306ed53234143dd16ba23b690557cdf0a80d285d0bf665b2c75372f5c4a83557.scope - libcontainer container 306ed53234143dd16ba23b690557cdf0a80d285d0bf665b2c75372f5c4a83557. Jul 2 00:24:29.181163 systemd[1]: Started cri-containerd-4aa8eea76b9cc5b431dfd2340407593840c8b332727bcb63ccf6f40304fdfa37.scope - libcontainer container 4aa8eea76b9cc5b431dfd2340407593840c8b332727bcb63ccf6f40304fdfa37. 
Jul 2 00:24:29.215389 kubelet[2268]: I0702 00:24:29.210583 2268 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975-1-1-5-578c77618a.novalocal" Jul 2 00:24:29.215389 kubelet[2268]: E0702 00:24:29.211397 2268 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.162:6443/api/v1/nodes\": dial tcp 172.24.4.162:6443: connect: connection refused" node="ci-3975-1-1-5-578c77618a.novalocal" Jul 2 00:24:29.184236 systemd[1]: Started cri-containerd-9fbc0ab697ac9be5198b9280ddae4f62fd36f24a08c890f5f81873eb5238589a.scope - libcontainer container 9fbc0ab697ac9be5198b9280ddae4f62fd36f24a08c890f5f81873eb5238589a. Jul 2 00:24:29.277235 kubelet[2268]: W0702 00:24:29.276993 2268 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.162:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.162:6443: connect: connection refused Jul 2 00:24:29.277235 kubelet[2268]: E0702 00:24:29.277083 2268 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.162:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.162:6443: connect: connection refused Jul 2 00:24:29.279760 containerd[1446]: time="2024-07-02T00:24:29.279364117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975-1-1-5-578c77618a.novalocal,Uid:8f1c9b39988cada6698504295a435395,Namespace:kube-system,Attempt:0,} returns sandbox id \"306ed53234143dd16ba23b690557cdf0a80d285d0bf665b2c75372f5c4a83557\"" Jul 2 00:24:29.283119 containerd[1446]: time="2024-07-02T00:24:29.282952278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975-1-1-5-578c77618a.novalocal,Uid:0342299205a246e9bd0dd39e89cc107e,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"4aa8eea76b9cc5b431dfd2340407593840c8b332727bcb63ccf6f40304fdfa37\"" Jul 2 00:24:29.285498 containerd[1446]: time="2024-07-02T00:24:29.285279075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975-1-1-5-578c77618a.novalocal,Uid:f5ee9e7cb525f8b1a7d683090415beee,Namespace:kube-system,Attempt:0,} returns sandbox id \"9fbc0ab697ac9be5198b9280ddae4f62fd36f24a08c890f5f81873eb5238589a\"" Jul 2 00:24:29.288656 containerd[1446]: time="2024-07-02T00:24:29.288359936Z" level=info msg="CreateContainer within sandbox \"306ed53234143dd16ba23b690557cdf0a80d285d0bf665b2c75372f5c4a83557\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 00:24:29.288656 containerd[1446]: time="2024-07-02T00:24:29.288513833Z" level=info msg="CreateContainer within sandbox \"4aa8eea76b9cc5b431dfd2340407593840c8b332727bcb63ccf6f40304fdfa37\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 00:24:29.289691 containerd[1446]: time="2024-07-02T00:24:29.289668915Z" level=info msg="CreateContainer within sandbox \"9fbc0ab697ac9be5198b9280ddae4f62fd36f24a08c890f5f81873eb5238589a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 00:24:29.336141 containerd[1446]: time="2024-07-02T00:24:29.336026148Z" level=info msg="CreateContainer within sandbox \"4aa8eea76b9cc5b431dfd2340407593840c8b332727bcb63ccf6f40304fdfa37\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"82b4f048300a6a7d257816a31910ea8a79c253882e0d5d1ca84488a52bf2b95e\"" Jul 2 00:24:29.340005 containerd[1446]: time="2024-07-02T00:24:29.339920380Z" level=info msg="CreateContainer within sandbox \"306ed53234143dd16ba23b690557cdf0a80d285d0bf665b2c75372f5c4a83557\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"47274a6681b48d86031f181fb9134c1b6fb8720b4f771c4463c1319fcbce6ea9\"" Jul 2 00:24:29.340648 containerd[1446]: time="2024-07-02T00:24:29.340179536Z" level=info 
msg="StartContainer for \"82b4f048300a6a7d257816a31910ea8a79c253882e0d5d1ca84488a52bf2b95e\"" Jul 2 00:24:29.343288 containerd[1446]: time="2024-07-02T00:24:29.343236920Z" level=info msg="CreateContainer within sandbox \"9fbc0ab697ac9be5198b9280ddae4f62fd36f24a08c890f5f81873eb5238589a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"500192fd5af5d10c876b4a8745f3bbbc69419b946e0262d70254f1ef7c413bc8\"" Jul 2 00:24:29.344283 containerd[1446]: time="2024-07-02T00:24:29.343379133Z" level=info msg="StartContainer for \"47274a6681b48d86031f181fb9134c1b6fb8720b4f771c4463c1319fcbce6ea9\"" Jul 2 00:24:29.358195 containerd[1446]: time="2024-07-02T00:24:29.357022114Z" level=info msg="StartContainer for \"500192fd5af5d10c876b4a8745f3bbbc69419b946e0262d70254f1ef7c413bc8\"" Jul 2 00:24:29.384395 systemd[1]: Started cri-containerd-82b4f048300a6a7d257816a31910ea8a79c253882e0d5d1ca84488a52bf2b95e.scope - libcontainer container 82b4f048300a6a7d257816a31910ea8a79c253882e0d5d1ca84488a52bf2b95e. Jul 2 00:24:29.402374 systemd[1]: Started cri-containerd-47274a6681b48d86031f181fb9134c1b6fb8720b4f771c4463c1319fcbce6ea9.scope - libcontainer container 47274a6681b48d86031f181fb9134c1b6fb8720b4f771c4463c1319fcbce6ea9. Jul 2 00:24:29.414227 systemd[1]: Started cri-containerd-500192fd5af5d10c876b4a8745f3bbbc69419b946e0262d70254f1ef7c413bc8.scope - libcontainer container 500192fd5af5d10c876b4a8745f3bbbc69419b946e0262d70254f1ef7c413bc8. 
Jul 2 00:24:29.473900 containerd[1446]: time="2024-07-02T00:24:29.473739802Z" level=info msg="StartContainer for \"82b4f048300a6a7d257816a31910ea8a79c253882e0d5d1ca84488a52bf2b95e\" returns successfully" Jul 2 00:24:29.498122 containerd[1446]: time="2024-07-02T00:24:29.497017486Z" level=info msg="StartContainer for \"47274a6681b48d86031f181fb9134c1b6fb8720b4f771c4463c1319fcbce6ea9\" returns successfully" Jul 2 00:24:29.511315 containerd[1446]: time="2024-07-02T00:24:29.511263178Z" level=info msg="StartContainer for \"500192fd5af5d10c876b4a8745f3bbbc69419b946e0262d70254f1ef7c413bc8\" returns successfully" Jul 2 00:24:29.618075 kubelet[2268]: E0702 00:24:29.617911 2268 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.162:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.162:6443: connect: connection refused Jul 2 00:24:30.813506 kubelet[2268]: I0702 00:24:30.813428 2268 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975-1-1-5-578c77618a.novalocal" Jul 2 00:24:32.118187 kubelet[2268]: E0702 00:24:32.118125 2268 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3975-1-1-5-578c77618a.novalocal\" not found" node="ci-3975-1-1-5-578c77618a.novalocal" Jul 2 00:24:32.298427 kubelet[2268]: I0702 00:24:32.298328 2268 kubelet_node_status.go:76] "Successfully registered node" node="ci-3975-1-1-5-578c77618a.novalocal" Jul 2 00:24:32.654161 kubelet[2268]: I0702 00:24:32.653989 2268 apiserver.go:52] "Watching apiserver" Jul 2 00:24:32.691394 kubelet[2268]: I0702 00:24:32.691306 2268 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jul 2 00:24:34.291451 systemd[1]: Reloading requested from client PID 2542 ('systemctl') (unit session-11.scope)... 
Jul 2 00:24:34.291484 systemd[1]: Reloading... Jul 2 00:24:34.413324 zram_generator::config[2579]: No configuration found. Jul 2 00:24:34.596041 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:24:34.706831 systemd[1]: Reloading finished in 414 ms. Jul 2 00:24:34.770720 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:24:34.787520 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 00:24:34.788024 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:24:34.788088 systemd[1]: kubelet.service: Consumed 1.213s CPU time, 110.1M memory peak, 0B memory swap peak. Jul 2 00:24:34.792398 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:24:35.076468 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:24:35.085334 (kubelet)[2643]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 00:24:35.319336 kubelet[2643]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:24:35.319336 kubelet[2643]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 00:24:35.319336 kubelet[2643]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 2 00:24:35.325119 kubelet[2643]: I0702 00:24:35.325016 2643 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 00:24:35.332675 kubelet[2643]: I0702 00:24:35.332511 2643 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jul 2 00:24:35.332675 kubelet[2643]: I0702 00:24:35.332541 2643 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 00:24:35.332928 kubelet[2643]: I0702 00:24:35.332777 2643 server.go:927] "Client rotation is on, will bootstrap in background" Jul 2 00:24:35.335098 kubelet[2643]: I0702 00:24:35.334380 2643 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 00:24:35.340910 kubelet[2643]: I0702 00:24:35.340755 2643 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:24:35.360310 kubelet[2643]: I0702 00:24:35.360031 2643 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 00:24:35.360416 kubelet[2643]: I0702 00:24:35.360332 2643 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 00:24:35.360582 kubelet[2643]: I0702 00:24:35.360379 2643 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3975-1-1-5-578c77618a.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 00:24:35.360695 kubelet[2643]: I0702 00:24:35.360596 2643 topology_manager.go:138] "Creating topology manager with none 
policy" Jul 2 00:24:35.360695 kubelet[2643]: I0702 00:24:35.360609 2643 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 00:24:35.360695 kubelet[2643]: I0702 00:24:35.360647 2643 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:24:35.361641 kubelet[2643]: I0702 00:24:35.361118 2643 kubelet.go:400] "Attempting to sync node with API server" Jul 2 00:24:35.361641 kubelet[2643]: I0702 00:24:35.361407 2643 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 00:24:35.361641 kubelet[2643]: I0702 00:24:35.361434 2643 kubelet.go:312] "Adding apiserver pod source" Jul 2 00:24:35.361641 kubelet[2643]: I0702 00:24:35.361448 2643 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 00:24:35.365593 kubelet[2643]: I0702 00:24:35.365373 2643 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 00:24:35.369903 kubelet[2643]: I0702 00:24:35.367865 2643 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 00:24:35.369903 kubelet[2643]: I0702 00:24:35.368350 2643 server.go:1264] "Started kubelet" Jul 2 00:24:35.371682 kubelet[2643]: I0702 00:24:35.371668 2643 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 00:24:35.382571 kubelet[2643]: I0702 00:24:35.382448 2643 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 00:24:35.384035 kubelet[2643]: I0702 00:24:35.383708 2643 server.go:455] "Adding debug handlers to kubelet server" Jul 2 00:24:35.385405 sudo[2657]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 2 00:24:35.385808 sudo[2657]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 2 00:24:35.390359 kubelet[2643]: I0702 00:24:35.390282 2643 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 
00:24:35.391199 kubelet[2643]: I0702 00:24:35.391159 2643 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 00:24:35.395355 kubelet[2643]: I0702 00:24:35.395337 2643 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 00:24:35.397784 kubelet[2643]: I0702 00:24:35.397469 2643 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jul 2 00:24:35.400430 kubelet[2643]: I0702 00:24:35.400410 2643 reconciler.go:26] "Reconciler: start to sync state" Jul 2 00:24:35.401102 kubelet[2643]: I0702 00:24:35.400634 2643 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 00:24:35.406144 kubelet[2643]: I0702 00:24:35.406122 2643 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 00:24:35.406261 kubelet[2643]: I0702 00:24:35.406250 2643 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 00:24:35.406330 kubelet[2643]: I0702 00:24:35.406321 2643 kubelet.go:2337] "Starting kubelet main sync loop" Jul 2 00:24:35.406441 kubelet[2643]: E0702 00:24:35.406421 2643 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 00:24:35.434102 kubelet[2643]: E0702 00:24:35.433693 2643 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 00:24:35.435514 kubelet[2643]: I0702 00:24:35.435213 2643 factory.go:221] Registration of the containerd container factory successfully Jul 2 00:24:35.435514 kubelet[2643]: I0702 00:24:35.435231 2643 factory.go:221] Registration of the systemd container factory successfully Jul 2 00:24:35.436088 kubelet[2643]: I0702 00:24:35.436018 2643 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 00:24:35.494816 kubelet[2643]: I0702 00:24:35.494795 2643 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 00:24:35.495005 kubelet[2643]: I0702 00:24:35.494995 2643 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 00:24:35.495120 kubelet[2643]: I0702 00:24:35.495111 2643 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:24:35.495309 kubelet[2643]: I0702 00:24:35.495296 2643 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 00:24:35.495417 kubelet[2643]: I0702 00:24:35.495365 2643 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 00:24:35.495417 kubelet[2643]: I0702 00:24:35.495389 2643 policy_none.go:49] "None policy: Start" Jul 2 00:24:35.496119 kubelet[2643]: I0702 00:24:35.495998 2643 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 00:24:35.496119 kubelet[2643]: I0702 00:24:35.496017 2643 state_mem.go:35] "Initializing new in-memory state store" Jul 2 00:24:35.496649 kubelet[2643]: I0702 00:24:35.496355 2643 state_mem.go:75] "Updated machine memory state" Jul 2 00:24:35.502397 kubelet[2643]: I0702 00:24:35.502376 2643 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 00:24:35.503853 kubelet[2643]: I0702 00:24:35.503039 2643 container_log_manager.go:186] "Initializing 
container log rotate workers" workers=1 monitorPeriod="10s" Jul 2 00:24:35.505082 kubelet[2643]: I0702 00:24:35.504177 2643 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 00:24:35.506231 kubelet[2643]: I0702 00:24:35.502536 2643 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975-1-1-5-578c77618a.novalocal" Jul 2 00:24:35.507033 kubelet[2643]: I0702 00:24:35.506588 2643 topology_manager.go:215] "Topology Admit Handler" podUID="f5ee9e7cb525f8b1a7d683090415beee" podNamespace="kube-system" podName="kube-scheduler-ci-3975-1-1-5-578c77618a.novalocal" Jul 2 00:24:35.508122 kubelet[2643]: I0702 00:24:35.507399 2643 topology_manager.go:215] "Topology Admit Handler" podUID="0342299205a246e9bd0dd39e89cc107e" podNamespace="kube-system" podName="kube-apiserver-ci-3975-1-1-5-578c77618a.novalocal" Jul 2 00:24:35.508122 kubelet[2643]: I0702 00:24:35.507460 2643 topology_manager.go:215] "Topology Admit Handler" podUID="8f1c9b39988cada6698504295a435395" podNamespace="kube-system" podName="kube-controller-manager-ci-3975-1-1-5-578c77618a.novalocal" Jul 2 00:24:35.534772 kubelet[2643]: W0702 00:24:35.534743 2643 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 00:24:35.536145 kubelet[2643]: W0702 00:24:35.534922 2643 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 00:24:35.541189 kubelet[2643]: W0702 00:24:35.539787 2643 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 00:24:35.541547 kubelet[2643]: I0702 00:24:35.541517 2643 kubelet_node_status.go:112] "Node was previously registered" node="ci-3975-1-1-5-578c77618a.novalocal" Jul 2 00:24:35.541590 kubelet[2643]: I0702 00:24:35.541585 2643 
kubelet_node_status.go:76] "Successfully registered node" node="ci-3975-1-1-5-578c77618a.novalocal" Jul 2 00:24:35.602743 kubelet[2643]: I0702 00:24:35.602308 2643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8f1c9b39988cada6698504295a435395-k8s-certs\") pod \"kube-controller-manager-ci-3975-1-1-5-578c77618a.novalocal\" (UID: \"8f1c9b39988cada6698504295a435395\") " pod="kube-system/kube-controller-manager-ci-3975-1-1-5-578c77618a.novalocal" Jul 2 00:24:35.602743 kubelet[2643]: I0702 00:24:35.602352 2643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8f1c9b39988cada6698504295a435395-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975-1-1-5-578c77618a.novalocal\" (UID: \"8f1c9b39988cada6698504295a435395\") " pod="kube-system/kube-controller-manager-ci-3975-1-1-5-578c77618a.novalocal" Jul 2 00:24:35.602743 kubelet[2643]: I0702 00:24:35.602389 2643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f5ee9e7cb525f8b1a7d683090415beee-kubeconfig\") pod \"kube-scheduler-ci-3975-1-1-5-578c77618a.novalocal\" (UID: \"f5ee9e7cb525f8b1a7d683090415beee\") " pod="kube-system/kube-scheduler-ci-3975-1-1-5-578c77618a.novalocal" Jul 2 00:24:35.602743 kubelet[2643]: I0702 00:24:35.602411 2643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0342299205a246e9bd0dd39e89cc107e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975-1-1-5-578c77618a.novalocal\" (UID: \"0342299205a246e9bd0dd39e89cc107e\") " pod="kube-system/kube-apiserver-ci-3975-1-1-5-578c77618a.novalocal" Jul 2 00:24:35.602936 kubelet[2643]: I0702 00:24:35.602431 2643 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8f1c9b39988cada6698504295a435395-flexvolume-dir\") pod \"kube-controller-manager-ci-3975-1-1-5-578c77618a.novalocal\" (UID: \"8f1c9b39988cada6698504295a435395\") " pod="kube-system/kube-controller-manager-ci-3975-1-1-5-578c77618a.novalocal" Jul 2 00:24:35.602936 kubelet[2643]: I0702 00:24:35.602449 2643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8f1c9b39988cada6698504295a435395-kubeconfig\") pod \"kube-controller-manager-ci-3975-1-1-5-578c77618a.novalocal\" (UID: \"8f1c9b39988cada6698504295a435395\") " pod="kube-system/kube-controller-manager-ci-3975-1-1-5-578c77618a.novalocal" Jul 2 00:24:35.602936 kubelet[2643]: I0702 00:24:35.602475 2643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0342299205a246e9bd0dd39e89cc107e-ca-certs\") pod \"kube-apiserver-ci-3975-1-1-5-578c77618a.novalocal\" (UID: \"0342299205a246e9bd0dd39e89cc107e\") " pod="kube-system/kube-apiserver-ci-3975-1-1-5-578c77618a.novalocal" Jul 2 00:24:35.602936 kubelet[2643]: I0702 00:24:35.602494 2643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0342299205a246e9bd0dd39e89cc107e-k8s-certs\") pod \"kube-apiserver-ci-3975-1-1-5-578c77618a.novalocal\" (UID: \"0342299205a246e9bd0dd39e89cc107e\") " pod="kube-system/kube-apiserver-ci-3975-1-1-5-578c77618a.novalocal" Jul 2 00:24:35.602936 kubelet[2643]: I0702 00:24:35.602512 2643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8f1c9b39988cada6698504295a435395-ca-certs\") pod 
\"kube-controller-manager-ci-3975-1-1-5-578c77618a.novalocal\" (UID: \"8f1c9b39988cada6698504295a435395\") " pod="kube-system/kube-controller-manager-ci-3975-1-1-5-578c77618a.novalocal" Jul 2 00:24:36.365380 kubelet[2643]: I0702 00:24:36.364311 2643 apiserver.go:52] "Watching apiserver" Jul 2 00:24:36.367505 sudo[2657]: pam_unix(sudo:session): session closed for user root Jul 2 00:24:36.399293 kubelet[2643]: I0702 00:24:36.399249 2643 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jul 2 00:24:36.477170 kubelet[2643]: W0702 00:24:36.477137 2643 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 00:24:36.477560 kubelet[2643]: E0702 00:24:36.477205 2643 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3975-1-1-5-578c77618a.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-3975-1-1-5-578c77618a.novalocal" Jul 2 00:24:36.500102 kubelet[2643]: I0702 00:24:36.500030 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3975-1-1-5-578c77618a.novalocal" podStartSLOduration=1.500014587 podStartE2EDuration="1.500014587s" podCreationTimestamp="2024-07-02 00:24:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:24:36.491870872 +0000 UTC m=+1.399362202" watchObservedRunningTime="2024-07-02 00:24:36.500014587 +0000 UTC m=+1.407505897" Jul 2 00:24:36.518820 kubelet[2643]: I0702 00:24:36.518748 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3975-1-1-5-578c77618a.novalocal" podStartSLOduration=1.5187272 podStartE2EDuration="1.5187272s" podCreationTimestamp="2024-07-02 00:24:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:24:36.502362436 +0000 UTC m=+1.409853736" watchObservedRunningTime="2024-07-02 00:24:36.5187272 +0000 UTC m=+1.426218510" Jul 2 00:24:36.528631 kubelet[2643]: I0702 00:24:36.528328 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3975-1-1-5-578c77618a.novalocal" podStartSLOduration=1.52800402 podStartE2EDuration="1.52800402s" podCreationTimestamp="2024-07-02 00:24:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:24:36.51940656 +0000 UTC m=+1.426897870" watchObservedRunningTime="2024-07-02 00:24:36.52800402 +0000 UTC m=+1.435495320" Jul 2 00:24:39.449353 sudo[1702]: pam_unix(sudo:session): session closed for user root Jul 2 00:24:39.683729 sshd[1699]: pam_unix(sshd:session): session closed for user core Jul 2 00:24:39.689510 systemd[1]: sshd@8-172.24.4.162:22-172.24.4.1:49704.service: Deactivated successfully. Jul 2 00:24:39.693473 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 00:24:39.694142 systemd[1]: session-11.scope: Consumed 8.722s CPU time, 137.3M memory peak, 0B memory swap peak. Jul 2 00:24:39.698434 systemd-logind[1429]: Session 11 logged out. Waiting for processes to exit. Jul 2 00:24:39.701556 systemd-logind[1429]: Removed session 11. Jul 2 00:24:48.771123 kubelet[2643]: I0702 00:24:48.771015 2643 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 00:24:48.771873 containerd[1446]: time="2024-07-02T00:24:48.771680894Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 2 00:24:48.772376 kubelet[2643]: I0702 00:24:48.771871 2643 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 00:24:48.952245 kubelet[2643]: I0702 00:24:48.950018 2643 topology_manager.go:215] "Topology Admit Handler" podUID="e95d12c5-3a64-41de-961e-e9463adf6bc3" podNamespace="kube-system" podName="kube-proxy-mcn64" Jul 2 00:24:48.957185 kubelet[2643]: W0702 00:24:48.957112 2643 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3975-1-1-5-578c77618a.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3975-1-1-5-578c77618a.novalocal' and this object Jul 2 00:24:48.957185 kubelet[2643]: W0702 00:24:48.957151 2643 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3975-1-1-5-578c77618a.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3975-1-1-5-578c77618a.novalocal' and this object Jul 2 00:24:48.957380 kubelet[2643]: E0702 00:24:48.957194 2643 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3975-1-1-5-578c77618a.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3975-1-1-5-578c77618a.novalocal' and this object Jul 2 00:24:48.957380 kubelet[2643]: E0702 00:24:48.957159 2643 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3975-1-1-5-578c77618a.novalocal" cannot list resource "configmaps" in API group "" in the namespace 
"kube-system": no relationship found between node 'ci-3975-1-1-5-578c77618a.novalocal' and this object Jul 2 00:24:48.960493 systemd[1]: Created slice kubepods-besteffort-pode95d12c5_3a64_41de_961e_e9463adf6bc3.slice - libcontainer container kubepods-besteffort-pode95d12c5_3a64_41de_961e_e9463adf6bc3.slice. Jul 2 00:24:48.967374 kubelet[2643]: I0702 00:24:48.966465 2643 topology_manager.go:215] "Topology Admit Handler" podUID="cb0fbaca-7023-468b-ab48-81ede1d0801c" podNamespace="kube-system" podName="cilium-vf8rp" Jul 2 00:24:48.975509 systemd[1]: Created slice kubepods-burstable-podcb0fbaca_7023_468b_ab48_81ede1d0801c.slice - libcontainer container kubepods-burstable-podcb0fbaca_7023_468b_ab48_81ede1d0801c.slice. Jul 2 00:24:48.990794 kubelet[2643]: I0702 00:24:48.990537 2643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e95d12c5-3a64-41de-961e-e9463adf6bc3-kube-proxy\") pod \"kube-proxy-mcn64\" (UID: \"e95d12c5-3a64-41de-961e-e9463adf6bc3\") " pod="kube-system/kube-proxy-mcn64" Jul 2 00:24:48.990794 kubelet[2643]: I0702 00:24:48.990584 2643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e95d12c5-3a64-41de-961e-e9463adf6bc3-lib-modules\") pod \"kube-proxy-mcn64\" (UID: \"e95d12c5-3a64-41de-961e-e9463adf6bc3\") " pod="kube-system/kube-proxy-mcn64" Jul 2 00:24:48.990794 kubelet[2643]: I0702 00:24:48.990615 2643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cldzb\" (UniqueName: \"kubernetes.io/projected/e95d12c5-3a64-41de-961e-e9463adf6bc3-kube-api-access-cldzb\") pod \"kube-proxy-mcn64\" (UID: \"e95d12c5-3a64-41de-961e-e9463adf6bc3\") " pod="kube-system/kube-proxy-mcn64" Jul 2 00:24:48.990794 kubelet[2643]: I0702 00:24:48.990684 2643 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cb0fbaca-7023-468b-ab48-81ede1d0801c-cilium-run\") pod \"cilium-vf8rp\" (UID: \"cb0fbaca-7023-468b-ab48-81ede1d0801c\") " pod="kube-system/cilium-vf8rp" Jul 2 00:24:48.990794 kubelet[2643]: I0702 00:24:48.990708 2643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cb0fbaca-7023-468b-ab48-81ede1d0801c-hostproc\") pod \"cilium-vf8rp\" (UID: \"cb0fbaca-7023-468b-ab48-81ede1d0801c\") " pod="kube-system/cilium-vf8rp" Jul 2 00:24:48.991129 kubelet[2643]: I0702 00:24:48.990728 2643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgn5l\" (UniqueName: \"kubernetes.io/projected/cb0fbaca-7023-468b-ab48-81ede1d0801c-kube-api-access-wgn5l\") pod \"cilium-vf8rp\" (UID: \"cb0fbaca-7023-468b-ab48-81ede1d0801c\") " pod="kube-system/cilium-vf8rp" Jul 2 00:24:48.991129 kubelet[2643]: I0702 00:24:48.990936 2643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e95d12c5-3a64-41de-961e-e9463adf6bc3-xtables-lock\") pod \"kube-proxy-mcn64\" (UID: \"e95d12c5-3a64-41de-961e-e9463adf6bc3\") " pod="kube-system/kube-proxy-mcn64" Jul 2 00:24:48.991129 kubelet[2643]: I0702 00:24:48.991003 2643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cb0fbaca-7023-468b-ab48-81ede1d0801c-cilium-config-path\") pod \"cilium-vf8rp\" (UID: \"cb0fbaca-7023-468b-ab48-81ede1d0801c\") " pod="kube-system/cilium-vf8rp" Jul 2 00:24:48.991224 kubelet[2643]: I0702 00:24:48.991183 2643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/cb0fbaca-7023-468b-ab48-81ede1d0801c-cni-path\") pod \"cilium-vf8rp\" (UID: \"cb0fbaca-7023-468b-ab48-81ede1d0801c\") " pod="kube-system/cilium-vf8rp" Jul 2 00:24:48.991254 kubelet[2643]: I0702 00:24:48.991237 2643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cb0fbaca-7023-468b-ab48-81ede1d0801c-xtables-lock\") pod \"cilium-vf8rp\" (UID: \"cb0fbaca-7023-468b-ab48-81ede1d0801c\") " pod="kube-system/cilium-vf8rp" Jul 2 00:24:48.991279 kubelet[2643]: I0702 00:24:48.991269 2643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cb0fbaca-7023-468b-ab48-81ede1d0801c-host-proc-sys-kernel\") pod \"cilium-vf8rp\" (UID: \"cb0fbaca-7023-468b-ab48-81ede1d0801c\") " pod="kube-system/cilium-vf8rp" Jul 2 00:24:48.991442 kubelet[2643]: I0702 00:24:48.991412 2643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cb0fbaca-7023-468b-ab48-81ede1d0801c-cilium-cgroup\") pod \"cilium-vf8rp\" (UID: \"cb0fbaca-7023-468b-ab48-81ede1d0801c\") " pod="kube-system/cilium-vf8rp" Jul 2 00:24:48.991493 kubelet[2643]: I0702 00:24:48.991441 2643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cb0fbaca-7023-468b-ab48-81ede1d0801c-bpf-maps\") pod \"cilium-vf8rp\" (UID: \"cb0fbaca-7023-468b-ab48-81ede1d0801c\") " pod="kube-system/cilium-vf8rp" Jul 2 00:24:48.992103 kubelet[2643]: I0702 00:24:48.991537 2643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cb0fbaca-7023-468b-ab48-81ede1d0801c-etc-cni-netd\") pod \"cilium-vf8rp\" (UID: 
\"cb0fbaca-7023-468b-ab48-81ede1d0801c\") " pod="kube-system/cilium-vf8rp" Jul 2 00:24:48.992103 kubelet[2643]: I0702 00:24:48.991690 2643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cb0fbaca-7023-468b-ab48-81ede1d0801c-lib-modules\") pod \"cilium-vf8rp\" (UID: \"cb0fbaca-7023-468b-ab48-81ede1d0801c\") " pod="kube-system/cilium-vf8rp" Jul 2 00:24:48.992103 kubelet[2643]: I0702 00:24:48.991720 2643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cb0fbaca-7023-468b-ab48-81ede1d0801c-clustermesh-secrets\") pod \"cilium-vf8rp\" (UID: \"cb0fbaca-7023-468b-ab48-81ede1d0801c\") " pod="kube-system/cilium-vf8rp" Jul 2 00:24:48.992103 kubelet[2643]: I0702 00:24:48.991775 2643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cb0fbaca-7023-468b-ab48-81ede1d0801c-host-proc-sys-net\") pod \"cilium-vf8rp\" (UID: \"cb0fbaca-7023-468b-ab48-81ede1d0801c\") " pod="kube-system/cilium-vf8rp" Jul 2 00:24:48.992103 kubelet[2643]: I0702 00:24:48.991795 2643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cb0fbaca-7023-468b-ab48-81ede1d0801c-hubble-tls\") pod \"cilium-vf8rp\" (UID: \"cb0fbaca-7023-468b-ab48-81ede1d0801c\") " pod="kube-system/cilium-vf8rp" Jul 2 00:24:49.166076 kubelet[2643]: I0702 00:24:49.164253 2643 topology_manager.go:215] "Topology Admit Handler" podUID="fa81e257-5e09-45cc-8082-d337e7fa37d9" podNamespace="kube-system" podName="cilium-operator-599987898-xwxpv" Jul 2 00:24:49.172266 systemd[1]: Created slice kubepods-besteffort-podfa81e257_5e09_45cc_8082_d337e7fa37d9.slice - libcontainer container 
kubepods-besteffort-podfa81e257_5e09_45cc_8082_d337e7fa37d9.slice. Jul 2 00:24:49.194295 kubelet[2643]: I0702 00:24:49.194172 2643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fa81e257-5e09-45cc-8082-d337e7fa37d9-cilium-config-path\") pod \"cilium-operator-599987898-xwxpv\" (UID: \"fa81e257-5e09-45cc-8082-d337e7fa37d9\") " pod="kube-system/cilium-operator-599987898-xwxpv" Jul 2 00:24:49.194295 kubelet[2643]: I0702 00:24:49.194212 2643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdtrt\" (UniqueName: \"kubernetes.io/projected/fa81e257-5e09-45cc-8082-d337e7fa37d9-kube-api-access-zdtrt\") pod \"cilium-operator-599987898-xwxpv\" (UID: \"fa81e257-5e09-45cc-8082-d337e7fa37d9\") " pod="kube-system/cilium-operator-599987898-xwxpv" Jul 2 00:24:50.155354 kubelet[2643]: E0702 00:24:50.155274 2643 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 2 00:24:50.155354 kubelet[2643]: E0702 00:24:50.155342 2643 projected.go:200] Error preparing data for projected volume kube-api-access-wgn5l for pod kube-system/cilium-vf8rp: failed to sync configmap cache: timed out waiting for the condition Jul 2 00:24:50.156170 kubelet[2643]: E0702 00:24:50.155478 2643 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cb0fbaca-7023-468b-ab48-81ede1d0801c-kube-api-access-wgn5l podName:cb0fbaca-7023-468b-ab48-81ede1d0801c nodeName:}" failed. No retries permitted until 2024-07-02 00:24:50.655440153 +0000 UTC m=+15.562931503 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wgn5l" (UniqueName: "kubernetes.io/projected/cb0fbaca-7023-468b-ab48-81ede1d0801c-kube-api-access-wgn5l") pod "cilium-vf8rp" (UID: "cb0fbaca-7023-468b-ab48-81ede1d0801c") : failed to sync configmap cache: timed out waiting for the condition Jul 2 00:24:50.157008 kubelet[2643]: E0702 00:24:50.156803 2643 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 2 00:24:50.157008 kubelet[2643]: E0702 00:24:50.156858 2643 projected.go:200] Error preparing data for projected volume kube-api-access-cldzb for pod kube-system/kube-proxy-mcn64: failed to sync configmap cache: timed out waiting for the condition Jul 2 00:24:50.157008 kubelet[2643]: E0702 00:24:50.156958 2643 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e95d12c5-3a64-41de-961e-e9463adf6bc3-kube-api-access-cldzb podName:e95d12c5-3a64-41de-961e-e9463adf6bc3 nodeName:}" failed. No retries permitted until 2024-07-02 00:24:50.656924306 +0000 UTC m=+15.564415656 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cldzb" (UniqueName: "kubernetes.io/projected/e95d12c5-3a64-41de-961e-e9463adf6bc3-kube-api-access-cldzb") pod "kube-proxy-mcn64" (UID: "e95d12c5-3a64-41de-961e-e9463adf6bc3") : failed to sync configmap cache: timed out waiting for the condition Jul 2 00:24:50.303730 kubelet[2643]: E0702 00:24:50.303657 2643 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 2 00:24:50.303730 kubelet[2643]: E0702 00:24:50.303720 2643 projected.go:200] Error preparing data for projected volume kube-api-access-zdtrt for pod kube-system/cilium-operator-599987898-xwxpv: failed to sync configmap cache: timed out waiting for the condition Jul 2 00:24:50.304042 kubelet[2643]: E0702 00:24:50.303813 2643 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fa81e257-5e09-45cc-8082-d337e7fa37d9-kube-api-access-zdtrt podName:fa81e257-5e09-45cc-8082-d337e7fa37d9 nodeName:}" failed. No retries permitted until 2024-07-02 00:24:50.803783184 +0000 UTC m=+15.711274534 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-zdtrt" (UniqueName: "kubernetes.io/projected/fa81e257-5e09-45cc-8082-d337e7fa37d9-kube-api-access-zdtrt") pod "cilium-operator-599987898-xwxpv" (UID: "fa81e257-5e09-45cc-8082-d337e7fa37d9") : failed to sync configmap cache: timed out waiting for the condition Jul 2 00:24:50.770722 containerd[1446]: time="2024-07-02T00:24:50.770574275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mcn64,Uid:e95d12c5-3a64-41de-961e-e9463adf6bc3,Namespace:kube-system,Attempt:0,}" Jul 2 00:24:50.783865 containerd[1446]: time="2024-07-02T00:24:50.783525438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vf8rp,Uid:cb0fbaca-7023-468b-ab48-81ede1d0801c,Namespace:kube-system,Attempt:0,}" Jul 2 00:24:50.858522 containerd[1446]: time="2024-07-02T00:24:50.858263964Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:24:50.858522 containerd[1446]: time="2024-07-02T00:24:50.858345603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:50.858522 containerd[1446]: time="2024-07-02T00:24:50.858371373Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:24:50.858795 containerd[1446]: time="2024-07-02T00:24:50.858401973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:50.870335 containerd[1446]: time="2024-07-02T00:24:50.870242662Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:24:50.871273 containerd[1446]: time="2024-07-02T00:24:50.871131504Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:50.871273 containerd[1446]: time="2024-07-02T00:24:50.871208695Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:24:50.871475 containerd[1446]: time="2024-07-02T00:24:50.871240987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:50.884663 systemd[1]: Started cri-containerd-e388c9adea0ee37affcb3f82b09519091aa84d99bcdb20297b7b0aa863634739.scope - libcontainer container e388c9adea0ee37affcb3f82b09519091aa84d99bcdb20297b7b0aa863634739. Jul 2 00:24:50.891512 systemd[1]: Started cri-containerd-754c5aa07559e8149b3974c6055fdcc2f487b689f096c869fec4f01d934ba324.scope - libcontainer container 754c5aa07559e8149b3974c6055fdcc2f487b689f096c869fec4f01d934ba324. Jul 2 00:24:50.923681 containerd[1446]: time="2024-07-02T00:24:50.923559594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mcn64,Uid:e95d12c5-3a64-41de-961e-e9463adf6bc3,Namespace:kube-system,Attempt:0,} returns sandbox id \"e388c9adea0ee37affcb3f82b09519091aa84d99bcdb20297b7b0aa863634739\"" Jul 2 00:24:50.926353 containerd[1446]: time="2024-07-02T00:24:50.926257112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vf8rp,Uid:cb0fbaca-7023-468b-ab48-81ede1d0801c,Namespace:kube-system,Attempt:0,} returns sandbox id \"754c5aa07559e8149b3974c6055fdcc2f487b689f096c869fec4f01d934ba324\"" Jul 2 00:24:50.927734 containerd[1446]: time="2024-07-02T00:24:50.927506588Z" level=info msg="CreateContainer within sandbox \"e388c9adea0ee37affcb3f82b09519091aa84d99bcdb20297b7b0aa863634739\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 00:24:50.940936 containerd[1446]: time="2024-07-02T00:24:50.940345472Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 2 00:24:50.958318 containerd[1446]: time="2024-07-02T00:24:50.958188160Z" level=info msg="CreateContainer within sandbox \"e388c9adea0ee37affcb3f82b09519091aa84d99bcdb20297b7b0aa863634739\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"80207e3418beb0ce89252fb8824d03a2ec58d405452fadc511900233ca3520c2\"" Jul 2 00:24:50.960210 containerd[1446]: time="2024-07-02T00:24:50.960169702Z" level=info msg="StartContainer for \"80207e3418beb0ce89252fb8824d03a2ec58d405452fadc511900233ca3520c2\"" Jul 2 00:24:50.977402 containerd[1446]: time="2024-07-02T00:24:50.977351432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-xwxpv,Uid:fa81e257-5e09-45cc-8082-d337e7fa37d9,Namespace:kube-system,Attempt:0,}" Jul 2 00:24:50.992213 systemd[1]: Started cri-containerd-80207e3418beb0ce89252fb8824d03a2ec58d405452fadc511900233ca3520c2.scope - libcontainer container 80207e3418beb0ce89252fb8824d03a2ec58d405452fadc511900233ca3520c2. Jul 2 00:24:51.017600 containerd[1446]: time="2024-07-02T00:24:51.017129239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:24:51.018728 containerd[1446]: time="2024-07-02T00:24:51.018654660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:51.018728 containerd[1446]: time="2024-07-02T00:24:51.018687575Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:24:51.018728 containerd[1446]: time="2024-07-02T00:24:51.018701542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:51.038433 containerd[1446]: time="2024-07-02T00:24:51.037550753Z" level=info msg="StartContainer for \"80207e3418beb0ce89252fb8824d03a2ec58d405452fadc511900233ca3520c2\" returns successfully" Jul 2 00:24:51.039758 systemd[1]: Started cri-containerd-80d91c1dd3606815c63c3bd11ac2b6c1ae726edabff2ccc009c812334d73c6cd.scope - libcontainer container 80d91c1dd3606815c63c3bd11ac2b6c1ae726edabff2ccc009c812334d73c6cd. Jul 2 00:24:51.089502 containerd[1446]: time="2024-07-02T00:24:51.089460343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-xwxpv,Uid:fa81e257-5e09-45cc-8082-d337e7fa37d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"80d91c1dd3606815c63c3bd11ac2b6c1ae726edabff2ccc009c812334d73c6cd\"" Jul 2 00:24:51.527525 kubelet[2643]: I0702 00:24:51.527121 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mcn64" podStartSLOduration=3.527104144 podStartE2EDuration="3.527104144s" podCreationTimestamp="2024-07-02 00:24:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:24:51.525879168 +0000 UTC m=+16.433370498" watchObservedRunningTime="2024-07-02 00:24:51.527104144 +0000 UTC m=+16.434595454" Jul 2 00:24:57.738587 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1120954810.mount: Deactivated successfully. 
Jul 2 00:25:01.154075 containerd[1446]: time="2024-07-02T00:25:01.152021267Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735367" Jul 2 00:25:01.155013 containerd[1446]: time="2024-07-02T00:25:01.154981167Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.214589708s" Jul 2 00:25:01.155140 containerd[1446]: time="2024-07-02T00:25:01.155117320Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 2 00:25:01.157808 containerd[1446]: time="2024-07-02T00:25:01.157777686Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:25:01.160516 containerd[1446]: time="2024-07-02T00:25:01.160493410Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:25:01.171571 containerd[1446]: time="2024-07-02T00:25:01.171547808Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 2 00:25:01.173296 containerd[1446]: time="2024-07-02T00:25:01.173269804Z" level=info msg="CreateContainer within sandbox \"754c5aa07559e8149b3974c6055fdcc2f487b689f096c869fec4f01d934ba324\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 00:25:01.257180 containerd[1446]: time="2024-07-02T00:25:01.257141597Z" level=info msg="CreateContainer within sandbox \"754c5aa07559e8149b3974c6055fdcc2f487b689f096c869fec4f01d934ba324\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"90d565fa2b42805e1f951579e3828c3e6e1e983390f4b0140c59f034fbb55b58\"" Jul 2 00:25:01.258298 containerd[1446]: time="2024-07-02T00:25:01.258268744Z" level=info msg="StartContainer for \"90d565fa2b42805e1f951579e3828c3e6e1e983390f4b0140c59f034fbb55b58\"" Jul 2 00:25:01.397622 systemd[1]: run-containerd-runc-k8s.io-90d565fa2b42805e1f951579e3828c3e6e1e983390f4b0140c59f034fbb55b58-runc.d26GWq.mount: Deactivated successfully. Jul 2 00:25:01.412240 systemd[1]: Started cri-containerd-90d565fa2b42805e1f951579e3828c3e6e1e983390f4b0140c59f034fbb55b58.scope - libcontainer container 90d565fa2b42805e1f951579e3828c3e6e1e983390f4b0140c59f034fbb55b58. Jul 2 00:25:01.444971 containerd[1446]: time="2024-07-02T00:25:01.444927564Z" level=info msg="StartContainer for \"90d565fa2b42805e1f951579e3828c3e6e1e983390f4b0140c59f034fbb55b58\" returns successfully" Jul 2 00:25:01.452822 systemd[1]: cri-containerd-90d565fa2b42805e1f951579e3828c3e6e1e983390f4b0140c59f034fbb55b58.scope: Deactivated successfully. 
Jul 2 00:25:01.822746 containerd[1446]: time="2024-07-02T00:25:01.756701787Z" level=info msg="shim disconnected" id=90d565fa2b42805e1f951579e3828c3e6e1e983390f4b0140c59f034fbb55b58 namespace=k8s.io Jul 2 00:25:01.822746 containerd[1446]: time="2024-07-02T00:25:01.822193702Z" level=warning msg="cleaning up after shim disconnected" id=90d565fa2b42805e1f951579e3828c3e6e1e983390f4b0140c59f034fbb55b58 namespace=k8s.io Jul 2 00:25:01.822746 containerd[1446]: time="2024-07-02T00:25:01.822234705Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:25:02.252540 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90d565fa2b42805e1f951579e3828c3e6e1e983390f4b0140c59f034fbb55b58-rootfs.mount: Deactivated successfully. Jul 2 00:25:02.553457 containerd[1446]: time="2024-07-02T00:25:02.552581496Z" level=info msg="CreateContainer within sandbox \"754c5aa07559e8149b3974c6055fdcc2f487b689f096c869fec4f01d934ba324\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 00:25:02.604786 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount156910713.mount: Deactivated successfully. Jul 2 00:25:02.605421 containerd[1446]: time="2024-07-02T00:25:02.605224203Z" level=info msg="CreateContainer within sandbox \"754c5aa07559e8149b3974c6055fdcc2f487b689f096c869fec4f01d934ba324\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c541a2b4a9a581b1bdf5c80329aad0fc33406f2d321cd1de9a7449959529f7e5\"" Jul 2 00:25:02.610501 containerd[1446]: time="2024-07-02T00:25:02.608196806Z" level=info msg="StartContainer for \"c541a2b4a9a581b1bdf5c80329aad0fc33406f2d321cd1de9a7449959529f7e5\"" Jul 2 00:25:02.656782 systemd[1]: Started cri-containerd-c541a2b4a9a581b1bdf5c80329aad0fc33406f2d321cd1de9a7449959529f7e5.scope - libcontainer container c541a2b4a9a581b1bdf5c80329aad0fc33406f2d321cd1de9a7449959529f7e5. 
Jul 2 00:25:02.711003 containerd[1446]: time="2024-07-02T00:25:02.710617287Z" level=info msg="StartContainer for \"c541a2b4a9a581b1bdf5c80329aad0fc33406f2d321cd1de9a7449959529f7e5\" returns successfully" Jul 2 00:25:02.725199 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 00:25:02.725491 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 2 00:25:02.725553 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 2 00:25:02.732778 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 00:25:02.733251 systemd[1]: cri-containerd-c541a2b4a9a581b1bdf5c80329aad0fc33406f2d321cd1de9a7449959529f7e5.scope: Deactivated successfully. Jul 2 00:25:02.815726 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 00:25:02.826039 containerd[1446]: time="2024-07-02T00:25:02.825957363Z" level=info msg="shim disconnected" id=c541a2b4a9a581b1bdf5c80329aad0fc33406f2d321cd1de9a7449959529f7e5 namespace=k8s.io Jul 2 00:25:02.826304 containerd[1446]: time="2024-07-02T00:25:02.826271212Z" level=warning msg="cleaning up after shim disconnected" id=c541a2b4a9a581b1bdf5c80329aad0fc33406f2d321cd1de9a7449959529f7e5 namespace=k8s.io Jul 2 00:25:02.831343 containerd[1446]: time="2024-07-02T00:25:02.826436587Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:25:03.248809 systemd[1]: run-containerd-runc-k8s.io-c541a2b4a9a581b1bdf5c80329aad0fc33406f2d321cd1de9a7449959529f7e5-runc.iFwOgU.mount: Deactivated successfully. Jul 2 00:25:03.249520 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c541a2b4a9a581b1bdf5c80329aad0fc33406f2d321cd1de9a7449959529f7e5-rootfs.mount: Deactivated successfully. 
Jul 2 00:25:03.558331 containerd[1446]: time="2024-07-02T00:25:03.557593900Z" level=info msg="CreateContainer within sandbox \"754c5aa07559e8149b3974c6055fdcc2f487b689f096c869fec4f01d934ba324\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 00:25:03.910453 containerd[1446]: time="2024-07-02T00:25:03.910359239Z" level=info msg="CreateContainer within sandbox \"754c5aa07559e8149b3974c6055fdcc2f487b689f096c869fec4f01d934ba324\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f15a8b5ba252631a6830b816055db8ea2e580cd07e53af16ff466188c60e7e0a\"" Jul 2 00:25:03.911893 containerd[1446]: time="2024-07-02T00:25:03.911808541Z" level=info msg="StartContainer for \"f15a8b5ba252631a6830b816055db8ea2e580cd07e53af16ff466188c60e7e0a\"" Jul 2 00:25:04.016214 systemd[1]: Started cri-containerd-f15a8b5ba252631a6830b816055db8ea2e580cd07e53af16ff466188c60e7e0a.scope - libcontainer container f15a8b5ba252631a6830b816055db8ea2e580cd07e53af16ff466188c60e7e0a. Jul 2 00:25:04.056037 systemd[1]: cri-containerd-f15a8b5ba252631a6830b816055db8ea2e580cd07e53af16ff466188c60e7e0a.scope: Deactivated successfully. 
Jul 2 00:25:04.093313 containerd[1446]: time="2024-07-02T00:25:04.093263202Z" level=info msg="StartContainer for \"f15a8b5ba252631a6830b816055db8ea2e580cd07e53af16ff466188c60e7e0a\" returns successfully" Jul 2 00:25:04.153124 containerd[1446]: time="2024-07-02T00:25:04.152952789Z" level=info msg="shim disconnected" id=f15a8b5ba252631a6830b816055db8ea2e580cd07e53af16ff466188c60e7e0a namespace=k8s.io Jul 2 00:25:04.153124 containerd[1446]: time="2024-07-02T00:25:04.153115591Z" level=warning msg="cleaning up after shim disconnected" id=f15a8b5ba252631a6830b816055db8ea2e580cd07e53af16ff466188c60e7e0a namespace=k8s.io Jul 2 00:25:04.153124 containerd[1446]: time="2024-07-02T00:25:04.153127773Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:25:04.211371 containerd[1446]: time="2024-07-02T00:25:04.211208587Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:25:04.213808 containerd[1446]: time="2024-07-02T00:25:04.213327575Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907229" Jul 2 00:25:04.215346 containerd[1446]: time="2024-07-02T00:25:04.215279584Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:25:04.217825 containerd[1446]: time="2024-07-02T00:25:04.217712184Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest 
\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.046032521s" Jul 2 00:25:04.217891 containerd[1446]: time="2024-07-02T00:25:04.217839563Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 2 00:25:04.222350 containerd[1446]: time="2024-07-02T00:25:04.222181977Z" level=info msg="CreateContainer within sandbox \"80d91c1dd3606815c63c3bd11ac2b6c1ae726edabff2ccc009c812334d73c6cd\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 2 00:25:04.248979 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f15a8b5ba252631a6830b816055db8ea2e580cd07e53af16ff466188c60e7e0a-rootfs.mount: Deactivated successfully. Jul 2 00:25:04.259084 containerd[1446]: time="2024-07-02T00:25:04.258956868Z" level=info msg="CreateContainer within sandbox \"80d91c1dd3606815c63c3bd11ac2b6c1ae726edabff2ccc009c812334d73c6cd\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8c19fe01d32548298cdbb443977bc9aebd5b3fb89b933059a26aad2e0cb14b2d\"" Jul 2 00:25:04.261905 containerd[1446]: time="2024-07-02T00:25:04.260653520Z" level=info msg="StartContainer for \"8c19fe01d32548298cdbb443977bc9aebd5b3fb89b933059a26aad2e0cb14b2d\"" Jul 2 00:25:04.304266 systemd[1]: Started cri-containerd-8c19fe01d32548298cdbb443977bc9aebd5b3fb89b933059a26aad2e0cb14b2d.scope - libcontainer container 8c19fe01d32548298cdbb443977bc9aebd5b3fb89b933059a26aad2e0cb14b2d. 
Jul 2 00:25:04.358983 containerd[1446]: time="2024-07-02T00:25:04.358901775Z" level=info msg="StartContainer for \"8c19fe01d32548298cdbb443977bc9aebd5b3fb89b933059a26aad2e0cb14b2d\" returns successfully" Jul 2 00:25:04.568770 containerd[1446]: time="2024-07-02T00:25:04.568601476Z" level=info msg="CreateContainer within sandbox \"754c5aa07559e8149b3974c6055fdcc2f487b689f096c869fec4f01d934ba324\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 00:25:04.614449 containerd[1446]: time="2024-07-02T00:25:04.614393220Z" level=info msg="CreateContainer within sandbox \"754c5aa07559e8149b3974c6055fdcc2f487b689f096c869fec4f01d934ba324\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e6348944740fc5235d3aa5a0179be6da3022016987ceeeb0c38bf2e1b0e92114\"" Jul 2 00:25:04.616090 containerd[1446]: time="2024-07-02T00:25:04.615212339Z" level=info msg="StartContainer for \"e6348944740fc5235d3aa5a0179be6da3022016987ceeeb0c38bf2e1b0e92114\"" Jul 2 00:25:04.660458 systemd[1]: Started cri-containerd-e6348944740fc5235d3aa5a0179be6da3022016987ceeeb0c38bf2e1b0e92114.scope - libcontainer container e6348944740fc5235d3aa5a0179be6da3022016987ceeeb0c38bf2e1b0e92114. Jul 2 00:25:04.728902 systemd[1]: cri-containerd-e6348944740fc5235d3aa5a0179be6da3022016987ceeeb0c38bf2e1b0e92114.scope: Deactivated successfully. 
Jul 2 00:25:04.733321 containerd[1446]: time="2024-07-02T00:25:04.733008756Z" level=info msg="StartContainer for \"e6348944740fc5235d3aa5a0179be6da3022016987ceeeb0c38bf2e1b0e92114\" returns successfully" Jul 2 00:25:04.772320 kubelet[2643]: I0702 00:25:04.772181 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-xwxpv" podStartSLOduration=2.64408217 podStartE2EDuration="15.772159185s" podCreationTimestamp="2024-07-02 00:24:49 +0000 UTC" firstStartedPulling="2024-07-02 00:24:51.090832005 +0000 UTC m=+15.998323315" lastFinishedPulling="2024-07-02 00:25:04.21890898 +0000 UTC m=+29.126400330" observedRunningTime="2024-07-02 00:25:04.685460943 +0000 UTC m=+29.592952253" watchObservedRunningTime="2024-07-02 00:25:04.772159185 +0000 UTC m=+29.679650485" Jul 2 00:25:04.775513 containerd[1446]: time="2024-07-02T00:25:04.775395698Z" level=info msg="shim disconnected" id=e6348944740fc5235d3aa5a0179be6da3022016987ceeeb0c38bf2e1b0e92114 namespace=k8s.io Jul 2 00:25:04.775610 containerd[1446]: time="2024-07-02T00:25:04.775542189Z" level=warning msg="cleaning up after shim disconnected" id=e6348944740fc5235d3aa5a0179be6da3022016987ceeeb0c38bf2e1b0e92114 namespace=k8s.io Jul 2 00:25:04.775610 containerd[1446]: time="2024-07-02T00:25:04.775557197Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:25:05.572040 containerd[1446]: time="2024-07-02T00:25:05.571649296Z" level=info msg="CreateContainer within sandbox \"754c5aa07559e8149b3974c6055fdcc2f487b689f096c869fec4f01d934ba324\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 00:25:05.710043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2286536461.mount: Deactivated successfully. 
Jul 2 00:25:05.753285 containerd[1446]: time="2024-07-02T00:25:05.753214749Z" level=info msg="CreateContainer within sandbox \"754c5aa07559e8149b3974c6055fdcc2f487b689f096c869fec4f01d934ba324\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"af1c7aae42c53ca5a2a72f1346a4c0b05114a38ba609a4e8d132e67adce1e01b\"" Jul 2 00:25:05.755180 containerd[1446]: time="2024-07-02T00:25:05.755135619Z" level=info msg="StartContainer for \"af1c7aae42c53ca5a2a72f1346a4c0b05114a38ba609a4e8d132e67adce1e01b\"" Jul 2 00:25:05.805261 systemd[1]: Started cri-containerd-af1c7aae42c53ca5a2a72f1346a4c0b05114a38ba609a4e8d132e67adce1e01b.scope - libcontainer container af1c7aae42c53ca5a2a72f1346a4c0b05114a38ba609a4e8d132e67adce1e01b. Jul 2 00:25:06.088533 containerd[1446]: time="2024-07-02T00:25:06.088013543Z" level=info msg="StartContainer for \"af1c7aae42c53ca5a2a72f1346a4c0b05114a38ba609a4e8d132e67adce1e01b\" returns successfully" Jul 2 00:25:06.252341 systemd[1]: run-containerd-runc-k8s.io-af1c7aae42c53ca5a2a72f1346a4c0b05114a38ba609a4e8d132e67adce1e01b-runc.133Hb8.mount: Deactivated successfully. Jul 2 00:25:06.421081 kubelet[2643]: I0702 00:25:06.420755 2643 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jul 2 00:25:06.461539 kubelet[2643]: I0702 00:25:06.461185 2643 topology_manager.go:215] "Topology Admit Handler" podUID="dff67a27-f4dd-4eeb-9976-c78a8785cd57" podNamespace="kube-system" podName="coredns-7db6d8ff4d-qcz6x" Jul 2 00:25:06.467807 kubelet[2643]: I0702 00:25:06.467488 2643 topology_manager.go:215] "Topology Admit Handler" podUID="2a39ff84-d35f-4c60-bb00-80c8f5aaf18b" podNamespace="kube-system" podName="coredns-7db6d8ff4d-hbfpg" Jul 2 00:25:06.476788 systemd[1]: Created slice kubepods-burstable-poddff67a27_f4dd_4eeb_9976_c78a8785cd57.slice - libcontainer container kubepods-burstable-poddff67a27_f4dd_4eeb_9976_c78a8785cd57.slice. 
Jul 2 00:25:06.486032 systemd[1]: Created slice kubepods-burstable-pod2a39ff84_d35f_4c60_bb00_80c8f5aaf18b.slice - libcontainer container kubepods-burstable-pod2a39ff84_d35f_4c60_bb00_80c8f5aaf18b.slice.
Jul 2 00:25:06.580180 kubelet[2643]: I0702 00:25:06.579291 2643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s4xc\" (UniqueName: \"kubernetes.io/projected/dff67a27-f4dd-4eeb-9976-c78a8785cd57-kube-api-access-5s4xc\") pod \"coredns-7db6d8ff4d-qcz6x\" (UID: \"dff67a27-f4dd-4eeb-9976-c78a8785cd57\") " pod="kube-system/coredns-7db6d8ff4d-qcz6x"
Jul 2 00:25:06.580180 kubelet[2643]: I0702 00:25:06.579332 2643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dff67a27-f4dd-4eeb-9976-c78a8785cd57-config-volume\") pod \"coredns-7db6d8ff4d-qcz6x\" (UID: \"dff67a27-f4dd-4eeb-9976-c78a8785cd57\") " pod="kube-system/coredns-7db6d8ff4d-qcz6x"
Jul 2 00:25:06.580180 kubelet[2643]: I0702 00:25:06.579356 2643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a39ff84-d35f-4c60-bb00-80c8f5aaf18b-config-volume\") pod \"coredns-7db6d8ff4d-hbfpg\" (UID: \"2a39ff84-d35f-4c60-bb00-80c8f5aaf18b\") " pod="kube-system/coredns-7db6d8ff4d-hbfpg"
Jul 2 00:25:06.580180 kubelet[2643]: I0702 00:25:06.579376 2643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44wlt\" (UniqueName: \"kubernetes.io/projected/2a39ff84-d35f-4c60-bb00-80c8f5aaf18b-kube-api-access-44wlt\") pod \"coredns-7db6d8ff4d-hbfpg\" (UID: \"2a39ff84-d35f-4c60-bb00-80c8f5aaf18b\") " pod="kube-system/coredns-7db6d8ff4d-hbfpg"
Jul 2 00:25:07.157823 containerd[1446]: time="2024-07-02T00:25:07.157678821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qcz6x,Uid:dff67a27-f4dd-4eeb-9976-c78a8785cd57,Namespace:kube-system,Attempt:0,}"
Jul 2 00:25:07.161238 containerd[1446]: time="2024-07-02T00:25:07.161140125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hbfpg,Uid:2a39ff84-d35f-4c60-bb00-80c8f5aaf18b,Namespace:kube-system,Attempt:0,}"
Jul 2 00:25:08.674332 systemd-networkd[1370]: cilium_host: Link UP
Jul 2 00:25:08.679732 systemd-networkd[1370]: cilium_net: Link UP
Jul 2 00:25:08.680524 systemd-networkd[1370]: cilium_net: Gained carrier
Jul 2 00:25:08.681036 systemd-networkd[1370]: cilium_host: Gained carrier
Jul 2 00:25:08.883672 systemd-networkd[1370]: cilium_vxlan: Link UP
Jul 2 00:25:08.883685 systemd-networkd[1370]: cilium_vxlan: Gained carrier
Jul 2 00:25:09.502327 systemd-networkd[1370]: cilium_net: Gained IPv6LL
Jul 2 00:25:09.694369 systemd-networkd[1370]: cilium_host: Gained IPv6LL
Jul 2 00:25:10.092592 kernel: NET: Registered PF_ALG protocol family
Jul 2 00:25:10.847296 systemd-networkd[1370]: cilium_vxlan: Gained IPv6LL
Jul 2 00:25:11.145583 systemd-networkd[1370]: lxc_health: Link UP
Jul 2 00:25:11.150683 systemd-networkd[1370]: lxc_health: Gained carrier
Jul 2 00:25:11.605110 kernel: eth0: renamed from tmp1cf96
Jul 2 00:25:11.611152 kernel: eth0: renamed from tmpfebbd
Jul 2 00:25:11.621395 systemd-networkd[1370]: lxcc6e69ff32bd7: Link UP
Jul 2 00:25:11.626129 systemd-networkd[1370]: lxc062fa59dde4d: Link UP
Jul 2 00:25:11.628020 systemd-networkd[1370]: lxcc6e69ff32bd7: Gained carrier
Jul 2 00:25:11.628228 systemd-networkd[1370]: lxc062fa59dde4d: Gained carrier
Jul 2 00:25:12.190871 systemd-networkd[1370]: lxc_health: Gained IPv6LL
Jul 2 00:25:12.702294 systemd-networkd[1370]: lxc062fa59dde4d: Gained IPv6LL
Jul 2 00:25:12.703369 systemd-networkd[1370]: lxcc6e69ff32bd7: Gained IPv6LL
Jul 2 00:25:12.810684 kubelet[2643]: I0702 00:25:12.810556 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vf8rp" podStartSLOduration=14.579550371 podStartE2EDuration="24.810540135s" podCreationTimestamp="2024-07-02 00:24:48 +0000 UTC" firstStartedPulling="2024-07-02 00:24:50.938831521 +0000 UTC m=+15.846322831" lastFinishedPulling="2024-07-02 00:25:01.169821245 +0000 UTC m=+26.077312595" observedRunningTime="2024-07-02 00:25:06.596703991 +0000 UTC m=+31.504195321" watchObservedRunningTime="2024-07-02 00:25:12.810540135 +0000 UTC m=+37.718031445"
Jul 2 00:25:16.239098 containerd[1446]: time="2024-07-02T00:25:16.238863597Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:25:16.239863 containerd[1446]: time="2024-07-02T00:25:16.239222244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:25:16.239863 containerd[1446]: time="2024-07-02T00:25:16.239396953Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:25:16.239863 containerd[1446]: time="2024-07-02T00:25:16.239558289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:25:16.280251 systemd[1]: Started cri-containerd-1cf966f1d47679011370f6aaf98c9d7e0fa9d8fbbf11a996935eb46f59a39031.scope - libcontainer container 1cf966f1d47679011370f6aaf98c9d7e0fa9d8fbbf11a996935eb46f59a39031.
Jul 2 00:25:16.305767 containerd[1446]: time="2024-07-02T00:25:16.305205520Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:25:16.305767 containerd[1446]: time="2024-07-02T00:25:16.305375892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:25:16.305767 containerd[1446]: time="2024-07-02T00:25:16.305504908Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:25:16.305767 containerd[1446]: time="2024-07-02T00:25:16.305523382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:25:16.343512 systemd[1]: Started cri-containerd-febbdb61344db49f478534363e85fcd0118722e5f5a8ae1a091217b3ccebdfae.scope - libcontainer container febbdb61344db49f478534363e85fcd0118722e5f5a8ae1a091217b3ccebdfae.
Jul 2 00:25:16.420380 containerd[1446]: time="2024-07-02T00:25:16.420320922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qcz6x,Uid:dff67a27-f4dd-4eeb-9976-c78a8785cd57,Namespace:kube-system,Attempt:0,} returns sandbox id \"1cf966f1d47679011370f6aaf98c9d7e0fa9d8fbbf11a996935eb46f59a39031\""
Jul 2 00:25:16.432168 containerd[1446]: time="2024-07-02T00:25:16.432132190Z" level=info msg="CreateContainer within sandbox \"1cf966f1d47679011370f6aaf98c9d7e0fa9d8fbbf11a996935eb46f59a39031\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 2 00:25:16.437286 containerd[1446]: time="2024-07-02T00:25:16.437231882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hbfpg,Uid:2a39ff84-d35f-4c60-bb00-80c8f5aaf18b,Namespace:kube-system,Attempt:0,} returns sandbox id \"febbdb61344db49f478534363e85fcd0118722e5f5a8ae1a091217b3ccebdfae\""
Jul 2 00:25:16.442350 containerd[1446]: time="2024-07-02T00:25:16.442040531Z" level=info msg="CreateContainer within sandbox \"febbdb61344db49f478534363e85fcd0118722e5f5a8ae1a091217b3ccebdfae\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 2 00:25:16.481586 containerd[1446]: time="2024-07-02T00:25:16.481526406Z" level=info msg="CreateContainer within sandbox \"febbdb61344db49f478534363e85fcd0118722e5f5a8ae1a091217b3ccebdfae\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"eb96200af395d0149ef13ab9ba856e29e245d52a0706da97da988bbc5e7b8345\""
Jul 2 00:25:16.483615 containerd[1446]: time="2024-07-02T00:25:16.483525198Z" level=info msg="StartContainer for \"eb96200af395d0149ef13ab9ba856e29e245d52a0706da97da988bbc5e7b8345\""
Jul 2 00:25:16.492938 containerd[1446]: time="2024-07-02T00:25:16.492798466Z" level=info msg="CreateContainer within sandbox \"1cf966f1d47679011370f6aaf98c9d7e0fa9d8fbbf11a996935eb46f59a39031\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e02f02ef97afd37e5b48de2af1beda6fe7cc36d1cda5ce4426af1d85895f67c1\""
Jul 2 00:25:16.495977 containerd[1446]: time="2024-07-02T00:25:16.495641804Z" level=info msg="StartContainer for \"e02f02ef97afd37e5b48de2af1beda6fe7cc36d1cda5ce4426af1d85895f67c1\""
Jul 2 00:25:16.521944 systemd[1]: Started cri-containerd-eb96200af395d0149ef13ab9ba856e29e245d52a0706da97da988bbc5e7b8345.scope - libcontainer container eb96200af395d0149ef13ab9ba856e29e245d52a0706da97da988bbc5e7b8345.
Jul 2 00:25:16.539553 systemd[1]: Started cri-containerd-e02f02ef97afd37e5b48de2af1beda6fe7cc36d1cda5ce4426af1d85895f67c1.scope - libcontainer container e02f02ef97afd37e5b48de2af1beda6fe7cc36d1cda5ce4426af1d85895f67c1.
Jul 2 00:25:16.603782 containerd[1446]: time="2024-07-02T00:25:16.603747536Z" level=info msg="StartContainer for \"e02f02ef97afd37e5b48de2af1beda6fe7cc36d1cda5ce4426af1d85895f67c1\" returns successfully"
Jul 2 00:25:16.604103 containerd[1446]: time="2024-07-02T00:25:16.603988477Z" level=info msg="StartContainer for \"eb96200af395d0149ef13ab9ba856e29e245d52a0706da97da988bbc5e7b8345\" returns successfully"
Jul 2 00:25:16.639006 kubelet[2643]: I0702 00:25:16.638447 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-hbfpg" podStartSLOduration=27.638427797 podStartE2EDuration="27.638427797s" podCreationTimestamp="2024-07-02 00:24:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:25:16.637521107 +0000 UTC m=+41.545012417" watchObservedRunningTime="2024-07-02 00:25:16.638427797 +0000 UTC m=+41.545919097"
Jul 2 00:25:17.264339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1476345206.mount: Deactivated successfully.
Jul 2 00:25:27.117126 kubelet[2643]: I0702 00:25:27.116412 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-qcz6x" podStartSLOduration=38.116377802 podStartE2EDuration="38.116377802s" podCreationTimestamp="2024-07-02 00:24:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:25:16.666920744 +0000 UTC m=+41.574412054" watchObservedRunningTime="2024-07-02 00:25:27.116377802 +0000 UTC m=+52.023869153"
Jul 2 00:25:41.775909 systemd[1]: Started sshd@9-172.24.4.162:22-172.24.4.1:50388.service - OpenSSH per-connection server daemon (172.24.4.1:50388).
Jul 2 00:25:43.308434 sshd[4016]: Accepted publickey for core from 172.24.4.1 port 50388 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado
Jul 2 00:25:43.311704 sshd[4016]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:25:43.323754 systemd-logind[1429]: New session 12 of user core.
Jul 2 00:25:43.331384 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 2 00:25:44.759215 sshd[4016]: pam_unix(sshd:session): session closed for user core
Jul 2 00:25:44.764285 systemd-logind[1429]: Session 12 logged out. Waiting for processes to exit.
Jul 2 00:25:44.765759 systemd[1]: sshd@9-172.24.4.162:22-172.24.4.1:50388.service: Deactivated successfully.
Jul 2 00:25:44.769365 systemd[1]: session-12.scope: Deactivated successfully.
Jul 2 00:25:44.771793 systemd-logind[1429]: Removed session 12.
Jul 2 00:25:49.782041 systemd[1]: Started sshd@10-172.24.4.162:22-172.24.4.1:40904.service - OpenSSH per-connection server daemon (172.24.4.1:40904).
Jul 2 00:25:51.109491 sshd[4030]: Accepted publickey for core from 172.24.4.1 port 40904 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado
Jul 2 00:25:51.112459 sshd[4030]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:25:51.122456 systemd-logind[1429]: New session 13 of user core.
Jul 2 00:25:51.130981 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 2 00:25:52.060825 sshd[4030]: pam_unix(sshd:session): session closed for user core
Jul 2 00:25:52.071142 systemd[1]: sshd@10-172.24.4.162:22-172.24.4.1:40904.service: Deactivated successfully.
Jul 2 00:25:52.078715 systemd[1]: session-13.scope: Deactivated successfully.
Jul 2 00:25:52.083726 systemd-logind[1429]: Session 13 logged out. Waiting for processes to exit.
Jul 2 00:25:52.086419 systemd-logind[1429]: Removed session 13.
Jul 2 00:25:57.088139 systemd[1]: Started sshd@11-172.24.4.162:22-172.24.4.1:36966.service - OpenSSH per-connection server daemon (172.24.4.1:36966).
Jul 2 00:25:58.719137 sshd[4046]: Accepted publickey for core from 172.24.4.1 port 36966 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado
Jul 2 00:25:58.721960 sshd[4046]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:25:58.735189 systemd-logind[1429]: New session 14 of user core.
Jul 2 00:25:58.751484 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 2 00:25:59.518851 sshd[4046]: pam_unix(sshd:session): session closed for user core
Jul 2 00:25:59.530390 systemd[1]: sshd@11-172.24.4.162:22-172.24.4.1:36966.service: Deactivated successfully.
Jul 2 00:25:59.533995 systemd[1]: session-14.scope: Deactivated successfully.
Jul 2 00:25:59.537857 systemd-logind[1429]: Session 14 logged out. Waiting for processes to exit.
Jul 2 00:25:59.546861 systemd[1]: Started sshd@12-172.24.4.162:22-172.24.4.1:36982.service - OpenSSH per-connection server daemon (172.24.4.1:36982).
Jul 2 00:25:59.549956 systemd-logind[1429]: Removed session 14.
Jul 2 00:26:01.013126 sshd[4060]: Accepted publickey for core from 172.24.4.1 port 36982 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado
Jul 2 00:26:01.016630 sshd[4060]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:26:01.029501 systemd-logind[1429]: New session 15 of user core.
Jul 2 00:26:01.038451 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 2 00:26:02.098348 sshd[4060]: pam_unix(sshd:session): session closed for user core
Jul 2 00:26:02.110558 systemd[1]: sshd@12-172.24.4.162:22-172.24.4.1:36982.service: Deactivated successfully.
Jul 2 00:26:02.117361 systemd[1]: session-15.scope: Deactivated successfully.
Jul 2 00:26:02.119757 systemd-logind[1429]: Session 15 logged out. Waiting for processes to exit.
Jul 2 00:26:02.129842 systemd[1]: Started sshd@13-172.24.4.162:22-172.24.4.1:36994.service - OpenSSH per-connection server daemon (172.24.4.1:36994).
Jul 2 00:26:02.136677 systemd-logind[1429]: Removed session 15.
Jul 2 00:26:03.335623 sshd[4071]: Accepted publickey for core from 172.24.4.1 port 36994 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado
Jul 2 00:26:03.337713 sshd[4071]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:26:03.345400 systemd-logind[1429]: New session 16 of user core.
Jul 2 00:26:03.352276 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 2 00:26:04.064700 sshd[4071]: pam_unix(sshd:session): session closed for user core
Jul 2 00:26:04.069814 systemd[1]: sshd@13-172.24.4.162:22-172.24.4.1:36994.service: Deactivated successfully.
Jul 2 00:26:04.072171 systemd[1]: session-16.scope: Deactivated successfully.
Jul 2 00:26:04.073910 systemd-logind[1429]: Session 16 logged out. Waiting for processes to exit.
Jul 2 00:26:04.076120 systemd-logind[1429]: Removed session 16.
Jul 2 00:26:09.097913 systemd[1]: Started sshd@14-172.24.4.162:22-172.24.4.1:35826.service - OpenSSH per-connection server daemon (172.24.4.1:35826).
Jul 2 00:26:10.440838 sshd[4085]: Accepted publickey for core from 172.24.4.1 port 35826 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado
Jul 2 00:26:10.442942 sshd[4085]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:26:10.451794 systemd-logind[1429]: New session 17 of user core.
Jul 2 00:26:10.460379 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 2 00:26:11.388010 sshd[4085]: pam_unix(sshd:session): session closed for user core
Jul 2 00:26:11.392304 systemd-logind[1429]: Session 17 logged out. Waiting for processes to exit.
Jul 2 00:26:11.393604 systemd[1]: sshd@14-172.24.4.162:22-172.24.4.1:35826.service: Deactivated successfully.
Jul 2 00:26:11.396158 systemd[1]: session-17.scope: Deactivated successfully.
Jul 2 00:26:11.398986 systemd-logind[1429]: Removed session 17.
Jul 2 00:26:16.408605 systemd[1]: Started sshd@15-172.24.4.162:22-172.24.4.1:46924.service - OpenSSH per-connection server daemon (172.24.4.1:46924).
Jul 2 00:26:17.739127 sshd[4098]: Accepted publickey for core from 172.24.4.1 port 46924 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado
Jul 2 00:26:17.742342 sshd[4098]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:26:17.755182 systemd-logind[1429]: New session 18 of user core.
Jul 2 00:26:17.762437 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 2 00:26:18.634439 sshd[4098]: pam_unix(sshd:session): session closed for user core
Jul 2 00:26:18.645526 systemd[1]: sshd@15-172.24.4.162:22-172.24.4.1:46924.service: Deactivated successfully.
Jul 2 00:26:18.648554 systemd[1]: session-18.scope: Deactivated successfully.
Jul 2 00:26:18.650751 systemd-logind[1429]: Session 18 logged out. Waiting for processes to exit.
Jul 2 00:26:18.656623 systemd[1]: Started sshd@16-172.24.4.162:22-172.24.4.1:46928.service - OpenSSH per-connection server daemon (172.24.4.1:46928).
Jul 2 00:26:18.659754 systemd-logind[1429]: Removed session 18.
Jul 2 00:26:19.963617 sshd[4111]: Accepted publickey for core from 172.24.4.1 port 46928 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado
Jul 2 00:26:19.966415 sshd[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:26:19.976814 systemd-logind[1429]: New session 19 of user core.
Jul 2 00:26:19.983353 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 2 00:26:21.337847 sshd[4111]: pam_unix(sshd:session): session closed for user core
Jul 2 00:26:21.351549 systemd[1]: sshd@16-172.24.4.162:22-172.24.4.1:46928.service: Deactivated successfully.
Jul 2 00:26:21.355565 systemd[1]: session-19.scope: Deactivated successfully.
Jul 2 00:26:21.357908 systemd-logind[1429]: Session 19 logged out. Waiting for processes to exit.
Jul 2 00:26:21.365748 systemd[1]: Started sshd@17-172.24.4.162:22-172.24.4.1:46934.service - OpenSSH per-connection server daemon (172.24.4.1:46934).
Jul 2 00:26:21.369840 systemd-logind[1429]: Removed session 19.
Jul 2 00:26:22.738795 sshd[4124]: Accepted publickey for core from 172.24.4.1 port 46934 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado
Jul 2 00:26:22.741107 sshd[4124]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:26:22.752367 systemd-logind[1429]: New session 20 of user core.
Jul 2 00:26:22.759366 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 2 00:26:25.485980 sshd[4124]: pam_unix(sshd:session): session closed for user core
Jul 2 00:26:25.498231 systemd[1]: sshd@17-172.24.4.162:22-172.24.4.1:46934.service: Deactivated successfully.
Jul 2 00:26:25.501685 systemd[1]: session-20.scope: Deactivated successfully.
Jul 2 00:26:25.503429 systemd-logind[1429]: Session 20 logged out. Waiting for processes to exit.
Jul 2 00:26:25.512911 systemd[1]: Started sshd@18-172.24.4.162:22-172.24.4.1:55514.service - OpenSSH per-connection server daemon (172.24.4.1:55514).
Jul 2 00:26:25.517302 systemd-logind[1429]: Removed session 20.
Jul 2 00:26:26.991337 sshd[4141]: Accepted publickey for core from 172.24.4.1 port 55514 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado
Jul 2 00:26:26.994280 sshd[4141]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:26:27.006394 systemd-logind[1429]: New session 21 of user core.
Jul 2 00:26:27.016599 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 2 00:26:28.685207 sshd[4141]: pam_unix(sshd:session): session closed for user core
Jul 2 00:26:28.692761 systemd[1]: sshd@18-172.24.4.162:22-172.24.4.1:55514.service: Deactivated successfully.
Jul 2 00:26:28.695345 systemd[1]: session-21.scope: Deactivated successfully.
Jul 2 00:26:28.698323 systemd-logind[1429]: Session 21 logged out. Waiting for processes to exit.
Jul 2 00:26:28.706993 systemd[1]: Started sshd@19-172.24.4.162:22-172.24.4.1:55518.service - OpenSSH per-connection server daemon (172.24.4.1:55518).
Jul 2 00:26:28.711952 systemd-logind[1429]: Removed session 21.
Jul 2 00:26:30.244124 sshd[4152]: Accepted publickey for core from 172.24.4.1 port 55518 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado
Jul 2 00:26:30.247337 sshd[4152]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:26:30.252198 systemd-logind[1429]: New session 22 of user core.
Jul 2 00:26:30.260456 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 2 00:26:31.287449 sshd[4152]: pam_unix(sshd:session): session closed for user core
Jul 2 00:26:31.291950 systemd[1]: sshd@19-172.24.4.162:22-172.24.4.1:55518.service: Deactivated successfully.
Jul 2 00:26:31.297252 systemd[1]: session-22.scope: Deactivated successfully.
Jul 2 00:26:31.300310 systemd-logind[1429]: Session 22 logged out. Waiting for processes to exit.
Jul 2 00:26:31.302762 systemd-logind[1429]: Removed session 22.
Jul 2 00:26:36.310891 systemd[1]: Started sshd@20-172.24.4.162:22-172.24.4.1:48610.service - OpenSSH per-connection server daemon (172.24.4.1:48610).
Jul 2 00:26:37.763207 sshd[4170]: Accepted publickey for core from 172.24.4.1 port 48610 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado
Jul 2 00:26:37.765573 sshd[4170]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:26:37.777493 systemd-logind[1429]: New session 23 of user core.
Jul 2 00:26:37.783352 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 2 00:26:38.538925 sshd[4170]: pam_unix(sshd:session): session closed for user core
Jul 2 00:26:38.544526 systemd[1]: sshd@20-172.24.4.162:22-172.24.4.1:48610.service: Deactivated successfully.
Jul 2 00:26:38.549252 systemd[1]: session-23.scope: Deactivated successfully.
Jul 2 00:26:38.552610 systemd-logind[1429]: Session 23 logged out. Waiting for processes to exit.
Jul 2 00:26:38.555628 systemd-logind[1429]: Removed session 23.
Jul 2 00:26:43.563683 systemd[1]: Started sshd@21-172.24.4.162:22-172.24.4.1:48622.service - OpenSSH per-connection server daemon (172.24.4.1:48622).
Jul 2 00:26:44.963516 sshd[4183]: Accepted publickey for core from 172.24.4.1 port 48622 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado
Jul 2 00:26:44.967009 sshd[4183]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:26:44.979041 systemd-logind[1429]: New session 24 of user core.
Jul 2 00:26:44.988434 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 2 00:26:45.624563 sshd[4183]: pam_unix(sshd:session): session closed for user core
Jul 2 00:26:45.636110 systemd[1]: sshd@21-172.24.4.162:22-172.24.4.1:48622.service: Deactivated successfully.
Jul 2 00:26:45.641265 systemd[1]: session-24.scope: Deactivated successfully.
Jul 2 00:26:45.643824 systemd-logind[1429]: Session 24 logged out. Waiting for processes to exit.
Jul 2 00:26:45.652676 systemd[1]: Started sshd@22-172.24.4.162:22-172.24.4.1:59290.service - OpenSSH per-connection server daemon (172.24.4.1:59290).
Jul 2 00:26:45.655736 systemd-logind[1429]: Removed session 24.
Jul 2 00:26:46.985480 sshd[4196]: Accepted publickey for core from 172.24.4.1 port 59290 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado
Jul 2 00:26:46.987179 sshd[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:26:46.992493 systemd-logind[1429]: New session 25 of user core.
Jul 2 00:26:47.001320 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 2 00:26:49.716134 containerd[1446]: time="2024-07-02T00:26:49.716072190Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 00:26:49.730563 containerd[1446]: time="2024-07-02T00:26:49.730501835Z" level=info msg="StopContainer for \"af1c7aae42c53ca5a2a72f1346a4c0b05114a38ba609a4e8d132e67adce1e01b\" with timeout 2 (s)"
Jul 2 00:26:49.730939 containerd[1446]: time="2024-07-02T00:26:49.730891521Z" level=info msg="StopContainer for \"8c19fe01d32548298cdbb443977bc9aebd5b3fb89b933059a26aad2e0cb14b2d\" with timeout 30 (s)"
Jul 2 00:26:49.739939 containerd[1446]: time="2024-07-02T00:26:49.739908785Z" level=info msg="Stop container \"8c19fe01d32548298cdbb443977bc9aebd5b3fb89b933059a26aad2e0cb14b2d\" with signal terminated"
Jul 2 00:26:49.743081 containerd[1446]: time="2024-07-02T00:26:49.741699900Z" level=info msg="Stop container \"af1c7aae42c53ca5a2a72f1346a4c0b05114a38ba609a4e8d132e67adce1e01b\" with signal terminated"
Jul 2 00:26:49.761391 systemd[1]: cri-containerd-8c19fe01d32548298cdbb443977bc9aebd5b3fb89b933059a26aad2e0cb14b2d.scope: Deactivated successfully.
Jul 2 00:26:49.766039 systemd-networkd[1370]: lxc_health: Link DOWN
Jul 2 00:26:49.766082 systemd-networkd[1370]: lxc_health: Lost carrier
Jul 2 00:26:49.784486 systemd[1]: cri-containerd-af1c7aae42c53ca5a2a72f1346a4c0b05114a38ba609a4e8d132e67adce1e01b.scope: Deactivated successfully.
Jul 2 00:26:49.785007 systemd[1]: cri-containerd-af1c7aae42c53ca5a2a72f1346a4c0b05114a38ba609a4e8d132e67adce1e01b.scope: Consumed 8.956s CPU time.
Jul 2 00:26:49.810029 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c19fe01d32548298cdbb443977bc9aebd5b3fb89b933059a26aad2e0cb14b2d-rootfs.mount: Deactivated successfully.
Jul 2 00:26:49.821533 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af1c7aae42c53ca5a2a72f1346a4c0b05114a38ba609a4e8d132e67adce1e01b-rootfs.mount: Deactivated successfully.
Jul 2 00:26:49.827238 containerd[1446]: time="2024-07-02T00:26:49.827128411Z" level=info msg="shim disconnected" id=8c19fe01d32548298cdbb443977bc9aebd5b3fb89b933059a26aad2e0cb14b2d namespace=k8s.io
Jul 2 00:26:49.827407 containerd[1446]: time="2024-07-02T00:26:49.827225236Z" level=warning msg="cleaning up after shim disconnected" id=8c19fe01d32548298cdbb443977bc9aebd5b3fb89b933059a26aad2e0cb14b2d namespace=k8s.io
Jul 2 00:26:49.827445 containerd[1446]: time="2024-07-02T00:26:49.827359312Z" level=info msg="shim disconnected" id=af1c7aae42c53ca5a2a72f1346a4c0b05114a38ba609a4e8d132e67adce1e01b namespace=k8s.io
Jul 2 00:26:49.827484 containerd[1446]: time="2024-07-02T00:26:49.827450827Z" level=warning msg="cleaning up after shim disconnected" id=af1c7aae42c53ca5a2a72f1346a4c0b05114a38ba609a4e8d132e67adce1e01b namespace=k8s.io
Jul 2 00:26:49.827484 containerd[1446]: time="2024-07-02T00:26:49.827461107Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:26:49.830084 containerd[1446]: time="2024-07-02T00:26:49.827603479Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:26:49.869740 containerd[1446]: time="2024-07-02T00:26:49.868374668Z" level=info msg="StopContainer for \"af1c7aae42c53ca5a2a72f1346a4c0b05114a38ba609a4e8d132e67adce1e01b\" returns successfully"
Jul 2 00:26:49.870099 containerd[1446]: time="2024-07-02T00:26:49.870032208Z" level=info msg="StopContainer for \"8c19fe01d32548298cdbb443977bc9aebd5b3fb89b933059a26aad2e0cb14b2d\" returns successfully"
Jul 2 00:26:49.870752 containerd[1446]: time="2024-07-02T00:26:49.870714523Z" level=info msg="StopPodSandbox for \"80d91c1dd3606815c63c3bd11ac2b6c1ae726edabff2ccc009c812334d73c6cd\""
Jul 2 00:26:49.870946 containerd[1446]: time="2024-07-02T00:26:49.870917170Z" level=info msg="StopPodSandbox for \"754c5aa07559e8149b3974c6055fdcc2f487b689f096c869fec4f01d934ba324\""
Jul 2 00:26:49.872416 containerd[1446]: time="2024-07-02T00:26:49.870980030Z" level=info msg="Container to stop \"90d565fa2b42805e1f951579e3828c3e6e1e983390f4b0140c59f034fbb55b58\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:26:49.872416 containerd[1446]: time="2024-07-02T00:26:49.872408943Z" level=info msg="Container to stop \"f15a8b5ba252631a6830b816055db8ea2e580cd07e53af16ff466188c60e7e0a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:26:49.872517 containerd[1446]: time="2024-07-02T00:26:49.872425195Z" level=info msg="Container to stop \"af1c7aae42c53ca5a2a72f1346a4c0b05114a38ba609a4e8d132e67adce1e01b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:26:49.872517 containerd[1446]: time="2024-07-02T00:26:49.872437999Z" level=info msg="Container to stop \"c541a2b4a9a581b1bdf5c80329aad0fc33406f2d321cd1de9a7449959529f7e5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:26:49.872517 containerd[1446]: time="2024-07-02T00:26:49.872449541Z" level=info msg="Container to stop \"e6348944740fc5235d3aa5a0179be6da3022016987ceeeb0c38bf2e1b0e92114\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:26:49.875784 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-754c5aa07559e8149b3974c6055fdcc2f487b689f096c869fec4f01d934ba324-shm.mount: Deactivated successfully.
Jul 2 00:26:49.880091 containerd[1446]: time="2024-07-02T00:26:49.870772634Z" level=info msg="Container to stop \"8c19fe01d32548298cdbb443977bc9aebd5b3fb89b933059a26aad2e0cb14b2d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:26:49.881192 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-80d91c1dd3606815c63c3bd11ac2b6c1ae726edabff2ccc009c812334d73c6cd-shm.mount: Deactivated successfully.
Jul 2 00:26:49.889667 systemd[1]: cri-containerd-754c5aa07559e8149b3974c6055fdcc2f487b689f096c869fec4f01d934ba324.scope: Deactivated successfully.
Jul 2 00:26:49.890759 systemd[1]: cri-containerd-80d91c1dd3606815c63c3bd11ac2b6c1ae726edabff2ccc009c812334d73c6cd.scope: Deactivated successfully.
Jul 2 00:26:49.946357 containerd[1446]: time="2024-07-02T00:26:49.946295792Z" level=info msg="shim disconnected" id=80d91c1dd3606815c63c3bd11ac2b6c1ae726edabff2ccc009c812334d73c6cd namespace=k8s.io
Jul 2 00:26:49.946357 containerd[1446]: time="2024-07-02T00:26:49.946348773Z" level=warning msg="cleaning up after shim disconnected" id=80d91c1dd3606815c63c3bd11ac2b6c1ae726edabff2ccc009c812334d73c6cd namespace=k8s.io
Jul 2 00:26:49.946357 containerd[1446]: time="2024-07-02T00:26:49.946358802Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:26:49.947139 containerd[1446]: time="2024-07-02T00:26:49.946973989Z" level=info msg="shim disconnected" id=754c5aa07559e8149b3974c6055fdcc2f487b689f096c869fec4f01d934ba324 namespace=k8s.io
Jul 2 00:26:49.947191 containerd[1446]: time="2024-07-02T00:26:49.947137882Z" level=warning msg="cleaning up after shim disconnected" id=754c5aa07559e8149b3974c6055fdcc2f487b689f096c869fec4f01d934ba324 namespace=k8s.io
Jul 2 00:26:49.947191 containerd[1446]: time="2024-07-02T00:26:49.947149535Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:26:49.966337 containerd[1446]: time="2024-07-02T00:26:49.966210096Z" level=info msg="TearDown network for sandbox \"80d91c1dd3606815c63c3bd11ac2b6c1ae726edabff2ccc009c812334d73c6cd\" successfully"
Jul 2 00:26:49.966337 containerd[1446]: time="2024-07-02T00:26:49.966247346Z" level=info msg="StopPodSandbox for \"80d91c1dd3606815c63c3bd11ac2b6c1ae726edabff2ccc009c812334d73c6cd\" returns successfully"
Jul 2 00:26:49.967750 containerd[1446]: time="2024-07-02T00:26:49.967708992Z" level=info msg="TearDown network for sandbox \"754c5aa07559e8149b3974c6055fdcc2f487b689f096c869fec4f01d934ba324\" successfully"
Jul 2 00:26:49.967750 containerd[1446]: time="2024-07-02T00:26:49.967735452Z" level=info msg="StopPodSandbox for \"754c5aa07559e8149b3974c6055fdcc2f487b689f096c869fec4f01d934ba324\" returns successfully"
Jul 2 00:26:50.125382 kubelet[2643]: I0702 00:26:50.125323 2643 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb0fbaca-7023-468b-ab48-81ede1d0801c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "cb0fbaca-7023-468b-ab48-81ede1d0801c" (UID: "cb0fbaca-7023-468b-ab48-81ede1d0801c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:26:50.125382 kubelet[2643]: I0702 00:26:50.125332 2643 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cb0fbaca-7023-468b-ab48-81ede1d0801c-cilium-run\") pod \"cb0fbaca-7023-468b-ab48-81ede1d0801c\" (UID: \"cb0fbaca-7023-468b-ab48-81ede1d0801c\") "
Jul 2 00:26:50.126144 kubelet[2643]: I0702 00:26:50.125404 2643 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cb0fbaca-7023-468b-ab48-81ede1d0801c-hostproc\") pod \"cb0fbaca-7023-468b-ab48-81ede1d0801c\" (UID: \"cb0fbaca-7023-468b-ab48-81ede1d0801c\") "
Jul 2 00:26:50.126144 kubelet[2643]: I0702 00:26:50.125430 2643 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cb0fbaca-7023-468b-ab48-81ede1d0801c-host-proc-sys-kernel\") pod \"cb0fbaca-7023-468b-ab48-81ede1d0801c\" (UID: \"cb0fbaca-7023-468b-ab48-81ede1d0801c\") "
Jul 2 00:26:50.126144 kubelet[2643]: I0702 00:26:50.125448 2643 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cb0fbaca-7023-468b-ab48-81ede1d0801c-etc-cni-netd\") pod \"cb0fbaca-7023-468b-ab48-81ede1d0801c\" (UID: \"cb0fbaca-7023-468b-ab48-81ede1d0801c\") "
Jul 2 00:26:50.126144 kubelet[2643]: I0702 00:26:50.125472 2643 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cb0fbaca-7023-468b-ab48-81ede1d0801c-hubble-tls\") pod \"cb0fbaca-7023-468b-ab48-81ede1d0801c\" (UID: \"cb0fbaca-7023-468b-ab48-81ede1d0801c\") "
Jul 2 00:26:50.126144 kubelet[2643]: I0702 00:26:50.125489 2643 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cb0fbaca-7023-468b-ab48-81ede1d0801c-bpf-maps\") pod \"cb0fbaca-7023-468b-ab48-81ede1d0801c\" (UID: \"cb0fbaca-7023-468b-ab48-81ede1d0801c\") "
Jul 2 00:26:50.126144 kubelet[2643]: I0702 00:26:50.125506 2643 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cb0fbaca-7023-468b-ab48-81ede1d0801c-cni-path\") pod \"cb0fbaca-7023-468b-ab48-81ede1d0801c\" (UID: \"cb0fbaca-7023-468b-ab48-81ede1d0801c\") "
Jul 2 00:26:50.127591 kubelet[2643]: I0702 00:26:50.125539 2643 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdtrt\" (UniqueName: \"kubernetes.io/projected/fa81e257-5e09-45cc-8082-d337e7fa37d9-kube-api-access-zdtrt\") pod \"fa81e257-5e09-45cc-8082-d337e7fa37d9\" (UID: \"fa81e257-5e09-45cc-8082-d337e7fa37d9\") "
Jul 2 00:26:50.127591 kubelet[2643]: I0702 00:26:50.125574 2643 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cb0fbaca-7023-468b-ab48-81ede1d0801c-xtables-lock\") pod \"cb0fbaca-7023-468b-ab48-81ede1d0801c\" (UID: \"cb0fbaca-7023-468b-ab48-81ede1d0801c\") "
Jul 2 00:26:50.127591 kubelet[2643]: I0702 00:26:50.125608 2643 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName:
\"kubernetes.io/host-path/cb0fbaca-7023-468b-ab48-81ede1d0801c-lib-modules\") pod \"cb0fbaca-7023-468b-ab48-81ede1d0801c\" (UID: \"cb0fbaca-7023-468b-ab48-81ede1d0801c\") " Jul 2 00:26:50.127591 kubelet[2643]: I0702 00:26:50.125649 2643 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fa81e257-5e09-45cc-8082-d337e7fa37d9-cilium-config-path\") pod \"fa81e257-5e09-45cc-8082-d337e7fa37d9\" (UID: \"fa81e257-5e09-45cc-8082-d337e7fa37d9\") " Jul 2 00:26:50.127591 kubelet[2643]: I0702 00:26:50.125690 2643 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgn5l\" (UniqueName: \"kubernetes.io/projected/cb0fbaca-7023-468b-ab48-81ede1d0801c-kube-api-access-wgn5l\") pod \"cb0fbaca-7023-468b-ab48-81ede1d0801c\" (UID: \"cb0fbaca-7023-468b-ab48-81ede1d0801c\") " Jul 2 00:26:50.127591 kubelet[2643]: I0702 00:26:50.125752 2643 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cb0fbaca-7023-468b-ab48-81ede1d0801c-cilium-config-path\") pod \"cb0fbaca-7023-468b-ab48-81ede1d0801c\" (UID: \"cb0fbaca-7023-468b-ab48-81ede1d0801c\") " Jul 2 00:26:50.127937 kubelet[2643]: I0702 00:26:50.125789 2643 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cb0fbaca-7023-468b-ab48-81ede1d0801c-cilium-cgroup\") pod \"cb0fbaca-7023-468b-ab48-81ede1d0801c\" (UID: \"cb0fbaca-7023-468b-ab48-81ede1d0801c\") " Jul 2 00:26:50.127937 kubelet[2643]: I0702 00:26:50.125826 2643 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cb0fbaca-7023-468b-ab48-81ede1d0801c-host-proc-sys-net\") pod \"cb0fbaca-7023-468b-ab48-81ede1d0801c\" (UID: \"cb0fbaca-7023-468b-ab48-81ede1d0801c\") " Jul 2 00:26:50.127937 kubelet[2643]: I0702 
00:26:50.125872 2643 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cb0fbaca-7023-468b-ab48-81ede1d0801c-clustermesh-secrets\") pod \"cb0fbaca-7023-468b-ab48-81ede1d0801c\" (UID: \"cb0fbaca-7023-468b-ab48-81ede1d0801c\") " Jul 2 00:26:50.127937 kubelet[2643]: I0702 00:26:50.125934 2643 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cb0fbaca-7023-468b-ab48-81ede1d0801c-cilium-run\") on node \"ci-3975-1-1-5-578c77618a.novalocal\" DevicePath \"\"" Jul 2 00:26:50.130099 kubelet[2643]: I0702 00:26:50.129358 2643 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb0fbaca-7023-468b-ab48-81ede1d0801c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "cb0fbaca-7023-468b-ab48-81ede1d0801c" (UID: "cb0fbaca-7023-468b-ab48-81ede1d0801c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:26:50.130099 kubelet[2643]: I0702 00:26:50.129418 2643 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb0fbaca-7023-468b-ab48-81ede1d0801c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "cb0fbaca-7023-468b-ab48-81ede1d0801c" (UID: "cb0fbaca-7023-468b-ab48-81ede1d0801c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:26:50.130099 kubelet[2643]: I0702 00:26:50.129474 2643 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb0fbaca-7023-468b-ab48-81ede1d0801c-hostproc" (OuterVolumeSpecName: "hostproc") pod "cb0fbaca-7023-468b-ab48-81ede1d0801c" (UID: "cb0fbaca-7023-468b-ab48-81ede1d0801c"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:26:50.130099 kubelet[2643]: I0702 00:26:50.129530 2643 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb0fbaca-7023-468b-ab48-81ede1d0801c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "cb0fbaca-7023-468b-ab48-81ede1d0801c" (UID: "cb0fbaca-7023-468b-ab48-81ede1d0801c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:26:50.130099 kubelet[2643]: I0702 00:26:50.129563 2643 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb0fbaca-7023-468b-ab48-81ede1d0801c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "cb0fbaca-7023-468b-ab48-81ede1d0801c" (UID: "cb0fbaca-7023-468b-ab48-81ede1d0801c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:26:50.132255 kubelet[2643]: I0702 00:26:50.132169 2643 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb0fbaca-7023-468b-ab48-81ede1d0801c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "cb0fbaca-7023-468b-ab48-81ede1d0801c" (UID: "cb0fbaca-7023-468b-ab48-81ede1d0801c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:26:50.132579 kubelet[2643]: I0702 00:26:50.132503 2643 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb0fbaca-7023-468b-ab48-81ede1d0801c-cni-path" (OuterVolumeSpecName: "cni-path") pod "cb0fbaca-7023-468b-ab48-81ede1d0801c" (UID: "cb0fbaca-7023-468b-ab48-81ede1d0801c"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:26:50.143284 kubelet[2643]: I0702 00:26:50.143172 2643 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb0fbaca-7023-468b-ab48-81ede1d0801c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "cb0fbaca-7023-468b-ab48-81ede1d0801c" (UID: "cb0fbaca-7023-468b-ab48-81ede1d0801c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:26:50.147110 kubelet[2643]: I0702 00:26:50.145497 2643 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb0fbaca-7023-468b-ab48-81ede1d0801c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "cb0fbaca-7023-468b-ab48-81ede1d0801c" (UID: "cb0fbaca-7023-468b-ab48-81ede1d0801c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:26:50.159869 kubelet[2643]: I0702 00:26:50.159784 2643 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb0fbaca-7023-468b-ab48-81ede1d0801c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cb0fbaca-7023-468b-ab48-81ede1d0801c" (UID: "cb0fbaca-7023-468b-ab48-81ede1d0801c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 00:26:50.161115 kubelet[2643]: I0702 00:26:50.161031 2643 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb0fbaca-7023-468b-ab48-81ede1d0801c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "cb0fbaca-7023-468b-ab48-81ede1d0801c" (UID: "cb0fbaca-7023-468b-ab48-81ede1d0801c"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 00:26:50.161547 kubelet[2643]: I0702 00:26:50.161453 2643 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa81e257-5e09-45cc-8082-d337e7fa37d9-kube-api-access-zdtrt" (OuterVolumeSpecName: "kube-api-access-zdtrt") pod "fa81e257-5e09-45cc-8082-d337e7fa37d9" (UID: "fa81e257-5e09-45cc-8082-d337e7fa37d9"). InnerVolumeSpecName "kube-api-access-zdtrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 00:26:50.162449 kubelet[2643]: I0702 00:26:50.162345 2643 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb0fbaca-7023-468b-ab48-81ede1d0801c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "cb0fbaca-7023-468b-ab48-81ede1d0801c" (UID: "cb0fbaca-7023-468b-ab48-81ede1d0801c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 00:26:50.168470 kubelet[2643]: I0702 00:26:50.168324 2643 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa81e257-5e09-45cc-8082-d337e7fa37d9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fa81e257-5e09-45cc-8082-d337e7fa37d9" (UID: "fa81e257-5e09-45cc-8082-d337e7fa37d9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 00:26:50.177322 kubelet[2643]: I0702 00:26:50.168913 2643 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb0fbaca-7023-468b-ab48-81ede1d0801c-kube-api-access-wgn5l" (OuterVolumeSpecName: "kube-api-access-wgn5l") pod "cb0fbaca-7023-468b-ab48-81ede1d0801c" (UID: "cb0fbaca-7023-468b-ab48-81ede1d0801c"). InnerVolumeSpecName "kube-api-access-wgn5l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 00:26:50.227287 kubelet[2643]: I0702 00:26:50.227134 2643 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cb0fbaca-7023-468b-ab48-81ede1d0801c-host-proc-sys-net\") on node \"ci-3975-1-1-5-578c77618a.novalocal\" DevicePath \"\"" Jul 2 00:26:50.229573 kubelet[2643]: I0702 00:26:50.229154 2643 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cb0fbaca-7023-468b-ab48-81ede1d0801c-clustermesh-secrets\") on node \"ci-3975-1-1-5-578c77618a.novalocal\" DevicePath \"\"" Jul 2 00:26:50.229573 kubelet[2643]: I0702 00:26:50.229206 2643 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cb0fbaca-7023-468b-ab48-81ede1d0801c-hostproc\") on node \"ci-3975-1-1-5-578c77618a.novalocal\" DevicePath \"\"" Jul 2 00:26:50.229573 kubelet[2643]: I0702 00:26:50.229233 2643 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cb0fbaca-7023-468b-ab48-81ede1d0801c-host-proc-sys-kernel\") on node \"ci-3975-1-1-5-578c77618a.novalocal\" DevicePath \"\"" Jul 2 00:26:50.229573 kubelet[2643]: I0702 00:26:50.229258 2643 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cb0fbaca-7023-468b-ab48-81ede1d0801c-etc-cni-netd\") on node \"ci-3975-1-1-5-578c77618a.novalocal\" DevicePath \"\"" Jul 2 00:26:50.229573 kubelet[2643]: I0702 00:26:50.229324 2643 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cb0fbaca-7023-468b-ab48-81ede1d0801c-hubble-tls\") on node \"ci-3975-1-1-5-578c77618a.novalocal\" DevicePath \"\"" Jul 2 00:26:50.229573 kubelet[2643]: I0702 00:26:50.229347 2643 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/cb0fbaca-7023-468b-ab48-81ede1d0801c-bpf-maps\") on node \"ci-3975-1-1-5-578c77618a.novalocal\" DevicePath \"\"" Jul 2 00:26:50.229573 kubelet[2643]: I0702 00:26:50.229370 2643 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cb0fbaca-7023-468b-ab48-81ede1d0801c-cni-path\") on node \"ci-3975-1-1-5-578c77618a.novalocal\" DevicePath \"\"" Jul 2 00:26:50.230148 kubelet[2643]: I0702 00:26:50.229393 2643 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-zdtrt\" (UniqueName: \"kubernetes.io/projected/fa81e257-5e09-45cc-8082-d337e7fa37d9-kube-api-access-zdtrt\") on node \"ci-3975-1-1-5-578c77618a.novalocal\" DevicePath \"\"" Jul 2 00:26:50.230148 kubelet[2643]: I0702 00:26:50.229415 2643 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cb0fbaca-7023-468b-ab48-81ede1d0801c-xtables-lock\") on node \"ci-3975-1-1-5-578c77618a.novalocal\" DevicePath \"\"" Jul 2 00:26:50.230148 kubelet[2643]: I0702 00:26:50.229437 2643 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cb0fbaca-7023-468b-ab48-81ede1d0801c-lib-modules\") on node \"ci-3975-1-1-5-578c77618a.novalocal\" DevicePath \"\"" Jul 2 00:26:50.230148 kubelet[2643]: I0702 00:26:50.229460 2643 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fa81e257-5e09-45cc-8082-d337e7fa37d9-cilium-config-path\") on node \"ci-3975-1-1-5-578c77618a.novalocal\" DevicePath \"\"" Jul 2 00:26:50.230148 kubelet[2643]: I0702 00:26:50.229484 2643 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-wgn5l\" (UniqueName: \"kubernetes.io/projected/cb0fbaca-7023-468b-ab48-81ede1d0801c-kube-api-access-wgn5l\") on node \"ci-3975-1-1-5-578c77618a.novalocal\" DevicePath \"\"" Jul 2 00:26:50.230148 kubelet[2643]: I0702 00:26:50.229507 2643 
reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cb0fbaca-7023-468b-ab48-81ede1d0801c-cilium-config-path\") on node \"ci-3975-1-1-5-578c77618a.novalocal\" DevicePath \"\"" Jul 2 00:26:50.230148 kubelet[2643]: I0702 00:26:50.229534 2643 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cb0fbaca-7023-468b-ab48-81ede1d0801c-cilium-cgroup\") on node \"ci-3975-1-1-5-578c77618a.novalocal\" DevicePath \"\"" Jul 2 00:26:50.552700 kubelet[2643]: E0702 00:26:50.552468 2643 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 00:26:50.605596 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-80d91c1dd3606815c63c3bd11ac2b6c1ae726edabff2ccc009c812334d73c6cd-rootfs.mount: Deactivated successfully. Jul 2 00:26:50.605844 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-754c5aa07559e8149b3974c6055fdcc2f487b689f096c869fec4f01d934ba324-rootfs.mount: Deactivated successfully. Jul 2 00:26:50.606023 systemd[1]: var-lib-kubelet-pods-fa81e257\x2d5e09\x2d45cc\x2d8082\x2dd337e7fa37d9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzdtrt.mount: Deactivated successfully. Jul 2 00:26:50.606292 systemd[1]: var-lib-kubelet-pods-cb0fbaca\x2d7023\x2d468b\x2dab48\x2d81ede1d0801c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwgn5l.mount: Deactivated successfully. Jul 2 00:26:50.606504 systemd[1]: var-lib-kubelet-pods-cb0fbaca\x2d7023\x2d468b\x2dab48\x2d81ede1d0801c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 00:26:50.606654 systemd[1]: var-lib-kubelet-pods-cb0fbaca\x2d7023\x2d468b\x2dab48\x2d81ede1d0801c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jul 2 00:26:50.928622 kubelet[2643]: I0702 00:26:50.928224 2643 scope.go:117] "RemoveContainer" containerID="8c19fe01d32548298cdbb443977bc9aebd5b3fb89b933059a26aad2e0cb14b2d" Jul 2 00:26:50.934648 containerd[1446]: time="2024-07-02T00:26:50.934468276Z" level=info msg="RemoveContainer for \"8c19fe01d32548298cdbb443977bc9aebd5b3fb89b933059a26aad2e0cb14b2d\"" Jul 2 00:26:50.953976 systemd[1]: Removed slice kubepods-besteffort-podfa81e257_5e09_45cc_8082_d337e7fa37d9.slice - libcontainer container kubepods-besteffort-podfa81e257_5e09_45cc_8082_d337e7fa37d9.slice. Jul 2 00:26:50.968548 containerd[1446]: time="2024-07-02T00:26:50.968300580Z" level=info msg="RemoveContainer for \"8c19fe01d32548298cdbb443977bc9aebd5b3fb89b933059a26aad2e0cb14b2d\" returns successfully" Jul 2 00:26:50.997428 kubelet[2643]: I0702 00:26:50.997330 2643 scope.go:117] "RemoveContainer" containerID="af1c7aae42c53ca5a2a72f1346a4c0b05114a38ba609a4e8d132e67adce1e01b" Jul 2 00:26:51.000913 systemd[1]: Removed slice kubepods-burstable-podcb0fbaca_7023_468b_ab48_81ede1d0801c.slice - libcontainer container kubepods-burstable-podcb0fbaca_7023_468b_ab48_81ede1d0801c.slice. Jul 2 00:26:51.001487 systemd[1]: kubepods-burstable-podcb0fbaca_7023_468b_ab48_81ede1d0801c.slice: Consumed 9.041s CPU time. 
Jul 2 00:26:51.017358 containerd[1446]: time="2024-07-02T00:26:51.016466240Z" level=info msg="RemoveContainer for \"af1c7aae42c53ca5a2a72f1346a4c0b05114a38ba609a4e8d132e67adce1e01b\"" Jul 2 00:26:51.060536 containerd[1446]: time="2024-07-02T00:26:51.060448471Z" level=info msg="RemoveContainer for \"af1c7aae42c53ca5a2a72f1346a4c0b05114a38ba609a4e8d132e67adce1e01b\" returns successfully" Jul 2 00:26:51.060858 kubelet[2643]: I0702 00:26:51.060799 2643 scope.go:117] "RemoveContainer" containerID="e6348944740fc5235d3aa5a0179be6da3022016987ceeeb0c38bf2e1b0e92114" Jul 2 00:26:51.062955 containerd[1446]: time="2024-07-02T00:26:51.062855856Z" level=info msg="RemoveContainer for \"e6348944740fc5235d3aa5a0179be6da3022016987ceeeb0c38bf2e1b0e92114\"" Jul 2 00:26:51.093932 containerd[1446]: time="2024-07-02T00:26:51.093862249Z" level=info msg="RemoveContainer for \"e6348944740fc5235d3aa5a0179be6da3022016987ceeeb0c38bf2e1b0e92114\" returns successfully" Jul 2 00:26:51.094647 kubelet[2643]: I0702 00:26:51.094558 2643 scope.go:117] "RemoveContainer" containerID="f15a8b5ba252631a6830b816055db8ea2e580cd07e53af16ff466188c60e7e0a" Jul 2 00:26:51.098492 containerd[1446]: time="2024-07-02T00:26:51.098415739Z" level=info msg="RemoveContainer for \"f15a8b5ba252631a6830b816055db8ea2e580cd07e53af16ff466188c60e7e0a\"" Jul 2 00:26:51.105227 containerd[1446]: time="2024-07-02T00:26:51.104669440Z" level=info msg="RemoveContainer for \"f15a8b5ba252631a6830b816055db8ea2e580cd07e53af16ff466188c60e7e0a\" returns successfully" Jul 2 00:26:51.106270 kubelet[2643]: I0702 00:26:51.104970 2643 scope.go:117] "RemoveContainer" containerID="c541a2b4a9a581b1bdf5c80329aad0fc33406f2d321cd1de9a7449959529f7e5" Jul 2 00:26:51.115528 containerd[1446]: time="2024-07-02T00:26:51.115286246Z" level=info msg="RemoveContainer for \"c541a2b4a9a581b1bdf5c80329aad0fc33406f2d321cd1de9a7449959529f7e5\"" Jul 2 00:26:51.125206 containerd[1446]: time="2024-07-02T00:26:51.125098813Z" level=info msg="RemoveContainer for 
\"c541a2b4a9a581b1bdf5c80329aad0fc33406f2d321cd1de9a7449959529f7e5\" returns successfully" Jul 2 00:26:51.128812 kubelet[2643]: I0702 00:26:51.127328 2643 scope.go:117] "RemoveContainer" containerID="90d565fa2b42805e1f951579e3828c3e6e1e983390f4b0140c59f034fbb55b58" Jul 2 00:26:51.135858 containerd[1446]: time="2024-07-02T00:26:51.134810018Z" level=info msg="RemoveContainer for \"90d565fa2b42805e1f951579e3828c3e6e1e983390f4b0140c59f034fbb55b58\"" Jul 2 00:26:51.143872 containerd[1446]: time="2024-07-02T00:26:51.143788010Z" level=info msg="RemoveContainer for \"90d565fa2b42805e1f951579e3828c3e6e1e983390f4b0140c59f034fbb55b58\" returns successfully" Jul 2 00:26:51.416972 kubelet[2643]: I0702 00:26:51.416900 2643 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb0fbaca-7023-468b-ab48-81ede1d0801c" path="/var/lib/kubelet/pods/cb0fbaca-7023-468b-ab48-81ede1d0801c/volumes" Jul 2 00:26:51.418492 kubelet[2643]: I0702 00:26:51.418409 2643 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa81e257-5e09-45cc-8082-d337e7fa37d9" path="/var/lib/kubelet/pods/fa81e257-5e09-45cc-8082-d337e7fa37d9/volumes" Jul 2 00:26:51.682032 sshd[4196]: pam_unix(sshd:session): session closed for user core Jul 2 00:26:51.692020 systemd[1]: sshd@22-172.24.4.162:22-172.24.4.1:59290.service: Deactivated successfully. Jul 2 00:26:51.697365 systemd[1]: session-25.scope: Deactivated successfully. Jul 2 00:26:51.698009 systemd[1]: session-25.scope: Consumed 1.299s CPU time. Jul 2 00:26:51.699723 systemd-logind[1429]: Session 25 logged out. Waiting for processes to exit. Jul 2 00:26:51.707643 systemd[1]: Started sshd@23-172.24.4.162:22-172.24.4.1:59306.service - OpenSSH per-connection server daemon (172.24.4.1:59306). Jul 2 00:26:51.710443 systemd-logind[1429]: Removed session 25. 
Jul 2 00:26:53.086308 sshd[4361]: Accepted publickey for core from 172.24.4.1 port 59306 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado Jul 2 00:26:53.089039 sshd[4361]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:26:53.099546 systemd-logind[1429]: New session 26 of user core. Jul 2 00:26:53.111366 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 2 00:26:54.409231 kubelet[2643]: E0702 00:26:54.407681 2643 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-qcz6x" podUID="dff67a27-f4dd-4eeb-9976-c78a8785cd57" Jul 2 00:26:54.562105 kubelet[2643]: I0702 00:26:54.560464 2643 topology_manager.go:215] "Topology Admit Handler" podUID="44d1c5ee-d652-463a-a812-121816b29c58" podNamespace="kube-system" podName="cilium-874dq" Jul 2 00:26:54.562105 kubelet[2643]: E0702 00:26:54.560572 2643 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cb0fbaca-7023-468b-ab48-81ede1d0801c" containerName="apply-sysctl-overwrites" Jul 2 00:26:54.562105 kubelet[2643]: E0702 00:26:54.560594 2643 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fa81e257-5e09-45cc-8082-d337e7fa37d9" containerName="cilium-operator" Jul 2 00:26:54.562105 kubelet[2643]: E0702 00:26:54.560610 2643 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cb0fbaca-7023-468b-ab48-81ede1d0801c" containerName="clean-cilium-state" Jul 2 00:26:54.562105 kubelet[2643]: E0702 00:26:54.560624 2643 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cb0fbaca-7023-468b-ab48-81ede1d0801c" containerName="cilium-agent" Jul 2 00:26:54.562105 kubelet[2643]: E0702 00:26:54.560641 2643 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cb0fbaca-7023-468b-ab48-81ede1d0801c" 
containerName="mount-cgroup" Jul 2 00:26:54.562105 kubelet[2643]: E0702 00:26:54.560730 2643 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cb0fbaca-7023-468b-ab48-81ede1d0801c" containerName="mount-bpf-fs" Jul 2 00:26:54.562105 kubelet[2643]: I0702 00:26:54.560792 2643 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb0fbaca-7023-468b-ab48-81ede1d0801c" containerName="cilium-agent" Jul 2 00:26:54.562105 kubelet[2643]: I0702 00:26:54.560808 2643 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa81e257-5e09-45cc-8082-d337e7fa37d9" containerName="cilium-operator" Jul 2 00:26:54.576417 systemd[1]: Created slice kubepods-burstable-pod44d1c5ee_d652_463a_a812_121816b29c58.slice - libcontainer container kubepods-burstable-pod44d1c5ee_d652_463a_a812_121816b29c58.slice. Jul 2 00:26:54.581449 kubelet[2643]: W0702 00:26:54.581393 2643 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3975-1-1-5-578c77618a.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3975-1-1-5-578c77618a.novalocal' and this object Jul 2 00:26:54.581449 kubelet[2643]: E0702 00:26:54.581424 2643 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3975-1-1-5-578c77618a.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3975-1-1-5-578c77618a.novalocal' and this object Jul 2 00:26:54.581740 kubelet[2643]: W0702 00:26:54.581702 2643 reflector.go:547] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3975-1-1-5-578c77618a.novalocal" cannot list resource "secrets" in API group "" in the 
namespace "kube-system": no relationship found between node 'ci-3975-1-1-5-578c77618a.novalocal' and this object Jul 2 00:26:54.581740 kubelet[2643]: E0702 00:26:54.581721 2643 reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3975-1-1-5-578c77618a.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3975-1-1-5-578c77618a.novalocal' and this object Jul 2 00:26:54.582801 kubelet[2643]: W0702 00:26:54.582754 2643 reflector.go:547] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-3975-1-1-5-578c77618a.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3975-1-1-5-578c77618a.novalocal' and this object Jul 2 00:26:54.582801 kubelet[2643]: E0702 00:26:54.582772 2643 reflector.go:150] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-3975-1-1-5-578c77618a.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3975-1-1-5-578c77618a.novalocal' and this object Jul 2 00:26:54.583168 kubelet[2643]: W0702 00:26:54.583154 2643 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3975-1-1-5-578c77618a.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3975-1-1-5-578c77618a.novalocal' and this object Jul 2 00:26:54.583248 kubelet[2643]: E0702 00:26:54.583238 2643 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list 
*v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3975-1-1-5-578c77618a.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3975-1-1-5-578c77618a.novalocal' and this object Jul 2 00:26:54.661341 kubelet[2643]: I0702 00:26:54.661240 2643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/44d1c5ee-d652-463a-a812-121816b29c58-etc-cni-netd\") pod \"cilium-874dq\" (UID: \"44d1c5ee-d652-463a-a812-121816b29c58\") " pod="kube-system/cilium-874dq" Jul 2 00:26:54.661512 kubelet[2643]: I0702 00:26:54.661498 2643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44d1c5ee-d652-463a-a812-121816b29c58-lib-modules\") pod \"cilium-874dq\" (UID: \"44d1c5ee-d652-463a-a812-121816b29c58\") " pod="kube-system/cilium-874dq" Jul 2 00:26:54.661644 kubelet[2643]: I0702 00:26:54.661622 2643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/44d1c5ee-d652-463a-a812-121816b29c58-clustermesh-secrets\") pod \"cilium-874dq\" (UID: \"44d1c5ee-d652-463a-a812-121816b29c58\") " pod="kube-system/cilium-874dq" Jul 2 00:26:54.661759 kubelet[2643]: I0702 00:26:54.661746 2643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/44d1c5ee-d652-463a-a812-121816b29c58-hubble-tls\") pod \"cilium-874dq\" (UID: \"44d1c5ee-d652-463a-a812-121816b29c58\") " pod="kube-system/cilium-874dq" Jul 2 00:26:54.661875 kubelet[2643]: I0702 00:26:54.661863 2643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/44d1c5ee-d652-463a-a812-121816b29c58-cni-path\") pod \"cilium-874dq\" (UID: \"44d1c5ee-d652-463a-a812-121816b29c58\") " pod="kube-system/cilium-874dq" Jul 2 00:26:54.662027 kubelet[2643]: I0702 00:26:54.661994 2643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44d1c5ee-d652-463a-a812-121816b29c58-xtables-lock\") pod \"cilium-874dq\" (UID: \"44d1c5ee-d652-463a-a812-121816b29c58\") " pod="kube-system/cilium-874dq" Jul 2 00:26:54.662258 kubelet[2643]: I0702 00:26:54.662221 2643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/44d1c5ee-d652-463a-a812-121816b29c58-cilium-run\") pod \"cilium-874dq\" (UID: \"44d1c5ee-d652-463a-a812-121816b29c58\") " pod="kube-system/cilium-874dq" Jul 2 00:26:54.662442 kubelet[2643]: I0702 00:26:54.662389 2643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/44d1c5ee-d652-463a-a812-121816b29c58-bpf-maps\") pod \"cilium-874dq\" (UID: \"44d1c5ee-d652-463a-a812-121816b29c58\") " pod="kube-system/cilium-874dq" Jul 2 00:26:54.662442 kubelet[2643]: I0702 00:26:54.662414 2643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/44d1c5ee-d652-463a-a812-121816b29c58-host-proc-sys-net\") pod \"cilium-874dq\" (UID: \"44d1c5ee-d652-463a-a812-121816b29c58\") " pod="kube-system/cilium-874dq" Jul 2 00:26:54.662646 kubelet[2643]: I0702 00:26:54.662610 2643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/44d1c5ee-d652-463a-a812-121816b29c58-hostproc\") pod \"cilium-874dq\" (UID: \"44d1c5ee-d652-463a-a812-121816b29c58\") " 
pod="kube-system/cilium-874dq" Jul 2 00:26:54.662793 kubelet[2643]: I0702 00:26:54.662746 2643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/44d1c5ee-d652-463a-a812-121816b29c58-cilium-cgroup\") pod \"cilium-874dq\" (UID: \"44d1c5ee-d652-463a-a812-121816b29c58\") " pod="kube-system/cilium-874dq" Jul 2 00:26:54.662793 kubelet[2643]: I0702 00:26:54.662771 2643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/44d1c5ee-d652-463a-a812-121816b29c58-cilium-config-path\") pod \"cilium-874dq\" (UID: \"44d1c5ee-d652-463a-a812-121816b29c58\") " pod="kube-system/cilium-874dq" Jul 2 00:26:54.663035 kubelet[2643]: I0702 00:26:54.662904 2643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/44d1c5ee-d652-463a-a812-121816b29c58-cilium-ipsec-secrets\") pod \"cilium-874dq\" (UID: \"44d1c5ee-d652-463a-a812-121816b29c58\") " pod="kube-system/cilium-874dq" Jul 2 00:26:54.663035 kubelet[2643]: I0702 00:26:54.662930 2643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/44d1c5ee-d652-463a-a812-121816b29c58-host-proc-sys-kernel\") pod \"cilium-874dq\" (UID: \"44d1c5ee-d652-463a-a812-121816b29c58\") " pod="kube-system/cilium-874dq" Jul 2 00:26:54.663035 kubelet[2643]: I0702 00:26:54.662990 2643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf6qc\" (UniqueName: \"kubernetes.io/projected/44d1c5ee-d652-463a-a812-121816b29c58-kube-api-access-tf6qc\") pod \"cilium-874dq\" (UID: \"44d1c5ee-d652-463a-a812-121816b29c58\") " pod="kube-system/cilium-874dq" Jul 2 00:26:54.686143 sshd[4361]: pam_unix(sshd:session): 
session closed for user core Jul 2 00:26:54.694531 systemd[1]: sshd@23-172.24.4.162:22-172.24.4.1:59306.service: Deactivated successfully. Jul 2 00:26:54.696895 systemd[1]: session-26.scope: Deactivated successfully. Jul 2 00:26:54.697788 systemd-logind[1429]: Session 26 logged out. Waiting for processes to exit. Jul 2 00:26:54.704691 systemd[1]: Started sshd@24-172.24.4.162:22-172.24.4.1:41534.service - OpenSSH per-connection server daemon (172.24.4.1:41534). Jul 2 00:26:54.710458 systemd-logind[1429]: Removed session 26. Jul 2 00:26:55.409112 kubelet[2643]: E0702 00:26:55.408442 2643 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-hbfpg" podUID="2a39ff84-d35f-4c60-bb00-80c8f5aaf18b" Jul 2 00:26:55.554154 kubelet[2643]: E0702 00:26:55.554087 2643 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 00:26:55.767532 kubelet[2643]: E0702 00:26:55.766338 2643 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Jul 2 00:26:55.767532 kubelet[2643]: E0702 00:26:55.766559 2643 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44d1c5ee-d652-463a-a812-121816b29c58-clustermesh-secrets podName:44d1c5ee-d652-463a-a812-121816b29c58 nodeName:}" failed. No retries permitted until 2024-07-02 00:26:56.266513643 +0000 UTC m=+141.174004993 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/44d1c5ee-d652-463a-a812-121816b29c58-clustermesh-secrets") pod "cilium-874dq" (UID: "44d1c5ee-d652-463a-a812-121816b29c58") : failed to sync secret cache: timed out waiting for the condition Jul 2 00:26:55.767532 kubelet[2643]: E0702 00:26:55.766612 2643 secret.go:194] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Jul 2 00:26:55.767532 kubelet[2643]: E0702 00:26:55.766688 2643 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44d1c5ee-d652-463a-a812-121816b29c58-cilium-ipsec-secrets podName:44d1c5ee-d652-463a-a812-121816b29c58 nodeName:}" failed. No retries permitted until 2024-07-02 00:26:56.266667567 +0000 UTC m=+141.174158918 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/44d1c5ee-d652-463a-a812-121816b29c58-cilium-ipsec-secrets") pod "cilium-874dq" (UID: "44d1c5ee-d652-463a-a812-121816b29c58") : failed to sync secret cache: timed out waiting for the condition Jul 2 00:26:55.767532 kubelet[2643]: E0702 00:26:55.767169 2643 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jul 2 00:26:55.768161 kubelet[2643]: E0702 00:26:55.767322 2643 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/44d1c5ee-d652-463a-a812-121816b29c58-cilium-config-path podName:44d1c5ee-d652-463a-a812-121816b29c58 nodeName:}" failed. No retries permitted until 2024-07-02 00:26:56.26729675 +0000 UTC m=+141.174788102 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/44d1c5ee-d652-463a-a812-121816b29c58-cilium-config-path") pod "cilium-874dq" (UID: "44d1c5ee-d652-463a-a812-121816b29c58") : failed to sync configmap cache: timed out waiting for the condition Jul 2 00:26:55.769234 kubelet[2643]: E0702 00:26:55.768411 2643 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Jul 2 00:26:55.769234 kubelet[2643]: E0702 00:26:55.768492 2643 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-874dq: failed to sync secret cache: timed out waiting for the condition Jul 2 00:26:55.769234 kubelet[2643]: E0702 00:26:55.768622 2643 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/44d1c5ee-d652-463a-a812-121816b29c58-hubble-tls podName:44d1c5ee-d652-463a-a812-121816b29c58 nodeName:}" failed. No retries permitted until 2024-07-02 00:26:56.268584064 +0000 UTC m=+141.176075414 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/44d1c5ee-d652-463a-a812-121816b29c58-hubble-tls") pod "cilium-874dq" (UID: "44d1c5ee-d652-463a-a812-121816b29c58") : failed to sync secret cache: timed out waiting for the condition Jul 2 00:26:56.173288 sshd[4373]: Accepted publickey for core from 172.24.4.1 port 41534 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado Jul 2 00:26:56.178036 sshd[4373]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:26:56.193007 systemd-logind[1429]: New session 27 of user core. Jul 2 00:26:56.208397 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jul 2 00:26:56.382544 containerd[1446]: time="2024-07-02T00:26:56.381872059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-874dq,Uid:44d1c5ee-d652-463a-a812-121816b29c58,Namespace:kube-system,Attempt:0,}" Jul 2 00:26:56.408883 kubelet[2643]: E0702 00:26:56.407604 2643 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-qcz6x" podUID="dff67a27-f4dd-4eeb-9976-c78a8785cd57" Jul 2 00:26:56.440968 containerd[1446]: time="2024-07-02T00:26:56.440146762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:26:56.440968 containerd[1446]: time="2024-07-02T00:26:56.440290788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:26:56.440968 containerd[1446]: time="2024-07-02T00:26:56.440353698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:26:56.440968 containerd[1446]: time="2024-07-02T00:26:56.440402031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:26:56.482293 systemd[1]: Started cri-containerd-976f6e16779d3109f9167c665f2d2846aa8feb28ab61380c7ad3e8a2bc1db358.scope - libcontainer container 976f6e16779d3109f9167c665f2d2846aa8feb28ab61380c7ad3e8a2bc1db358. 
Jul 2 00:26:56.513636 containerd[1446]: time="2024-07-02T00:26:56.513583796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-874dq,Uid:44d1c5ee-d652-463a-a812-121816b29c58,Namespace:kube-system,Attempt:0,} returns sandbox id \"976f6e16779d3109f9167c665f2d2846aa8feb28ab61380c7ad3e8a2bc1db358\"" Jul 2 00:26:56.521288 containerd[1446]: time="2024-07-02T00:26:56.521181782Z" level=info msg="CreateContainer within sandbox \"976f6e16779d3109f9167c665f2d2846aa8feb28ab61380c7ad3e8a2bc1db358\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 00:26:56.537472 containerd[1446]: time="2024-07-02T00:26:56.537369673Z" level=info msg="CreateContainer within sandbox \"976f6e16779d3109f9167c665f2d2846aa8feb28ab61380c7ad3e8a2bc1db358\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f92e05db784419c4dc879e55ed69a588c6574e7bc9d18708b2cae71864fc5629\"" Jul 2 00:26:56.539943 containerd[1446]: time="2024-07-02T00:26:56.538670222Z" level=info msg="StartContainer for \"f92e05db784419c4dc879e55ed69a588c6574e7bc9d18708b2cae71864fc5629\"" Jul 2 00:26:56.587259 systemd[1]: Started cri-containerd-f92e05db784419c4dc879e55ed69a588c6574e7bc9d18708b2cae71864fc5629.scope - libcontainer container f92e05db784419c4dc879e55ed69a588c6574e7bc9d18708b2cae71864fc5629. Jul 2 00:26:56.623749 containerd[1446]: time="2024-07-02T00:26:56.623703702Z" level=info msg="StartContainer for \"f92e05db784419c4dc879e55ed69a588c6574e7bc9d18708b2cae71864fc5629\" returns successfully" Jul 2 00:26:56.643986 systemd[1]: cri-containerd-f92e05db784419c4dc879e55ed69a588c6574e7bc9d18708b2cae71864fc5629.scope: Deactivated successfully. 
Jul 2 00:26:56.699596 containerd[1446]: time="2024-07-02T00:26:56.699380350Z" level=info msg="shim disconnected" id=f92e05db784419c4dc879e55ed69a588c6574e7bc9d18708b2cae71864fc5629 namespace=k8s.io Jul 2 00:26:56.699596 containerd[1446]: time="2024-07-02T00:26:56.699455534Z" level=warning msg="cleaning up after shim disconnected" id=f92e05db784419c4dc879e55ed69a588c6574e7bc9d18708b2cae71864fc5629 namespace=k8s.io Jul 2 00:26:56.699596 containerd[1446]: time="2024-07-02T00:26:56.699468568Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:26:56.981349 sshd[4373]: pam_unix(sshd:session): session closed for user core Jul 2 00:26:57.000805 systemd[1]: sshd@24-172.24.4.162:22-172.24.4.1:41534.service: Deactivated successfully. Jul 2 00:26:57.010719 systemd[1]: session-27.scope: Deactivated successfully. Jul 2 00:26:57.012674 containerd[1446]: time="2024-07-02T00:26:57.012122324Z" level=info msg="CreateContainer within sandbox \"976f6e16779d3109f9167c665f2d2846aa8feb28ab61380c7ad3e8a2bc1db358\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 00:26:57.017308 systemd-logind[1429]: Session 27 logged out. Waiting for processes to exit. Jul 2 00:26:57.027678 systemd[1]: Started sshd@25-172.24.4.162:22-172.24.4.1:41538.service - OpenSSH per-connection server daemon (172.24.4.1:41538). Jul 2 00:26:57.039252 systemd-logind[1429]: Removed session 27. 
Jul 2 00:26:57.053891 containerd[1446]: time="2024-07-02T00:26:57.053798806Z" level=info msg="CreateContainer within sandbox \"976f6e16779d3109f9167c665f2d2846aa8feb28ab61380c7ad3e8a2bc1db358\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2b385dacecf8b0a429124eb866877bda1a4f621328ea137b9711bfe676cbf617\"" Jul 2 00:26:57.058113 containerd[1446]: time="2024-07-02T00:26:57.057418943Z" level=info msg="StartContainer for \"2b385dacecf8b0a429124eb866877bda1a4f621328ea137b9711bfe676cbf617\"" Jul 2 00:26:57.096400 systemd[1]: Started cri-containerd-2b385dacecf8b0a429124eb866877bda1a4f621328ea137b9711bfe676cbf617.scope - libcontainer container 2b385dacecf8b0a429124eb866877bda1a4f621328ea137b9711bfe676cbf617. Jul 2 00:26:57.135509 containerd[1446]: time="2024-07-02T00:26:57.135441928Z" level=info msg="StartContainer for \"2b385dacecf8b0a429124eb866877bda1a4f621328ea137b9711bfe676cbf617\" returns successfully" Jul 2 00:26:57.147271 systemd[1]: cri-containerd-2b385dacecf8b0a429124eb866877bda1a4f621328ea137b9711bfe676cbf617.scope: Deactivated successfully. 
Jul 2 00:26:57.179959 containerd[1446]: time="2024-07-02T00:26:57.179869915Z" level=info msg="shim disconnected" id=2b385dacecf8b0a429124eb866877bda1a4f621328ea137b9711bfe676cbf617 namespace=k8s.io Jul 2 00:26:57.179959 containerd[1446]: time="2024-07-02T00:26:57.179957923Z" level=warning msg="cleaning up after shim disconnected" id=2b385dacecf8b0a429124eb866877bda1a4f621328ea137b9711bfe676cbf617 namespace=k8s.io Jul 2 00:26:57.179959 containerd[1446]: time="2024-07-02T00:26:57.179970818Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:26:57.195425 containerd[1446]: time="2024-07-02T00:26:57.195372036Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:26:57Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 2 00:26:57.409294 kubelet[2643]: E0702 00:26:57.407396 2643 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-hbfpg" podUID="2a39ff84-d35f-4c60-bb00-80c8f5aaf18b" Jul 2 00:26:58.015871 containerd[1446]: time="2024-07-02T00:26:58.015692814Z" level=info msg="CreateContainer within sandbox \"976f6e16779d3109f9167c665f2d2846aa8feb28ab61380c7ad3e8a2bc1db358\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 00:26:58.094896 containerd[1446]: time="2024-07-02T00:26:58.094853425Z" level=info msg="CreateContainer within sandbox \"976f6e16779d3109f9167c665f2d2846aa8feb28ab61380c7ad3e8a2bc1db358\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"91ccf4e1ebfd26a3c557dfa9820f1291adca39b1ac038db5d58df6d78ae3f0f6\"" Jul 2 00:26:58.097167 containerd[1446]: time="2024-07-02T00:26:58.097131635Z" level=info msg="StartContainer for 
\"91ccf4e1ebfd26a3c557dfa9820f1291adca39b1ac038db5d58df6d78ae3f0f6\"" Jul 2 00:26:58.149279 systemd[1]: Started cri-containerd-91ccf4e1ebfd26a3c557dfa9820f1291adca39b1ac038db5d58df6d78ae3f0f6.scope - libcontainer container 91ccf4e1ebfd26a3c557dfa9820f1291adca39b1ac038db5d58df6d78ae3f0f6. Jul 2 00:26:58.218193 containerd[1446]: time="2024-07-02T00:26:58.217945207Z" level=info msg="StartContainer for \"91ccf4e1ebfd26a3c557dfa9820f1291adca39b1ac038db5d58df6d78ae3f0f6\" returns successfully" Jul 2 00:26:58.227315 systemd[1]: cri-containerd-91ccf4e1ebfd26a3c557dfa9820f1291adca39b1ac038db5d58df6d78ae3f0f6.scope: Deactivated successfully. Jul 2 00:26:58.262521 containerd[1446]: time="2024-07-02T00:26:58.262401252Z" level=info msg="shim disconnected" id=91ccf4e1ebfd26a3c557dfa9820f1291adca39b1ac038db5d58df6d78ae3f0f6 namespace=k8s.io Jul 2 00:26:58.262831 containerd[1446]: time="2024-07-02T00:26:58.262611314Z" level=warning msg="cleaning up after shim disconnected" id=91ccf4e1ebfd26a3c557dfa9820f1291adca39b1ac038db5d58df6d78ae3f0f6 namespace=k8s.io Jul 2 00:26:58.262831 containerd[1446]: time="2024-07-02T00:26:58.262627596Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:26:58.286475 systemd[1]: run-containerd-runc-k8s.io-91ccf4e1ebfd26a3c557dfa9820f1291adca39b1ac038db5d58df6d78ae3f0f6-runc.gsgGqW.mount: Deactivated successfully. Jul 2 00:26:58.286591 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-91ccf4e1ebfd26a3c557dfa9820f1291adca39b1ac038db5d58df6d78ae3f0f6-rootfs.mount: Deactivated successfully. Jul 2 00:26:58.330903 sshd[4486]: Accepted publickey for core from 172.24.4.1 port 41538 ssh2: RSA SHA256:PCQlpQPF3MQUBJB7FGXO+NVNbcy5KrkTt8QQEvm9Ado Jul 2 00:26:58.333231 sshd[4486]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:26:58.339305 systemd-logind[1429]: New session 28 of user core. Jul 2 00:26:58.346259 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jul 2 00:26:58.350877 kubelet[2643]: I0702 00:26:58.350648 2643 setters.go:580] "Node became not ready" node="ci-3975-1-1-5-578c77618a.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-07-02T00:26:58Z","lastTransitionTime":"2024-07-02T00:26:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 2 00:26:58.408025 kubelet[2643]: E0702 00:26:58.407851 2643 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-qcz6x" podUID="dff67a27-f4dd-4eeb-9976-c78a8785cd57" Jul 2 00:26:59.028144 containerd[1446]: time="2024-07-02T00:26:59.027990693Z" level=info msg="CreateContainer within sandbox \"976f6e16779d3109f9167c665f2d2846aa8feb28ab61380c7ad3e8a2bc1db358\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 00:26:59.059218 containerd[1446]: time="2024-07-02T00:26:59.059173844Z" level=info msg="CreateContainer within sandbox \"976f6e16779d3109f9167c665f2d2846aa8feb28ab61380c7ad3e8a2bc1db358\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6433e0b383087548bf0f3c0414963711faff1caa8af8d754db988384cac2cef3\"" Jul 2 00:26:59.059882 containerd[1446]: time="2024-07-02T00:26:59.059807186Z" level=info msg="StartContainer for \"6433e0b383087548bf0f3c0414963711faff1caa8af8d754db988384cac2cef3\"" Jul 2 00:26:59.109210 systemd[1]: Started cri-containerd-6433e0b383087548bf0f3c0414963711faff1caa8af8d754db988384cac2cef3.scope - libcontainer container 6433e0b383087548bf0f3c0414963711faff1caa8af8d754db988384cac2cef3. 
Jul 2 00:26:59.139933 systemd[1]: cri-containerd-6433e0b383087548bf0f3c0414963711faff1caa8af8d754db988384cac2cef3.scope: Deactivated successfully. Jul 2 00:26:59.149740 containerd[1446]: time="2024-07-02T00:26:59.149640889Z" level=info msg="StartContainer for \"6433e0b383087548bf0f3c0414963711faff1caa8af8d754db988384cac2cef3\" returns successfully" Jul 2 00:26:59.175547 containerd[1446]: time="2024-07-02T00:26:59.175344044Z" level=info msg="shim disconnected" id=6433e0b383087548bf0f3c0414963711faff1caa8af8d754db988384cac2cef3 namespace=k8s.io Jul 2 00:26:59.175547 containerd[1446]: time="2024-07-02T00:26:59.175393980Z" level=warning msg="cleaning up after shim disconnected" id=6433e0b383087548bf0f3c0414963711faff1caa8af8d754db988384cac2cef3 namespace=k8s.io Jul 2 00:26:59.175547 containerd[1446]: time="2024-07-02T00:26:59.175403638Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:26:59.288433 systemd[1]: run-containerd-runc-k8s.io-6433e0b383087548bf0f3c0414963711faff1caa8af8d754db988384cac2cef3-runc.HhBfGa.mount: Deactivated successfully. Jul 2 00:26:59.288659 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6433e0b383087548bf0f3c0414963711faff1caa8af8d754db988384cac2cef3-rootfs.mount: Deactivated successfully. 
Jul 2 00:26:59.408527 kubelet[2643]: E0702 00:26:59.407872 2643 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-hbfpg" podUID="2a39ff84-d35f-4c60-bb00-80c8f5aaf18b" Jul 2 00:27:00.034113 containerd[1446]: time="2024-07-02T00:27:00.033530646Z" level=info msg="CreateContainer within sandbox \"976f6e16779d3109f9167c665f2d2846aa8feb28ab61380c7ad3e8a2bc1db358\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 00:27:00.077567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount76134972.mount: Deactivated successfully. Jul 2 00:27:00.091291 containerd[1446]: time="2024-07-02T00:27:00.091201108Z" level=info msg="CreateContainer within sandbox \"976f6e16779d3109f9167c665f2d2846aa8feb28ab61380c7ad3e8a2bc1db358\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"84a5f344748e52f8476d54c12fbcd7abbe396df20955471acf0e79fc0e1b6c4e\"" Jul 2 00:27:00.098096 containerd[1446]: time="2024-07-02T00:27:00.095272530Z" level=info msg="StartContainer for \"84a5f344748e52f8476d54c12fbcd7abbe396df20955471acf0e79fc0e1b6c4e\"" Jul 2 00:27:00.145283 systemd[1]: Started cri-containerd-84a5f344748e52f8476d54c12fbcd7abbe396df20955471acf0e79fc0e1b6c4e.scope - libcontainer container 84a5f344748e52f8476d54c12fbcd7abbe396df20955471acf0e79fc0e1b6c4e. 
Jul 2 00:27:00.181752 containerd[1446]: time="2024-07-02T00:27:00.181710493Z" level=info msg="StartContainer for \"84a5f344748e52f8476d54c12fbcd7abbe396df20955471acf0e79fc0e1b6c4e\" returns successfully" Jul 2 00:27:00.407429 kubelet[2643]: E0702 00:27:00.407099 2643 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-qcz6x" podUID="dff67a27-f4dd-4eeb-9976-c78a8785cd57" Jul 2 00:27:00.936129 kernel: cryptd: max_cpu_qlen set to 1000 Jul 2 00:27:01.005696 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic)))) Jul 2 00:27:01.356622 systemd[1]: run-containerd-runc-k8s.io-84a5f344748e52f8476d54c12fbcd7abbe396df20955471acf0e79fc0e1b6c4e-runc.P9UIHd.mount: Deactivated successfully. Jul 2 00:27:04.341797 systemd-networkd[1370]: lxc_health: Link UP Jul 2 00:27:04.351258 systemd-networkd[1370]: lxc_health: Gained carrier Jul 2 00:27:04.422381 kubelet[2643]: I0702 00:27:04.421230 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-874dq" podStartSLOduration=10.421209663 podStartE2EDuration="10.421209663s" podCreationTimestamp="2024-07-02 00:26:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:27:01.063094177 +0000 UTC m=+145.970585477" watchObservedRunningTime="2024-07-02 00:27:04.421209663 +0000 UTC m=+149.328700963" Jul 2 00:27:05.406251 systemd-networkd[1370]: lxc_health: Gained IPv6LL Jul 2 00:27:06.035605 systemd[1]: run-containerd-runc-k8s.io-84a5f344748e52f8476d54c12fbcd7abbe396df20955471acf0e79fc0e1b6c4e-runc.o6gilu.mount: Deactivated successfully. 
Jul 2 00:27:08.260220 systemd[1]: run-containerd-runc-k8s.io-84a5f344748e52f8476d54c12fbcd7abbe396df20955471acf0e79fc0e1b6c4e-runc.LYnLgq.mount: Deactivated successfully. Jul 2 00:27:10.790945 sshd[4486]: pam_unix(sshd:session): session closed for user core Jul 2 00:27:10.799774 systemd[1]: sshd@25-172.24.4.162:22-172.24.4.1:41538.service: Deactivated successfully. Jul 2 00:27:10.806741 systemd[1]: session-28.scope: Deactivated successfully. Jul 2 00:27:10.809174 systemd-logind[1429]: Session 28 logged out. Waiting for processes to exit. Jul 2 00:27:10.812282 systemd-logind[1429]: Removed session 28.