Sep 4 17:39:32.094351 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 4 15:49:08 -00 2024
Sep 4 17:39:32.094380 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf
Sep 4 17:39:32.094393 kernel: BIOS-provided physical RAM map:
Sep 4 17:39:32.094401 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 4 17:39:32.094409 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 4 17:39:32.094416 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 4 17:39:32.094425 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Sep 4 17:39:32.094433 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Sep 4 17:39:32.094441 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 4 17:39:32.094451 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 4 17:39:32.094460 kernel: NX (Execute Disable) protection: active
Sep 4 17:39:32.094467 kernel: APIC: Static calls initialized
Sep 4 17:39:32.094475 kernel: SMBIOS 2.8 present.
Sep 4 17:39:32.094483 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Sep 4 17:39:32.094492 kernel: Hypervisor detected: KVM
Sep 4 17:39:32.094503 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 4 17:39:32.094511 kernel: kvm-clock: using sched offset of 4815630411 cycles
Sep 4 17:39:32.096527 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 4 17:39:32.096552 kernel: tsc: Detected 1996.249 MHz processor
Sep 4 17:39:32.096563 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 4 17:39:32.096572 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 4 17:39:32.096581 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Sep 4 17:39:32.096590 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 4 17:39:32.096598 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 4 17:39:32.096612 kernel: ACPI: Early table checksum verification disabled
Sep 4 17:39:32.096620 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS )
Sep 4 17:39:32.096629 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:39:32.096637 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:39:32.096646 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:39:32.096654 kernel: ACPI: FACS 0x000000007FFE0000 000040
Sep 4 17:39:32.096662 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:39:32.096671 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:39:32.096679 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f]
Sep 4 17:39:32.096690 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b]
Sep 4 17:39:32.096698 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Sep 4 17:39:32.096707 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17a0-0x7ffe181f]
Sep 4 17:39:32.096715 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847]
Sep 4 17:39:32.096723 kernel: No NUMA configuration found
Sep 4 17:39:32.096731 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff]
Sep 4 17:39:32.096740 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff]
Sep 4 17:39:32.096751 kernel: Zone ranges:
Sep 4 17:39:32.096762 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 4 17:39:32.096771 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff]
Sep 4 17:39:32.096780 kernel: Normal empty
Sep 4 17:39:32.096788 kernel: Movable zone start for each node
Sep 4 17:39:32.096797 kernel: Early memory node ranges
Sep 4 17:39:32.096806 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 4 17:39:32.096817 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Sep 4 17:39:32.096826 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff]
Sep 4 17:39:32.096834 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 4 17:39:32.096843 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 4 17:39:32.096852 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges
Sep 4 17:39:32.096861 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 4 17:39:32.096869 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 4 17:39:32.096878 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 4 17:39:32.096887 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 4 17:39:32.096898 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 4 17:39:32.096907 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 4 17:39:32.096916 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 4 17:39:32.096925 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 4 17:39:32.096934 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 4 17:39:32.096943 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 4 17:39:32.096952 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 4 17:39:32.096961 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Sep 4 17:39:32.096969 kernel: Booting paravirtualized kernel on KVM
Sep 4 17:39:32.096978 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 4 17:39:32.096990 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 4 17:39:32.096999 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576
Sep 4 17:39:32.097008 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152
Sep 4 17:39:32.097016 kernel: pcpu-alloc: [0] 0 1
Sep 4 17:39:32.097025 kernel: kvm-guest: PV spinlocks disabled, no host support
Sep 4 17:39:32.097035 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf
Sep 4 17:39:32.097045 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 17:39:32.097055 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 4 17:39:32.097064 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 4 17:39:32.097073 kernel: Fallback order for Node 0: 0
Sep 4 17:39:32.097082 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515805
Sep 4 17:39:32.097090 kernel: Policy zone: DMA32
Sep 4 17:39:32.097099 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 17:39:32.097109 kernel: Memory: 1965068K/2096620K available (12288K kernel code, 2303K rwdata, 22640K rodata, 49336K init, 2008K bss, 131292K reserved, 0K cma-reserved)
Sep 4 17:39:32.097118 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 4 17:39:32.097126 kernel: ftrace: allocating 37670 entries in 148 pages
Sep 4 17:39:32.097137 kernel: ftrace: allocated 148 pages with 3 groups
Sep 4 17:39:32.097146 kernel: Dynamic Preempt: voluntary
Sep 4 17:39:32.097155 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 17:39:32.097165 kernel: rcu: RCU event tracing is enabled.
Sep 4 17:39:32.097174 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 4 17:39:32.097184 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 17:39:32.097193 kernel: Rude variant of Tasks RCU enabled.
Sep 4 17:39:32.097202 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 17:39:32.097211 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 17:39:32.097220 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 4 17:39:32.097231 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 4 17:39:32.097240 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 4 17:39:32.097249 kernel: Console: colour VGA+ 80x25
Sep 4 17:39:32.097258 kernel: printk: console [tty0] enabled
Sep 4 17:39:32.097266 kernel: printk: console [ttyS0] enabled
Sep 4 17:39:32.097275 kernel: ACPI: Core revision 20230628
Sep 4 17:39:32.097284 kernel: APIC: Switch to symmetric I/O mode setup
Sep 4 17:39:32.097293 kernel: x2apic enabled
Sep 4 17:39:32.097302 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 4 17:39:32.097313 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 4 17:39:32.097322 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 4 17:39:32.097332 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Sep 4 17:39:32.097340 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Sep 4 17:39:32.097349 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Sep 4 17:39:32.097358 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 4 17:39:32.097367 kernel: Spectre V2 : Mitigation: Retpolines
Sep 4 17:39:32.097376 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Sep 4 17:39:32.097385 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Sep 4 17:39:32.097395 kernel: Speculative Store Bypass: Vulnerable
Sep 4 17:39:32.097404 kernel: x86/fpu: x87 FPU will use FXSAVE
Sep 4 17:39:32.097413 kernel: Freeing SMP alternatives memory: 32K
Sep 4 17:39:32.097422 kernel: pid_max: default: 32768 minimum: 301
Sep 4 17:39:32.097431 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Sep 4 17:39:32.097440 kernel: SELinux: Initializing.
Sep 4 17:39:32.097449 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 4 17:39:32.097458 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 4 17:39:32.097476 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Sep 4 17:39:32.097485 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:39:32.097495 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:39:32.097506 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:39:32.097538 kernel: Performance Events: AMD PMU driver.
Sep 4 17:39:32.097548 kernel: ... version: 0
Sep 4 17:39:32.097557 kernel: ... bit width: 48
Sep 4 17:39:32.097567 kernel: ... generic registers: 4
Sep 4 17:39:32.097579 kernel: ... value mask: 0000ffffffffffff
Sep 4 17:39:32.097588 kernel: ... max period: 00007fffffffffff
Sep 4 17:39:32.097598 kernel: ... fixed-purpose events: 0
Sep 4 17:39:32.097607 kernel: ... event mask: 000000000000000f
Sep 4 17:39:32.097617 kernel: signal: max sigframe size: 1440
Sep 4 17:39:32.097626 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 17:39:32.097636 kernel: rcu: Max phase no-delay instances is 400.
Sep 4 17:39:32.097646 kernel: smp: Bringing up secondary CPUs ...
Sep 4 17:39:32.097655 kernel: smpboot: x86: Booting SMP configuration:
Sep 4 17:39:32.097664 kernel: .... node #0, CPUs: #1
Sep 4 17:39:32.097676 kernel: smp: Brought up 1 node, 2 CPUs
Sep 4 17:39:32.097685 kernel: smpboot: Max logical packages: 2
Sep 4 17:39:32.097695 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Sep 4 17:39:32.097704 kernel: devtmpfs: initialized
Sep 4 17:39:32.097713 kernel: x86/mm: Memory block size: 128MB
Sep 4 17:39:32.097723 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 17:39:32.097733 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 4 17:39:32.097742 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 17:39:32.097751 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 17:39:32.097762 kernel: audit: initializing netlink subsys (disabled)
Sep 4 17:39:32.097772 kernel: audit: type=2000 audit(1725471570.900:1): state=initialized audit_enabled=0 res=1
Sep 4 17:39:32.097781 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 17:39:32.097791 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 4 17:39:32.097800 kernel: cpuidle: using governor menu
Sep 4 17:39:32.097810 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 17:39:32.097819 kernel: dca service started, version 1.12.1
Sep 4 17:39:32.097844 kernel: PCI: Using configuration type 1 for base access
Sep 4 17:39:32.097854 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 4 17:39:32.097866 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 17:39:32.097875 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 17:39:32.097884 kernel: ACPI: Added _OSI(Module Device)
Sep 4 17:39:32.097894 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 17:39:32.097904 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Sep 4 17:39:32.097913 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 17:39:32.097922 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 4 17:39:32.097932 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 4 17:39:32.097941 kernel: ACPI: Interpreter enabled
Sep 4 17:39:32.097952 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 4 17:39:32.097962 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 4 17:39:32.097971 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 4 17:39:32.097981 kernel: PCI: Using E820 reservations for host bridge windows
Sep 4 17:39:32.097990 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep 4 17:39:32.097999 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 4 17:39:32.098203 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 4 17:39:32.098313 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Sep 4 17:39:32.098419 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Sep 4 17:39:32.098445 kernel: acpiphp: Slot [3] registered
Sep 4 17:39:32.098455 kernel: acpiphp: Slot [4] registered
Sep 4 17:39:32.098465 kernel: acpiphp: Slot [5] registered
Sep 4 17:39:32.098474 kernel: acpiphp: Slot [6] registered
Sep 4 17:39:32.098484 kernel: acpiphp: Slot [7] registered
Sep 4 17:39:32.098493 kernel: acpiphp: Slot [8] registered
Sep 4 17:39:32.098503 kernel: acpiphp: Slot [9] registered
Sep 4 17:39:32.098545 kernel: acpiphp: Slot [10] registered
Sep 4 17:39:32.099658 kernel: acpiphp: Slot [11] registered
Sep 4 17:39:32.099673 kernel: acpiphp: Slot [12] registered
Sep 4 17:39:32.099686 kernel: acpiphp: Slot [13] registered
Sep 4 17:39:32.099699 kernel: acpiphp: Slot [14] registered
Sep 4 17:39:32.099708 kernel: acpiphp: Slot [15] registered
Sep 4 17:39:32.099717 kernel: acpiphp: Slot [16] registered
Sep 4 17:39:32.099725 kernel: acpiphp: Slot [17] registered
Sep 4 17:39:32.099734 kernel: acpiphp: Slot [18] registered
Sep 4 17:39:32.099743 kernel: acpiphp: Slot [19] registered
Sep 4 17:39:32.099758 kernel: acpiphp: Slot [20] registered
Sep 4 17:39:32.099766 kernel: acpiphp: Slot [21] registered
Sep 4 17:39:32.099775 kernel: acpiphp: Slot [22] registered
Sep 4 17:39:32.099784 kernel: acpiphp: Slot [23] registered
Sep 4 17:39:32.099793 kernel: acpiphp: Slot [24] registered
Sep 4 17:39:32.099803 kernel: acpiphp: Slot [25] registered
Sep 4 17:39:32.099812 kernel: acpiphp: Slot [26] registered
Sep 4 17:39:32.099822 kernel: acpiphp: Slot [27] registered
Sep 4 17:39:32.099831 kernel: acpiphp: Slot [28] registered
Sep 4 17:39:32.099842 kernel: acpiphp: Slot [29] registered
Sep 4 17:39:32.099851 kernel: acpiphp: Slot [30] registered
Sep 4 17:39:32.099861 kernel: acpiphp: Slot [31] registered
Sep 4 17:39:32.099870 kernel: PCI host bridge to bus 0000:00
Sep 4 17:39:32.100088 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 4 17:39:32.100188 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 4 17:39:32.100280 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 4 17:39:32.100389 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Sep 4 17:39:32.101580 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Sep 4 17:39:32.101692 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 4 17:39:32.101818 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Sep 4 17:39:32.101957 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Sep 4 17:39:32.102616 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Sep 4 17:39:32.102723 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Sep 4 17:39:32.102838 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Sep 4 17:39:32.102935 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Sep 4 17:39:32.103033 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Sep 4 17:39:32.103130 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Sep 4 17:39:32.103242 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Sep 4 17:39:32.103339 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Sep 4 17:39:32.103433 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Sep 4 17:39:32.104603 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Sep 4 17:39:32.104718 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Sep 4 17:39:32.104815 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Sep 4 17:39:32.104911 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Sep 4 17:39:32.105006 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Sep 4 17:39:32.105104 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 4 17:39:32.105223 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Sep 4 17:39:32.105321 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Sep 4 17:39:32.105419 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Sep 4 17:39:32.106571 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Sep 4 17:39:32.106697 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Sep 4 17:39:32.106803 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Sep 4 17:39:32.106899 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Sep 4 17:39:32.107007 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Sep 4 17:39:32.107104 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Sep 4 17:39:32.107210 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Sep 4 17:39:32.107309 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Sep 4 17:39:32.107408 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Sep 4 17:39:32.108564 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Sep 4 17:39:32.108689 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Sep 4 17:39:32.108796 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Sep 4 17:39:32.108810 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 4 17:39:32.108820 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 4 17:39:32.108830 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 4 17:39:32.108840 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 4 17:39:32.108850 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 4 17:39:32.108859 kernel: iommu: Default domain type: Translated
Sep 4 17:39:32.108869 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 4 17:39:32.108879 kernel: PCI: Using ACPI for IRQ routing
Sep 4 17:39:32.108892 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 4 17:39:32.108902 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 4 17:39:32.108912 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Sep 4 17:39:32.109007 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Sep 4 17:39:32.109105 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Sep 4 17:39:32.109226 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 4 17:39:32.109241 kernel: vgaarb: loaded
Sep 4 17:39:32.109251 kernel: clocksource: Switched to clocksource kvm-clock
Sep 4 17:39:32.109260 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 17:39:32.109275 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 17:39:32.109284 kernel: pnp: PnP ACPI init
Sep 4 17:39:32.109391 kernel: pnp 00:03: [dma 2]
Sep 4 17:39:32.109408 kernel: pnp: PnP ACPI: found 5 devices
Sep 4 17:39:32.109418 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 4 17:39:32.109428 kernel: NET: Registered PF_INET protocol family
Sep 4 17:39:32.109438 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 4 17:39:32.109447 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 4 17:39:32.109461 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 17:39:32.109471 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 4 17:39:32.109481 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 4 17:39:32.109491 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 4 17:39:32.109500 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 4 17:39:32.109510 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 4 17:39:32.109538 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 17:39:32.109548 kernel: NET: Registered PF_XDP protocol family
Sep 4 17:39:32.109643 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 4 17:39:32.109763 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 4 17:39:32.109894 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 4 17:39:32.109988 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Sep 4 17:39:32.110074 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Sep 4 17:39:32.110177 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Sep 4 17:39:32.110279 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 4 17:39:32.110294 kernel: PCI: CLS 0 bytes, default 64
Sep 4 17:39:32.110309 kernel: Initialise system trusted keyrings
Sep 4 17:39:32.110319 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 4 17:39:32.110334 kernel: Key type asymmetric registered
Sep 4 17:39:32.110348 kernel: Asymmetric key parser 'x509' registered
Sep 4 17:39:32.110364 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 4 17:39:32.110378 kernel: io scheduler mq-deadline registered
Sep 4 17:39:32.110393 kernel: io scheduler kyber registered
Sep 4 17:39:32.110403 kernel: io scheduler bfq registered
Sep 4 17:39:32.110412 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 4 17:39:32.110435 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Sep 4 17:39:32.110446 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Sep 4 17:39:32.110456 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Sep 4 17:39:32.110465 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Sep 4 17:39:32.110475 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 4 17:39:32.110485 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 4 17:39:32.110495 kernel: random: crng init done
Sep 4 17:39:32.110505 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 4 17:39:32.112580 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 4 17:39:32.112598 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 4 17:39:32.112802 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 4 17:39:32.112895 kernel: rtc_cmos 00:04: registered as rtc0
Sep 4 17:39:32.112983 kernel: rtc_cmos 00:04: setting system clock to 2024-09-04T17:39:31 UTC (1725471571)
Sep 4 17:39:32.113069 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Sep 4 17:39:32.113083 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 4 17:39:32.113094 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Sep 4 17:39:32.113104 kernel: NET: Registered PF_INET6 protocol family
Sep 4 17:39:32.113118 kernel: Segment Routing with IPv6
Sep 4 17:39:32.113128 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 17:39:32.113137 kernel: NET: Registered PF_PACKET protocol family
Sep 4 17:39:32.113147 kernel: Key type dns_resolver registered
Sep 4 17:39:32.113156 kernel: IPI shorthand broadcast: enabled
Sep 4 17:39:32.113166 kernel: sched_clock: Marking stable (1052010027, 130213273)->(1185374560, -3151260)
Sep 4 17:39:32.113176 kernel: registered taskstats version 1
Sep 4 17:39:32.113185 kernel: Loading compiled-in X.509 certificates
Sep 4 17:39:32.113195 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: a53bb4e7e3319f75620f709d8a6c7aef0adb3b02'
Sep 4 17:39:32.113207 kernel: Key type .fscrypt registered
Sep 4 17:39:32.113217 kernel: Key type fscrypt-provisioning registered
Sep 4 17:39:32.113227 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 4 17:39:32.113237 kernel: ima: Allocated hash algorithm: sha1
Sep 4 17:39:32.113247 kernel: ima: No architecture policies found
Sep 4 17:39:32.113257 kernel: clk: Disabling unused clocks
Sep 4 17:39:32.113267 kernel: Freeing unused kernel image (initmem) memory: 49336K
Sep 4 17:39:32.113278 kernel: Write protecting the kernel read-only data: 36864k
Sep 4 17:39:32.113288 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K
Sep 4 17:39:32.113301 kernel: Run /init as init process
Sep 4 17:39:32.113311 kernel: with arguments:
Sep 4 17:39:32.113321 kernel: /init
Sep 4 17:39:32.113330 kernel: with environment:
Sep 4 17:39:32.113340 kernel: HOME=/
Sep 4 17:39:32.113350 kernel: TERM=linux
Sep 4 17:39:32.113361 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 4 17:39:32.113375 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 4 17:39:32.113391 systemd[1]: Detected virtualization kvm.
Sep 4 17:39:32.113402 systemd[1]: Detected architecture x86-64.
Sep 4 17:39:32.113413 systemd[1]: Running in initrd.
Sep 4 17:39:32.113423 systemd[1]: No hostname configured, using default hostname.
Sep 4 17:39:32.113433 systemd[1]: Hostname set to .
Sep 4 17:39:32.113444 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 17:39:32.113454 systemd[1]: Queued start job for default target initrd.target.
Sep 4 17:39:32.113465 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:39:32.113477 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:39:32.113489 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 4 17:39:32.113499 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 17:39:32.113510 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 4 17:39:32.114593 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 4 17:39:32.114606 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 4 17:39:32.114621 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 4 17:39:32.114631 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:39:32.114642 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:39:32.114652 systemd[1]: Reached target paths.target - Path Units.
Sep 4 17:39:32.114662 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 17:39:32.114683 systemd[1]: Reached target swap.target - Swaps.
Sep 4 17:39:32.114696 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 17:39:32.114709 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 17:39:32.114721 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 17:39:32.114732 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 4 17:39:32.114742 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 4 17:39:32.114753 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:39:32.114764 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:39:32.114774 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:39:32.114785 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 17:39:32.114798 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 4 17:39:32.114809 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 17:39:32.114819 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 4 17:39:32.114830 systemd[1]: Starting systemd-fsck-usr.service...
Sep 4 17:39:32.114840 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 17:39:32.114851 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 17:39:32.114862 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:39:32.114872 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 4 17:39:32.114883 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:39:32.114896 systemd[1]: Finished systemd-fsck-usr.service.
Sep 4 17:39:32.114907 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 17:39:32.114955 systemd-journald[183]: Collecting audit messages is disabled.
Sep 4 17:39:32.114986 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 17:39:32.114998 systemd-journald[183]: Journal started
Sep 4 17:39:32.115024 systemd-journald[183]: Runtime Journal (/run/log/journal/522243a3b84946979b3105e2c36ba3ff) is 4.9M, max 39.3M, 34.4M free.
Sep 4 17:39:32.088253 systemd-modules-load[184]: Inserted module 'overlay'
Sep 4 17:39:32.151737 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 4 17:39:32.151778 kernel: Bridge firewalling registered
Sep 4 17:39:32.151792 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 17:39:32.132668 systemd-modules-load[184]: Inserted module 'br_netfilter'
Sep 4 17:39:32.152563 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:39:32.153379 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:39:32.160669 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:39:32.162654 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 17:39:32.165687 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 17:39:32.174715 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Sep 4 17:39:32.186625 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:39:32.193337 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:39:32.195576 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep 4 17:39:32.204677 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 17:39:32.205368 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:39:32.207650 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 4 17:39:32.228227 dracut-cmdline[218]: dracut-dracut-053
Sep 4 17:39:32.233297 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf
Sep 4 17:39:32.244309 systemd-resolved[217]: Positive Trust Anchors:
Sep 4 17:39:32.244328 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 17:39:32.244372 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Sep 4 17:39:32.252457 systemd-resolved[217]: Defaulting to hostname 'linux'.
Sep 4 17:39:32.256930 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 17:39:32.257969 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:39:32.322575 kernel: SCSI subsystem initialized
Sep 4 17:39:32.335627 kernel: Loading iSCSI transport class v2.0-870.
Sep 4 17:39:32.353587 kernel: iscsi: registered transport (tcp)
Sep 4 17:39:32.382716 kernel: iscsi: registered transport (qla4xxx)
Sep 4 17:39:32.382893 kernel: QLogic iSCSI HBA Driver
Sep 4 17:39:32.442274 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 4 17:39:32.449793 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 4 17:39:32.494051 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 4 17:39:32.494137 kernel: device-mapper: uevent: version 1.0.3
Sep 4 17:39:32.494152 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 4 17:39:32.566706 kernel: raid6: sse2x4 gen() 8581 MB/s
Sep 4 17:39:32.583670 kernel: raid6: sse2x2 gen() 13892 MB/s
Sep 4 17:39:32.600740 kernel: raid6: sse2x1 gen() 9603 MB/s
Sep 4 17:39:32.600800 kernel: raid6: using algorithm sse2x2 gen() 13892 MB/s
Sep 4 17:39:32.618898 kernel: raid6: .... xor() 8994 MB/s, rmw enabled
Sep 4 17:39:32.618992 kernel: raid6: using ssse3x2 recovery algorithm
Sep 4 17:39:32.648860 kernel: xor: measuring software checksum speed
Sep 4 17:39:32.648937 kernel: prefetch64-sse : 17391 MB/sec
Sep 4 17:39:32.652046 kernel: generic_sse : 15806 MB/sec
Sep 4 17:39:32.652115 kernel: xor: using function: prefetch64-sse (17391 MB/sec)
Sep 4 17:39:32.879813 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 4 17:39:32.919781 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 17:39:32.926694 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:39:32.977762 systemd-udevd[402]: Using default interface naming scheme 'v255'.
Sep 4 17:39:32.988925 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:39:33.000798 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 4 17:39:33.029019 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation
Sep 4 17:39:33.075984 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 17:39:33.096831 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 17:39:33.164314 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:39:33.178816 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 4 17:39:33.230827 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 4 17:39:33.232700 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 17:39:33.235670 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:39:33.236184 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 17:39:33.243954 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 4 17:39:33.280600 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
Sep 4 17:39:33.286052 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB)
Sep 4 17:39:33.284661 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 17:39:33.300553 kernel: libata version 3.00 loaded.
Sep 4 17:39:33.300604 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 4 17:39:33.302782 kernel: GPT:17805311 != 41943039
Sep 4 17:39:33.302805 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 4 17:39:33.303916 kernel: GPT:17805311 != 41943039
Sep 4 17:39:33.304677 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 4 17:39:33.306663 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 17:39:33.306685 kernel: ata_piix 0000:00:01.1: version 2.13
Sep 4 17:39:33.311556 kernel: scsi host0: ata_piix
Sep 4 17:39:33.314815 kernel: scsi host1: ata_piix
Sep 4 17:39:33.315013 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
Sep 4 17:39:33.315028 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
Sep 4 17:39:33.313480 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 17:39:33.313689 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:39:33.317277 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:39:33.318044 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:39:33.318185 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:39:33.319393 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:39:33.326836 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:39:33.357571 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (451)
Sep 4 17:39:33.361285 kernel: BTRFS: device fsid d110be6f-93a3-451a-b365-11b5d04e0602 devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (455)
Sep 4 17:39:33.379629 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 4 17:39:33.396150 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 4 17:39:33.397013 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:39:33.403864 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 4 17:39:33.408574 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 4 17:39:33.409178 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 4 17:39:33.416745 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 4 17:39:33.419728 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:39:33.435694 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:39:33.438648 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 17:39:33.438710 disk-uuid[499]: Primary Header is updated.
Sep 4 17:39:33.438710 disk-uuid[499]: Secondary Entries is updated.
Sep 4 17:39:33.438710 disk-uuid[499]: Secondary Header is updated.
Sep 4 17:39:34.458640 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 17:39:34.461610 disk-uuid[509]: The operation has completed successfully.
Sep 4 17:39:34.531593 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 4 17:39:34.531932 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 4 17:39:34.558648 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 4 17:39:34.579226 sh[522]: Success
Sep 4 17:39:34.604567 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
Sep 4 17:39:34.702429 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 4 17:39:34.725953 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 4 17:39:34.732759 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 4 17:39:34.763544 kernel: BTRFS info (device dm-0): first mount of filesystem d110be6f-93a3-451a-b365-11b5d04e0602
Sep 4 17:39:34.763634 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:39:34.767140 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 4 17:39:34.779456 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 4 17:39:34.782239 kernel: BTRFS info (device dm-0): using free space tree
Sep 4 17:39:34.797092 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 4 17:39:34.798235 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 4 17:39:34.807685 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 4 17:39:34.814717 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 4 17:39:34.827474 kernel: BTRFS info (device vda6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep 4 17:39:34.827600 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:39:34.827634 kernel: BTRFS info (device vda6): using free space tree
Sep 4 17:39:34.836607 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 4 17:39:34.859214 kernel: BTRFS info (device vda6): last unmount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep 4 17:39:34.858445 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 4 17:39:34.872925 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 4 17:39:34.882984 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 4 17:39:34.931119 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 17:39:34.939794 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 17:39:34.965074 systemd-networkd[705]: lo: Link UP
Sep 4 17:39:34.965085 systemd-networkd[705]: lo: Gained carrier
Sep 4 17:39:34.966706 systemd-networkd[705]: Enumeration completed
Sep 4 17:39:34.966832 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 17:39:34.967283 systemd-networkd[705]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:39:34.967287 systemd-networkd[705]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 17:39:34.967457 systemd[1]: Reached target network.target - Network.
Sep 4 17:39:34.968457 systemd-networkd[705]: eth0: Link UP
Sep 4 17:39:34.968461 systemd-networkd[705]: eth0: Gained carrier
Sep 4 17:39:34.968468 systemd-networkd[705]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:39:34.982569 systemd-networkd[705]: eth0: DHCPv4 address 172.24.4.44/24, gateway 172.24.4.1 acquired from 172.24.4.1
Sep 4 17:39:35.045017 ignition[640]: Ignition 2.18.0
Sep 4 17:39:35.045037 ignition[640]: Stage: fetch-offline
Sep 4 17:39:35.045099 ignition[640]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:39:35.047069 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 17:39:35.045116 ignition[640]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 4 17:39:35.045322 ignition[640]: parsed url from cmdline: ""
Sep 4 17:39:35.045328 ignition[640]: no config URL provided
Sep 4 17:39:35.045337 ignition[640]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 17:39:35.045351 ignition[640]: no config at "/usr/lib/ignition/user.ign"
Sep 4 17:39:35.045359 ignition[640]: failed to fetch config: resource requires networking
Sep 4 17:39:35.045661 ignition[640]: Ignition finished successfully
Sep 4 17:39:35.055797 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 4 17:39:35.073778 ignition[714]: Ignition 2.18.0
Sep 4 17:39:35.073793 ignition[714]: Stage: fetch
Sep 4 17:39:35.074026 ignition[714]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:39:35.074039 ignition[714]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 4 17:39:35.074145 ignition[714]: parsed url from cmdline: ""
Sep 4 17:39:35.074149 ignition[714]: no config URL provided
Sep 4 17:39:35.074154 ignition[714]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 17:39:35.074163 ignition[714]: no config at "/usr/lib/ignition/user.ign"
Sep 4 17:39:35.074284 ignition[714]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Sep 4 17:39:35.074442 ignition[714]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Sep 4 17:39:35.074469 ignition[714]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Sep 4 17:39:35.242154 ignition[714]: GET result: OK
Sep 4 17:39:35.242857 ignition[714]: parsing config with SHA512: be550cc7998bba38c0d17360c52fbfed9a11387b381a4719ff75a4234bb36b2314da6cb530ae99c4452116a9371938457e8febe1f29eb61841aaf5baeed18e18
Sep 4 17:39:35.249092 unknown[714]: fetched base config from "system"
Sep 4 17:39:35.249113 unknown[714]: fetched base config from "system"
Sep 4 17:39:35.250226 ignition[714]: fetch: fetch complete
Sep 4 17:39:35.249124 unknown[714]: fetched user config from "openstack"
Sep 4 17:39:35.250236 ignition[714]: fetch: fetch passed
Sep 4 17:39:35.253937 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 4 17:39:35.250308 ignition[714]: Ignition finished successfully
Sep 4 17:39:35.261753 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 4 17:39:35.303188 ignition[721]: Ignition 2.18.0
Sep 4 17:39:35.303218 ignition[721]: Stage: kargs
Sep 4 17:39:35.303686 ignition[721]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:39:35.303713 ignition[721]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 4 17:39:35.306076 ignition[721]: kargs: kargs passed
Sep 4 17:39:35.309159 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 4 17:39:35.306190 ignition[721]: Ignition finished successfully
Sep 4 17:39:35.320977 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 4 17:39:35.347299 ignition[728]: Ignition 2.18.0
Sep 4 17:39:35.348879 ignition[728]: Stage: disks
Sep 4 17:39:35.349300 ignition[728]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:39:35.349323 ignition[728]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 4 17:39:35.354757 ignition[728]: disks: disks passed
Sep 4 17:39:35.355881 ignition[728]: Ignition finished successfully
Sep 4 17:39:35.357668 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 4 17:39:35.360245 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 4 17:39:35.362034 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 4 17:39:35.364822 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 17:39:35.367154 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 17:39:35.369665 systemd[1]: Reached target basic.target - Basic System.
Sep 4 17:39:35.382003 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 4 17:39:35.419115 systemd-fsck[737]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Sep 4 17:39:35.427945 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 4 17:39:35.434758 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 4 17:39:35.620906 kernel: EXT4-fs (vda9): mounted filesystem 84a5cefa-c3c7-47d7-9305-7e6877f73628 r/w with ordered data mode. Quota mode: none.
Sep 4 17:39:35.621497 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 4 17:39:35.622758 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 4 17:39:35.633650 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 17:39:35.637630 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 4 17:39:35.639210 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 4 17:39:35.646761 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Sep 4 17:39:35.649277 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 4 17:39:35.649313 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 17:39:35.654392 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 4 17:39:35.667024 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 4 17:39:35.695412 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (745)
Sep 4 17:39:35.703448 kernel: BTRFS info (device vda6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep 4 17:39:35.703504 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:39:35.703530 kernel: BTRFS info (device vda6): using free space tree
Sep 4 17:39:35.716552 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 4 17:39:35.731361 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 17:39:35.779160 initrd-setup-root[773]: cut: /sysroot/etc/passwd: No such file or directory
Sep 4 17:39:35.792131 initrd-setup-root[780]: cut: /sysroot/etc/group: No such file or directory
Sep 4 17:39:35.797587 initrd-setup-root[787]: cut: /sysroot/etc/shadow: No such file or directory
Sep 4 17:39:35.803587 initrd-setup-root[794]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 4 17:39:35.904142 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 4 17:39:35.920625 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 4 17:39:35.923657 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 4 17:39:35.936788 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 4 17:39:35.939563 kernel: BTRFS info (device vda6): last unmount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep 4 17:39:35.966159 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 4 17:39:35.968924 ignition[862]: INFO : Ignition 2.18.0
Sep 4 17:39:35.968924 ignition[862]: INFO : Stage: mount
Sep 4 17:39:35.970125 ignition[862]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:39:35.970125 ignition[862]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 4 17:39:35.970125 ignition[862]: INFO : mount: mount passed
Sep 4 17:39:35.972634 ignition[862]: INFO : Ignition finished successfully
Sep 4 17:39:35.971213 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 4 17:39:36.435958 systemd-networkd[705]: eth0: Gained IPv6LL
Sep 4 17:39:42.876300 coreos-metadata[747]: Sep 04 17:39:42.876 WARN failed to locate config-drive, using the metadata service API instead
Sep 4 17:39:42.913485 coreos-metadata[747]: Sep 04 17:39:42.913 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Sep 4 17:39:42.927036 coreos-metadata[747]: Sep 04 17:39:42.926 INFO Fetch successful
Sep 4 17:39:42.927036 coreos-metadata[747]: Sep 04 17:39:42.926 INFO wrote hostname ci-3975-2-1-d-945344e89d.novalocal to /sysroot/etc/hostname
Sep 4 17:39:42.930382 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Sep 4 17:39:42.930621 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Sep 4 17:39:42.948780 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 4 17:39:42.969900 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 17:39:42.985577 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (879)
Sep 4 17:39:42.991572 kernel: BTRFS info (device vda6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep 4 17:39:42.991662 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:39:42.995972 kernel: BTRFS info (device vda6): using free space tree
Sep 4 17:39:43.004605 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 4 17:39:43.010682 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 17:39:43.058308 ignition[896]: INFO : Ignition 2.18.0
Sep 4 17:39:43.062209 ignition[896]: INFO : Stage: files
Sep 4 17:39:43.062209 ignition[896]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:39:43.062209 ignition[896]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 4 17:39:43.068914 ignition[896]: DEBUG : files: compiled without relabeling support, skipping
Sep 4 17:39:43.071140 ignition[896]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 4 17:39:43.071140 ignition[896]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 4 17:39:43.077328 ignition[896]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 4 17:39:43.079744 ignition[896]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 4 17:39:43.082324 ignition[896]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 4 17:39:43.079931 unknown[896]: wrote ssh authorized keys file for user: core
Sep 4 17:39:43.091634 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 4 17:39:43.091634 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 4 17:39:43.796846 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 4 17:39:44.139031 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 4 17:39:44.139031 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 4 17:39:44.144664 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 4 17:39:44.683883 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 4 17:39:45.131242 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 4 17:39:45.131242 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 4 17:39:45.131242 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 4 17:39:45.131242 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 17:39:45.131242 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 17:39:45.131242 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 17:39:45.145141 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 17:39:45.145141 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 17:39:45.145141 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 17:39:45.145141 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 17:39:45.145141 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 17:39:45.145141 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Sep 4 17:39:45.145141 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Sep 4 17:39:45.145141 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Sep 4 17:39:45.145141 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Sep 4 17:39:45.616407 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 4 17:39:47.297086 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Sep 4 17:39:47.297086 ignition[896]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 4 17:39:47.301934 ignition[896]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 17:39:47.301934 ignition[896]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 17:39:47.301934 ignition[896]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 4 17:39:47.301934 ignition[896]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Sep 4 17:39:47.301934 ignition[896]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Sep 4 17:39:47.301934 ignition[896]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 17:39:47.301934 ignition[896]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 17:39:47.301934 ignition[896]: INFO : files: files passed
Sep 4 17:39:47.301934 ignition[896]: INFO : Ignition finished successfully
Sep 4 17:39:47.302155 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 4 17:39:47.314792 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 4 17:39:47.317340 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 4 17:39:47.332178 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 4 17:39:47.338009 initrd-setup-root-after-ignition[926]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:39:47.338009 initrd-setup-root-after-ignition[926]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:39:47.332300 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 4 17:39:47.344503 initrd-setup-root-after-ignition[930]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:39:47.339084 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 17:39:47.340926 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 4 17:39:47.350947 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 4 17:39:47.385058 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 4 17:39:47.385203 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 4 17:39:47.387348 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 4 17:39:47.389367 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 4 17:39:47.391413 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 4 17:39:47.396778 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 4 17:39:47.431360 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 17:39:47.438794 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 4 17:39:47.454080 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:39:47.455794 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:39:47.457111 systemd[1]: Stopped target timers.target - Timer Units.
Sep 4 17:39:47.458816 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 4 17:39:47.459010 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 17:39:47.460824 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 4 17:39:47.461706 systemd[1]: Stopped target basic.target - Basic System.
Sep 4 17:39:47.463386 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 4 17:39:47.464806 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 17:39:47.466254 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 4 17:39:47.467958 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 4 17:39:47.469652 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 17:39:47.471386 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 4 17:39:47.473053 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 4 17:39:47.474796 systemd[1]: Stopped target swap.target - Swaps.
Sep 4 17:39:47.476289 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 4 17:39:47.476430 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 17:39:47.478264 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:39:47.479197 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:39:47.480676 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 4 17:39:47.481314 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:39:47.482505 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 4 17:39:47.482718 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 4 17:39:47.485033 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 4 17:39:47.485171 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 17:39:47.486009 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 4 17:39:47.486124 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 4 17:39:47.498207 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 4 17:39:47.501830 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 4 17:39:47.502399 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 4 17:39:47.502620 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:39:47.506558 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 4 17:39:47.507231 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 17:39:47.517421 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 4 17:39:47.517580 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 4 17:39:47.522063 ignition[952]: INFO : Ignition 2.18.0
Sep 4 17:39:47.522063 ignition[952]: INFO : Stage: umount
Sep 4 17:39:47.524596 ignition[952]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:39:47.524596 ignition[952]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 4 17:39:47.527610 ignition[952]: INFO : umount: umount passed
Sep 4 17:39:47.527610 ignition[952]: INFO : Ignition finished successfully
Sep 4 17:39:47.528066 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 4 17:39:47.529578 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 4 17:39:47.531803 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 4 17:39:47.531864 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 4 17:39:47.533611 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 4 17:39:47.533658 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 4 17:39:47.534189 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 4 17:39:47.534233 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 4 17:39:47.535697 systemd[1]: Stopped target network.target - Network.
Sep 4 17:39:47.536415 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 4 17:39:47.536464 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 17:39:47.539615 systemd[1]: Stopped target paths.target - Path Units.
Sep 4 17:39:47.540276 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 4 17:39:47.545572 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:39:47.546165 systemd[1]: Stopped target slices.target - Slice Units.
Sep 4 17:39:47.546648 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 4 17:39:47.548042 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 4 17:39:47.548107 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 17:39:47.548996 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 4 17:39:47.549029 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 17:39:47.549998 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 4 17:39:47.550048 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 4 17:39:47.551015 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 4 17:39:47.551058 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 4 17:39:47.552150 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 4 17:39:47.553373 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 4 17:39:47.555563 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 4 17:39:47.556165 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 4 17:39:47.556250 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 4 17:39:47.556559 systemd-networkd[705]: eth0: DHCPv6 lease lost
Sep 4 17:39:47.558536 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 4 17:39:47.558629 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 4 17:39:47.560341 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 4 17:39:47.560397 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:39:47.562826 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 4 17:39:47.562887 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 4 17:39:47.570952 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 4 17:39:47.571486 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 4 17:39:47.571578 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 17:39:47.572769 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:39:47.574889 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 4 17:39:47.574993 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 4 17:39:47.582898 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 4 17:39:47.583071 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:39:47.596964 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 4 17:39:47.597048 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:39:47.598675 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 4 17:39:47.598716 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:39:47.599793 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 4 17:39:47.599844 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 17:39:47.601471 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 4 17:39:47.601569 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 4 17:39:47.602665 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 17:39:47.602709 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:39:47.609754 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 4 17:39:47.612915 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 17:39:47.613005 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:39:47.614195 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 4 17:39:47.614244 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:39:47.617458 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 4 17:39:47.617545 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:39:47.618840 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 4 17:39:47.618887 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 17:39:47.620215 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 4 17:39:47.620266 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:39:47.621431 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 4 17:39:47.621477 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep 4 17:39:47.622882 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:39:47.622927 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:39:47.624287 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 4 17:39:47.624380 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 4 17:39:47.625358 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 4 17:39:47.625442 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 4 17:39:47.626920 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 4 17:39:47.634760 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 4 17:39:47.645209 systemd[1]: Switching root.
Sep 4 17:39:47.674568 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Sep 4 17:39:47.674680 systemd-journald[183]: Journal stopped
Sep 4 17:39:49.669558 kernel: SELinux: policy capability network_peer_controls=1
Sep 4 17:39:49.669642 kernel: SELinux: policy capability open_perms=1
Sep 4 17:39:49.669658 kernel: SELinux: policy capability extended_socket_class=1
Sep 4 17:39:49.669672 kernel: SELinux: policy capability always_check_network=0
Sep 4 17:39:49.669686 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 4 17:39:49.669705 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 4 17:39:49.669719 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 4 17:39:49.669732 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 4 17:39:49.669749 kernel: audit: type=1403 audit(1725471588.552:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 4 17:39:49.669766 systemd[1]: Successfully loaded SELinux policy in 77.700ms.
Sep 4 17:39:49.669811 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.631ms.
Sep 4 17:39:49.669829 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 4 17:39:49.669845 systemd[1]: Detected virtualization kvm.
Sep 4 17:39:49.669860 systemd[1]: Detected architecture x86-64.
Sep 4 17:39:49.669875 systemd[1]: Detected first boot.
Sep 4 17:39:49.669890 systemd[1]: Hostname set to .
Sep 4 17:39:49.669908 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 17:39:49.669924 zram_generator::config[996]: No configuration found.
Sep 4 17:39:49.669944 systemd[1]: Populated /etc with preset unit settings.
Sep 4 17:39:49.669959 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 4 17:39:49.669974 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 4 17:39:49.669989 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 4 17:39:49.670005 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 4 17:39:49.670019 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 4 17:39:49.670034 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 4 17:39:49.670052 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 4 17:39:49.670067 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 4 17:39:49.670081 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 4 17:39:49.670096 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 4 17:39:49.670111 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 4 17:39:49.670125 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:39:49.670144 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:39:49.670158 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 4 17:39:49.670176 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 4 17:39:49.670332 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 4 17:39:49.670348 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 17:39:49.670363 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 4 17:39:49.670378 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:39:49.670393 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 4 17:39:49.670414 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 4 17:39:49.670430 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 4 17:39:49.670448 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 4 17:39:49.670463 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:39:49.670478 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 17:39:49.670493 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 17:39:49.670507 systemd[1]: Reached target swap.target - Swaps.
Sep 4 17:39:49.670722 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 4 17:39:49.670742 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 4 17:39:49.670757 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:39:49.670776 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:39:49.670791 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:39:49.670805 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 4 17:39:49.670820 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 4 17:39:49.670835 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 4 17:39:49.670852 systemd[1]: Mounting media.mount - External Media Directory...
Sep 4 17:39:49.670867 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:39:49.670882 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 4 17:39:49.670896 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 4 17:39:49.670914 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 4 17:39:49.670931 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 4 17:39:49.670946 systemd[1]: Reached target machines.target - Containers.
Sep 4 17:39:49.670961 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 4 17:39:49.670976 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:39:49.670991 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 17:39:49.671006 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 4 17:39:49.671021 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:39:49.671038 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 17:39:49.671058 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 17:39:49.671074 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 4 17:39:49.671089 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 17:39:49.671104 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 4 17:39:49.671119 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 4 17:39:49.671133 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 4 17:39:49.671148 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 4 17:39:49.671165 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 4 17:39:49.671182 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 17:39:49.671196 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 17:39:49.671214 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 17:39:49.671229 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 4 17:39:49.671243 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 17:39:49.671258 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 4 17:39:49.671273 systemd[1]: Stopped verity-setup.service.
Sep 4 17:39:49.671288 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:39:49.671304 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 4 17:39:49.671319 kernel: loop: module loaded
Sep 4 17:39:49.671333 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 4 17:39:49.671347 systemd[1]: Mounted media.mount - External Media Directory.
Sep 4 17:39:49.671362 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 4 17:39:49.671379 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 4 17:39:49.671395 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 4 17:39:49.671438 systemd-journald[1091]: Collecting audit messages is disabled.
Sep 4 17:39:49.671468 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:39:49.671482 kernel: fuse: init (API version 7.39)
Sep 4 17:39:49.671495 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 4 17:39:49.671510 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 4 17:39:49.671544 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:39:49.671563 systemd-journald[1091]: Journal started
Sep 4 17:39:49.671593 systemd-journald[1091]: Runtime Journal (/run/log/journal/522243a3b84946979b3105e2c36ba3ff) is 4.9M, max 39.3M, 34.4M free.
Sep 4 17:39:49.320168 systemd[1]: Queued start job for default target multi-user.target.
Sep 4 17:39:49.349281 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 4 17:39:49.349769 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 4 17:39:49.672706 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:39:49.675801 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 17:39:49.676970 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 17:39:49.677163 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 17:39:49.677963 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 4 17:39:49.678145 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 4 17:39:49.678919 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 17:39:49.679104 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 17:39:49.681003 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 17:39:49.689386 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 17:39:49.698611 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 4 17:39:49.705650 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 4 17:39:49.706625 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 17:39:49.712725 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 17:39:49.715379 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 4 17:39:49.716218 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 4 17:39:49.732692 kernel: ACPI: bus type drm_connector registered
Sep 4 17:39:49.733905 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 17:39:49.734433 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 17:39:49.737954 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 4 17:39:49.738738 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 4 17:39:49.738778 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 17:39:49.740562 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 4 17:39:49.748665 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 4 17:39:49.754640 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 4 17:39:49.755351 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:39:49.828918 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 4 17:39:49.842414 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 4 17:39:49.843969 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 17:39:49.846870 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 4 17:39:49.853675 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 4 17:39:49.855236 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 4 17:39:49.856037 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:39:49.856762 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:39:49.857497 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 4 17:39:49.867848 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 17:39:49.875820 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 4 17:39:49.888899 systemd-journald[1091]: Time spent on flushing to /var/log/journal/522243a3b84946979b3105e2c36ba3ff is 58.135ms for 942 entries.
Sep 4 17:39:49.888899 systemd-journald[1091]: System Journal (/var/log/journal/522243a3b84946979b3105e2c36ba3ff) is 8.0M, max 584.8M, 576.8M free.
Sep 4 17:39:50.204064 systemd-journald[1091]: Received client request to flush runtime journal.
Sep 4 17:39:50.204133 kernel: loop0: detected capacity change from 0 to 80568
Sep 4 17:39:50.204159 kernel: block loop0: the capability attribute has been deprecated.
Sep 4 17:39:49.890161 systemd-tmpfiles[1118]: ACLs are not supported, ignoring.
Sep 4 17:39:49.890565 systemd-tmpfiles[1118]: ACLs are not supported, ignoring.
Sep 4 17:39:49.898162 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 17:39:49.904722 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 4 17:39:49.907439 udevadm[1135]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 4 17:39:50.078771 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 4 17:39:50.081188 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 4 17:39:50.091937 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 4 17:39:50.104793 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:39:50.155786 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 4 17:39:50.163836 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 17:39:50.208853 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 4 17:39:50.211891 systemd-tmpfiles[1145]: ACLs are not supported, ignoring.
Sep 4 17:39:50.211905 systemd-tmpfiles[1145]: ACLs are not supported, ignoring.
Sep 4 17:39:50.219861 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:39:50.226661 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 4 17:39:50.230632 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 4 17:39:50.231034 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 4 17:39:50.261634 kernel: loop1: detected capacity change from 0 to 8
Sep 4 17:39:50.279547 kernel: loop2: detected capacity change from 0 to 211296
Sep 4 17:39:50.345737 kernel: loop3: detected capacity change from 0 to 139904
Sep 4 17:39:50.448555 kernel: loop4: detected capacity change from 0 to 80568
Sep 4 17:39:50.510756 kernel: loop5: detected capacity change from 0 to 8
Sep 4 17:39:50.515989 kernel: loop6: detected capacity change from 0 to 211296
Sep 4 17:39:50.570541 kernel: loop7: detected capacity change from 0 to 139904
Sep 4 17:39:50.624911 (sd-merge)[1157]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Sep 4 17:39:50.626754 (sd-merge)[1157]: Merged extensions into '/usr'.
Sep 4 17:39:50.646839 systemd[1]: Reloading requested from client PID 1129 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 4 17:39:50.646864 systemd[1]: Reloading...
Sep 4 17:39:50.753547 zram_generator::config[1181]: No configuration found.
Sep 4 17:39:50.913434 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:39:50.976484 systemd[1]: Reloading finished in 328 ms.
Sep 4 17:39:51.006204 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 4 17:39:51.016191 systemd[1]: Starting ensure-sysext.service...
Sep 4 17:39:51.022293 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Sep 4 17:39:51.024929 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 4 17:39:51.036735 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:39:51.047162 systemd[1]: Reloading requested from client PID 1236 ('systemctl') (unit ensure-sysext.service)...
Sep 4 17:39:51.047179 systemd[1]: Reloading...
Sep 4 17:39:51.056878 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 4 17:39:51.057748 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 4 17:39:51.059126 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 4 17:39:51.060416 systemd-tmpfiles[1237]: ACLs are not supported, ignoring.
Sep 4 17:39:51.060569 systemd-tmpfiles[1237]: ACLs are not supported, ignoring.
Sep 4 17:39:51.069920 systemd-tmpfiles[1237]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 17:39:51.069934 systemd-tmpfiles[1237]: Skipping /boot
Sep 4 17:39:51.089374 systemd-tmpfiles[1237]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 17:39:51.089392 systemd-tmpfiles[1237]: Skipping /boot
Sep 4 17:39:51.110132 systemd-udevd[1239]: Using default interface naming scheme 'v255'.
Sep 4 17:39:51.138629 zram_generator::config[1265]: No configuration found.
Sep 4 17:39:51.215263 ldconfig[1125]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 4 17:39:51.309980 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1301)
Sep 4 17:39:51.333578 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1286)
Sep 4 17:39:51.347578 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 4 17:39:51.365559 kernel: ACPI: button: Power Button [PWRF]
Sep 4 17:39:51.382551 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Sep 4 17:39:51.405554 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Sep 4 17:39:51.430270 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:39:51.472672 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Sep 4 17:39:51.472769 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Sep 4 17:39:51.479561 kernel: mousedev: PS/2 mouse device common for all mice
Sep 4 17:39:51.482545 kernel: Console: switching to colour dummy device 80x25
Sep 4 17:39:51.484577 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Sep 4 17:39:51.484654 kernel: [drm] features: -context_init
Sep 4 17:39:51.486542 kernel: [drm] number of scanouts: 1
Sep 4 17:39:51.486581 kernel: [drm] number of cap sets: 0
Sep 4 17:39:51.490530 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Sep 4 17:39:51.498548 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Sep 4 17:39:51.498623 kernel: Console: switching to colour frame buffer device 128x48
Sep 4 17:39:51.500542 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Sep 4 17:39:51.529960 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 4 17:39:51.530362 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 4 17:39:51.532325 systemd[1]: Reloading finished in 484 ms.
Sep 4 17:39:51.550312 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:39:51.551647 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 4 17:39:51.563432 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep 4 17:39:51.589628 systemd[1]: Finished ensure-sysext.service.
Sep 4 17:39:51.605282 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:39:51.611878 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 4 17:39:51.621904 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 4 17:39:51.622424 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:39:51.625621 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:39:51.630099 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 17:39:51.635908 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 17:39:51.646961 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 17:39:51.649906 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:39:51.652647 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 4 17:39:51.663898 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 4 17:39:51.676914 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 17:39:51.683263 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 17:39:51.696062 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 4 17:39:51.710152 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 4 17:39:51.714362 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:39:51.715213 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:39:51.717167 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 4 17:39:51.717907 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:39:51.718423 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:39:51.726257 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 17:39:51.726435 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 17:39:51.728229 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 17:39:51.730815 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 17:39:51.737638 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 17:39:51.738581 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 17:39:51.740686 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 4 17:39:51.759771 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 4 17:39:51.760773 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 17:39:51.760884 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 17:39:51.771176 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 4 17:39:51.774923 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 4 17:39:51.776682 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 4 17:39:51.779968 augenrules[1390]: No rules
Sep 4 17:39:51.783980 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 4 17:39:51.800956 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 4 17:39:51.801656 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 4 17:39:51.806644 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 4 17:39:51.814674 lvm[1386]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 17:39:51.831011 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 4 17:39:51.840349 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 4 17:39:51.844205 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:39:51.852737 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 4 17:39:51.856975 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 4 17:39:51.860212 lvm[1402]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 17:39:51.939958 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 4 17:39:51.993505 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 4 17:39:51.994381 systemd[1]: Reached target time-set.target - System Time Set.
Sep 4 17:39:52.023710 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:39:52.031014 systemd-networkd[1367]: lo: Link UP
Sep 4 17:39:52.031031 systemd-networkd[1367]: lo: Gained carrier
Sep 4 17:39:52.032282 systemd-networkd[1367]: Enumeration completed
Sep 4 17:39:52.032382 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 17:39:52.033430 systemd-networkd[1367]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:39:52.033439 systemd-networkd[1367]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
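The two lvm warnings above (lvm[1386] and lvm[1402]) are benign: no lvmetad daemon is running, so LVM simply falls back to scanning devices directly. On LVM2 releases old enough to still ship lvmetad (before 2.03), the same behavior can be made explicit in lvm.conf; the fragment below is an illustrative sketch under that assumption, not configuration taken from this host:

```ini
# Hypothetical /etc/lvm/lvm.conf fragment (LVM2 < 2.03, where lvmetad existed).
# Disabling lvmetad makes direct device scanning the normal path and
# silences the "Failed to connect to lvmetad" fallback warning.
global {
    use_lvmetad = 0
}
```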
Sep 4 17:39:52.036957 systemd-networkd[1367]: eth0: Link UP
Sep 4 17:39:52.036974 systemd-networkd[1367]: eth0: Gained carrier
Sep 4 17:39:52.037017 systemd-networkd[1367]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:39:52.038250 systemd-resolved[1369]: Positive Trust Anchors:
Sep 4 17:39:52.038277 systemd-resolved[1369]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 17:39:52.038359 systemd-resolved[1369]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Sep 4 17:39:52.041360 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 4 17:39:52.045366 systemd-resolved[1369]: Using system hostname 'ci-3975-2-1-d-945344e89d.novalocal'.
Sep 4 17:39:52.049532 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 17:39:52.052194 systemd[1]: Reached target network.target - Network.
Sep 4 17:39:52.053210 systemd-networkd[1367]: eth0: DHCPv4 address 172.24.4.44/24, gateway 172.24.4.1 acquired from 172.24.4.1
Sep 4 17:39:52.054111 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:39:52.054334 systemd-timesyncd[1370]: Network configuration changed, trying to establish connection.
Sep 4 17:39:52.056213 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 17:39:52.056856 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
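The "found matching network ... based on potentially unpredictable interface name" entries come from Flatcar's catch-all /usr/lib/systemd/network/zz-default.network, which matches any interface and enables DHCP; that is what produces the DHCPv4 lease on eth0 above. A sketch of what such a catch-all systemd.network unit looks like, with values assumed rather than copied from this image:

```ini
# Illustrative catch-all unit in the spirit of zz-default.network;
# the file actually shipped in the image may differ.
[Match]
# Matches every interface name, including "unpredictable" ones like eth0.
Name=*

[Network]
DHCP=yes
```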
Sep 4 17:39:52.057337 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 4 17:39:52.059832 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 4 17:39:52.061194 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 4 17:39:52.062693 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 4 17:39:52.064095 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 4 17:39:52.064228 systemd[1]: Reached target paths.target - Path Units.
Sep 4 17:39:52.065565 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 17:39:52.068177 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 4 17:39:52.073673 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 4 17:39:52.079880 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 4 17:39:52.082340 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 4 17:39:52.084506 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 17:39:52.085008 systemd[1]: Reached target basic.target - Basic System.
Sep 4 17:39:52.085494 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 4 17:39:52.087240 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 4 17:39:52.095604 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 4 17:39:52.097927 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 4 17:39:52.105731 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 4 17:39:52.117626 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 4 17:39:52.121207 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 4 17:39:52.122361 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 4 17:39:52.126661 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 4 17:39:52.132646 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 4 17:39:52.137812 jq[1423]: false
Sep 4 17:39:52.141603 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 4 17:39:52.145708 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 4 17:39:52.156691 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 4 17:39:52.159421 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 4 17:39:52.161011 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 4 17:39:52.161734 systemd[1]: Starting update-engine.service - Update Engine...
Sep 4 17:39:52.164676 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 4 17:39:52.167930 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 4 17:39:52.169583 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 4 17:39:52.176477 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 4 17:39:52.176723 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 4 17:39:52.184568 extend-filesystems[1424]: Found loop4
Sep 4 17:39:52.184568 extend-filesystems[1424]: Found loop5
Sep 4 17:39:52.184568 extend-filesystems[1424]: Found loop6
Sep 4 17:39:52.184568 extend-filesystems[1424]: Found loop7
Sep 4 17:39:52.184568 extend-filesystems[1424]: Found vda
Sep 4 17:39:52.184568 extend-filesystems[1424]: Found vda1
Sep 4 17:39:52.184568 extend-filesystems[1424]: Found vda2
Sep 4 17:39:52.184568 extend-filesystems[1424]: Found vda3
Sep 4 17:39:52.184568 extend-filesystems[1424]: Found usr
Sep 4 17:39:52.184568 extend-filesystems[1424]: Found vda4
Sep 4 17:39:52.184568 extend-filesystems[1424]: Found vda6
Sep 4 17:39:52.184568 extend-filesystems[1424]: Found vda7
Sep 4 17:39:52.184568 extend-filesystems[1424]: Found vda9
Sep 4 17:39:52.184568 extend-filesystems[1424]: Checking size of /dev/vda9
Sep 4 17:39:52.189487 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 4 17:39:52.189092 dbus-daemon[1420]: [system] SELinux support is enabled
Sep 4 17:39:52.199200 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 4 17:39:52.199227 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 4 17:39:52.262477 update_engine[1431]: I0904 17:39:52.218026 1431 main.cc:92] Flatcar Update Engine starting
Sep 4 17:39:52.262477 update_engine[1431]: I0904 17:39:52.233338 1431 update_check_scheduler.cc:74] Next update check in 5m27s
Sep 4 17:39:52.201896 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 4 17:39:52.262831 jq[1433]: true
Sep 4 17:39:52.201916 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 4 17:39:52.208714 systemd[1]: motdgen.service: Deactivated successfully.
Sep 4 17:39:52.208920 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 4 17:39:52.232688 systemd[1]: Started update-engine.service - Update Engine.
Sep 4 17:39:52.243236 (ntainerd)[1451]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 4 17:39:52.259683 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 4 17:39:52.272818 extend-filesystems[1424]: Resized partition /dev/vda9
Sep 4 17:39:52.279257 tar[1436]: linux-amd64/helm
Sep 4 17:39:52.287728 extend-filesystems[1459]: resize2fs 1.47.0 (5-Feb-2023)
Sep 4 17:39:52.319666 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1300)
Sep 4 17:39:52.319700 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks
Sep 4 17:39:52.319716 jq[1452]: true
Sep 4 17:39:52.312273 systemd-logind[1430]: New seat seat0.
Sep 4 17:39:52.327098 systemd-logind[1430]: Watching system buttons on /dev/input/event1 (Power Button)
Sep 4 17:39:52.327124 systemd-logind[1430]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 4 17:39:52.329653 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 4 17:39:52.465722 locksmithd[1456]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 4 17:39:52.520022 kernel: EXT4-fs (vda9): resized filesystem to 4635643
Sep 4 17:39:52.625839 bash[1477]: Updated "/home/core/.ssh/authorized_keys"
Sep 4 17:39:52.624933 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 4 17:39:52.626320 extend-filesystems[1459]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 4 17:39:52.626320 extend-filesystems[1459]: old_desc_blocks = 1, new_desc_blocks = 3
Sep 4 17:39:52.626320 extend-filesystems[1459]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long.
Sep 4 17:39:52.627668 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 4 17:39:52.647962 extend-filesystems[1424]: Resized filesystem in /dev/vda9
Sep 4 17:39:52.649332 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 4 17:39:52.675077 systemd[1]: Starting sshkeys.service...
Sep 4 17:39:52.702605 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Sep 4 17:39:52.728925 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Sep 4 17:39:52.808106 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 4 17:39:52.901476 containerd[1451]: time="2024-09-04T17:39:52.901323266Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17
Sep 4 17:39:52.925300 sshd_keygen[1448]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 4 17:39:52.948018 containerd[1451]: time="2024-09-04T17:39:52.947411575Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 4 17:39:52.948018 containerd[1451]: time="2024-09-04T17:39:52.947464975Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 4 17:39:52.949645 containerd[1451]: time="2024-09-04T17:39:52.949611651Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.48-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:39:52.950544 containerd[1451]: time="2024-09-04T17:39:52.949716879Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 4 17:39:52.950544 containerd[1451]: time="2024-09-04T17:39:52.949942401Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:39:52.950544 containerd[1451]: time="2024-09-04T17:39:52.949963711Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 4 17:39:52.950544 containerd[1451]: time="2024-09-04T17:39:52.950098845Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 4 17:39:52.950544 containerd[1451]: time="2024-09-04T17:39:52.950163827Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:39:52.950544 containerd[1451]: time="2024-09-04T17:39:52.950180488Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 4 17:39:52.950544 containerd[1451]: time="2024-09-04T17:39:52.950256450Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 4 17:39:52.950544 containerd[1451]: time="2024-09-04T17:39:52.950456486Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 4 17:39:52.950544 containerd[1451]: time="2024-09-04T17:39:52.950477535Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Sep 4 17:39:52.950544 containerd[1451]: time="2024-09-04T17:39:52.950489738Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 4 17:39:52.950879 containerd[1451]: time="2024-09-04T17:39:52.950857378Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:39:52.950951 containerd[1451]: time="2024-09-04T17:39:52.950935995Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 4 17:39:52.951063 containerd[1451]: time="2024-09-04T17:39:52.951044108Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Sep 4 17:39:52.951120 containerd[1451]: time="2024-09-04T17:39:52.951107707Z" level=info msg="metadata content store policy set" policy=shared
Sep 4 17:39:52.965022 containerd[1451]: time="2024-09-04T17:39:52.964980135Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 4 17:39:52.965022 containerd[1451]: time="2024-09-04T17:39:52.965026351Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 4 17:39:52.965022 containerd[1451]: time="2024-09-04T17:39:52.965044475Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 4 17:39:52.965276 containerd[1451]: time="2024-09-04T17:39:52.965085222Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 4 17:39:52.965276 containerd[1451]: time="2024-09-04T17:39:52.965105951Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 4 17:39:52.965276 containerd[1451]: time="2024-09-04T17:39:52.965120258Z" level=info msg="NRI interface is disabled by configuration."
Sep 4 17:39:52.965276 containerd[1451]: time="2024-09-04T17:39:52.965136518Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 4 17:39:52.965416 containerd[1451]: time="2024-09-04T17:39:52.965278314Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 4 17:39:52.965416 containerd[1451]: time="2024-09-04T17:39:52.965300355Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 4 17:39:52.965416 containerd[1451]: time="2024-09-04T17:39:52.965314121Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 4 17:39:52.965416 containerd[1451]: time="2024-09-04T17:39:52.965329410Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 4 17:39:52.965416 containerd[1451]: time="2024-09-04T17:39:52.965345039Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 4 17:39:52.965416 containerd[1451]: time="2024-09-04T17:39:52.965363133Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 4 17:39:52.965416 containerd[1451]: time="2024-09-04T17:39:52.965378933Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 4 17:39:52.965416 containerd[1451]: time="2024-09-04T17:39:52.965393119Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 4 17:39:52.965416 containerd[1451]: time="2024-09-04T17:39:52.965408488Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 4 17:39:52.965686 containerd[1451]: time="2024-09-04T17:39:52.965423596Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 4 17:39:52.965686 containerd[1451]: time="2024-09-04T17:39:52.965438595Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 4 17:39:52.965686 containerd[1451]: time="2024-09-04T17:39:52.965453032Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 4 17:39:52.966903 containerd[1451]: time="2024-09-04T17:39:52.966877193Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 4 17:39:52.967217 containerd[1451]: time="2024-09-04T17:39:52.967150405Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 4 17:39:52.967217 containerd[1451]: time="2024-09-04T17:39:52.967186483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 4 17:39:52.967217 containerd[1451]: time="2024-09-04T17:39:52.967203515Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 4 17:39:52.967470 containerd[1451]: time="2024-09-04T17:39:52.967229333Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 4 17:39:52.967470 containerd[1451]: time="2024-09-04T17:39:52.967293383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 4 17:39:52.967470 containerd[1451]: time="2024-09-04T17:39:52.967310095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 4 17:39:52.967470 containerd[1451]: time="2024-09-04T17:39:52.967324452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 4 17:39:52.967470 containerd[1451]: time="2024-09-04T17:39:52.967340502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 4 17:39:52.967470 containerd[1451]: time="2024-09-04T17:39:52.967354217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 4 17:39:52.967470 containerd[1451]: time="2024-09-04T17:39:52.967369456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 4 17:39:52.967470 containerd[1451]: time="2024-09-04T17:39:52.967383462Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 4 17:39:52.967470 containerd[1451]: time="2024-09-04T17:39:52.967397388Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 4 17:39:52.967470 containerd[1451]: time="2024-09-04T17:39:52.967413008Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 4 17:39:52.968967 containerd[1451]: time="2024-09-04T17:39:52.968941795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 4 17:39:52.968967 containerd[1451]: time="2024-09-04T17:39:52.968972422Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 4 17:39:52.969049 containerd[1451]: time="2024-09-04T17:39:52.968991799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 4 17:39:52.969049 containerd[1451]: time="2024-09-04T17:39:52.969006967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 4 17:39:52.969049 containerd[1451]: time="2024-09-04T17:39:52.969021594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 4 17:39:52.969049 containerd[1451]: time="2024-09-04T17:39:52.969041342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 4 17:39:52.969135 containerd[1451]: time="2024-09-04T17:39:52.969056159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 4 17:39:52.969135 containerd[1451]: time="2024-09-04T17:39:52.969069765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 4 17:39:52.969703 containerd[1451]: time="2024-09-04T17:39:52.969384485Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 4 17:39:52.969703 containerd[1451]: time="2024-09-04T17:39:52.969459766Z" level=info msg="Connect containerd service"
Sep 4 17:39:52.969703 containerd[1451]: time="2024-09-04T17:39:52.969486917Z" level=info msg="using legacy CRI server"
Sep 4 17:39:52.969703 containerd[1451]: time="2024-09-04T17:39:52.969494271Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 4 17:39:52.971075 containerd[1451]: time="2024-09-04T17:39:52.971046923Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 4 17:39:52.976672 containerd[1451]: time="2024-09-04T17:39:52.971830673Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 4 17:39:52.976672 containerd[1451]: time="2024-09-04T17:39:52.971887199Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 4 17:39:52.976672 containerd[1451]: time="2024-09-04T17:39:52.971912917Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 4 17:39:52.976672 containerd[1451]: time="2024-09-04T17:39:52.971927625Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 4 17:39:52.976672 containerd[1451]: time="2024-09-04T17:39:52.971943925Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 4 17:39:52.976672 containerd[1451]: time="2024-09-04T17:39:52.972324549Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 4 17:39:52.976672 containerd[1451]: time="2024-09-04T17:39:52.972376046Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 4 17:39:52.976672 containerd[1451]: time="2024-09-04T17:39:52.972408326Z" level=info msg="Start subscribing containerd event"
Sep 4 17:39:52.976672 containerd[1451]: time="2024-09-04T17:39:52.972450395Z" level=info msg="Start recovering state"
Sep 4 17:39:52.976672 containerd[1451]: time="2024-09-04T17:39:52.972503936Z" level=info msg="Start event monitor"
Sep 4 17:39:52.976672 containerd[1451]: time="2024-09-04T17:39:52.972536667Z" level=info msg="Start snapshots syncer"
Sep 4 17:39:52.976672 containerd[1451]: time="2024-09-04T17:39:52.972548449Z" level=info msg="Start cni network conf syncer for default"
Sep 4 17:39:52.976672 containerd[1451]: time="2024-09-04T17:39:52.972557015Z" level=info msg="Start streaming server"
Sep 4 17:39:52.976672 containerd[1451]: time="2024-09-04T17:39:52.972603292Z" level=info msg="containerd successfully booted in 0.074987s"
Sep 4 17:39:52.973663 systemd[1]: Started containerd.service - containerd container runtime.
Sep 4 17:39:52.974827 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 4 17:39:52.984852 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 4 17:39:52.992994 systemd[1]: Started sshd@0-172.24.4.44:22-172.24.4.1:32884.service - OpenSSH per-connection server daemon (172.24.4.1:32884).
Sep 4 17:39:53.006623 systemd[1]: issuegen.service: Deactivated successfully.
Sep 4 17:39:53.007899 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 4 17:39:53.027413 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 4 17:39:53.043811 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 4 17:39:53.053986 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 4 17:39:53.058505 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 4 17:39:53.060477 systemd[1]: Reached target getty.target - Login Prompts.
Sep 4 17:39:53.175674 tar[1436]: linux-amd64/LICENSE
Sep 4 17:39:53.175901 tar[1436]: linux-amd64/README.md
Sep 4 17:39:53.190180 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 4 17:39:53.331927 systemd-networkd[1367]: eth0: Gained IPv6LL
Sep 4 17:39:53.333197 systemd-timesyncd[1370]: Network configuration changed, trying to establish connection.
Sep 4 17:39:53.336234 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 4 17:39:53.342255 systemd[1]: Reached target network-online.target - Network is Online.
Sep 4 17:39:53.359080 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:39:53.375747 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 4 17:39:53.435592 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 4 17:39:54.518083 sshd[1506]: Accepted publickey for core from 172.24.4.1 port 32884 ssh2: RSA SHA256:SturRzFslRD/T8wREGvsPcKnS9Jm32+wyVbRetuFUDw
Sep 4 17:39:54.523775 sshd[1506]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:39:54.554087 systemd-logind[1430]: New session 1 of user core.
Sep 4 17:39:54.557681 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 4 17:39:54.575839 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 4 17:39:54.615905 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 4 17:39:54.631255 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 4 17:39:54.652341 (systemd)[1535]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:39:54.780409 systemd[1535]: Queued start job for default target default.target.
Sep 4 17:39:54.787817 systemd[1535]: Created slice app.slice - User Application Slice.
Sep 4 17:39:54.787920 systemd[1535]: Reached target paths.target - Paths.
Sep 4 17:39:54.787998 systemd[1535]: Reached target timers.target - Timers.
Sep 4 17:39:54.789510 systemd[1535]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 4 17:39:54.812494 systemd[1535]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 4 17:39:54.813239 systemd[1535]: Reached target sockets.target - Sockets.
Sep 4 17:39:54.813258 systemd[1535]: Reached target basic.target - Basic System.
Sep 4 17:39:54.813311 systemd[1535]: Reached target default.target - Main User Target.
Sep 4 17:39:54.813346 systemd[1535]: Startup finished in 154ms.
Sep 4 17:39:54.813385 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 4 17:39:54.821797 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 4 17:39:55.086909 (kubelet)[1549]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:39:55.087138 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:39:55.291194 systemd[1]: Started sshd@1-172.24.4.44:22-172.24.4.1:36582.service - OpenSSH per-connection server daemon (172.24.4.1:36582).
Sep 4 17:39:57.165834 sshd[1553]: Accepted publickey for core from 172.24.4.1 port 36582 ssh2: RSA SHA256:SturRzFslRD/T8wREGvsPcKnS9Jm32+wyVbRetuFUDw
Sep 4 17:39:57.168148 sshd[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:39:57.178723 systemd-logind[1430]: New session 2 of user core.
Sep 4 17:39:57.184264 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 4 17:39:57.282135 kubelet[1549]: E0904 17:39:57.281929 1549 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 17:39:57.286837 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 17:39:57.287196 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 17:39:57.288173 systemd[1]: kubelet.service: Consumed 2.197s CPU time.
Sep 4 17:39:57.812897 sshd[1553]: pam_unix(sshd:session): session closed for user core
Sep 4 17:39:57.824850 systemd[1]: sshd@1-172.24.4.44:22-172.24.4.1:36582.service: Deactivated successfully.
Sep 4 17:39:57.828917 systemd[1]: session-2.scope: Deactivated successfully.
Sep 4 17:39:57.830945 systemd-logind[1430]: Session 2 logged out. Waiting for processes to exit.
Sep 4 17:39:57.838392 systemd[1]: Started sshd@2-172.24.4.44:22-172.24.4.1:36596.service - OpenSSH per-connection server daemon (172.24.4.1:36596).
Sep 4 17:39:57.844691 systemd-logind[1430]: Removed session 2.
Sep 4 17:39:58.295862 login[1513]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Sep 4 17:39:58.300988 login[1514]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Sep 4 17:39:58.305318 systemd-logind[1430]: New session 4 of user core.
Sep 4 17:39:58.316461 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 4 17:39:58.322903 systemd-logind[1430]: New session 3 of user core.
Sep 4 17:39:58.330038 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 4 17:39:59.142460 sshd[1569]: Accepted publickey for core from 172.24.4.1 port 36596 ssh2: RSA SHA256:SturRzFslRD/T8wREGvsPcKnS9Jm32+wyVbRetuFUDw
Sep 4 17:39:59.145006 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:39:59.151563 systemd-logind[1430]: New session 5 of user core.
Sep 4 17:39:59.163919 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 4 17:39:59.201126 coreos-metadata[1419]: Sep 04 17:39:59.200 WARN failed to locate config-drive, using the metadata service API instead
Sep 4 17:39:59.247939 coreos-metadata[1419]: Sep 04 17:39:59.247 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Sep 4 17:39:59.451107 coreos-metadata[1419]: Sep 04 17:39:59.450 INFO Fetch successful
Sep 4 17:39:59.451107 coreos-metadata[1419]: Sep 04 17:39:59.450 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Sep 4 17:39:59.466802 coreos-metadata[1419]: Sep 04 17:39:59.466 INFO Fetch successful
Sep 4 17:39:59.466802 coreos-metadata[1419]: Sep 04 17:39:59.466 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Sep 4 17:39:59.482743 coreos-metadata[1419]: Sep 04 17:39:59.482 INFO Fetch successful
Sep 4 17:39:59.482743 coreos-metadata[1419]: Sep 04 17:39:59.482 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Sep 4 17:39:59.499916 coreos-metadata[1419]: Sep 04 17:39:59.499 INFO Fetch successful
Sep 4 17:39:59.499916 coreos-metadata[1419]: Sep 04 17:39:59.499 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Sep 4 17:39:59.516204 coreos-metadata[1419]: Sep 04 17:39:59.516 INFO Fetch successful
Sep 4 17:39:59.516204 coreos-metadata[1419]: Sep 04 17:39:59.516 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Sep 4 17:39:59.534708 coreos-metadata[1419]: Sep 04 17:39:59.534 INFO Fetch successful
Sep 4 17:39:59.562697 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 4 17:39:59.564121 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 4 17:39:59.787842 sshd[1569]: pam_unix(sshd:session): session closed for user core
Sep 4 17:39:59.794164 systemd[1]: sshd@2-172.24.4.44:22-172.24.4.1:36596.service: Deactivated successfully.
Sep 4 17:39:59.797316 systemd[1]: session-5.scope: Deactivated successfully.
Sep 4 17:39:59.800326 systemd-logind[1430]: Session 5 logged out. Waiting for processes to exit.
Sep 4 17:39:59.802578 systemd-logind[1430]: Removed session 5.
Sep 4 17:39:59.831110 coreos-metadata[1490]: Sep 04 17:39:59.830 WARN failed to locate config-drive, using the metadata service API instead
Sep 4 17:39:59.872152 coreos-metadata[1490]: Sep 04 17:39:59.872 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Sep 4 17:39:59.891151 coreos-metadata[1490]: Sep 04 17:39:59.891 INFO Fetch successful
Sep 4 17:39:59.891151 coreos-metadata[1490]: Sep 04 17:39:59.891 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Sep 4 17:39:59.909637 coreos-metadata[1490]: Sep 04 17:39:59.909 INFO Fetch successful
Sep 4 17:39:59.969421 unknown[1490]: wrote ssh authorized keys file for user: core
Sep 4 17:40:00.321016 update-ssh-keys[1603]: Updated "/home/core/.ssh/authorized_keys"
Sep 4 17:40:00.322469 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Sep 4 17:40:00.327354 systemd[1]: Finished sshkeys.service.
Sep 4 17:40:00.332900 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 4 17:40:00.333294 systemd[1]: Startup finished in 1.273s (kernel) + 16.725s (initrd) + 11.857s (userspace) = 29.856s.
Sep 4 17:40:07.538004 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 4 17:40:07.547777 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:40:07.973934 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:40:07.979465 (kubelet)[1615]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:40:08.692066 kubelet[1615]: E0904 17:40:08.691863 1615 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 17:40:08.702249 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 17:40:08.702725 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 17:40:09.813135 systemd[1]: Started sshd@3-172.24.4.44:22-172.24.4.1:32902.service - OpenSSH per-connection server daemon (172.24.4.1:32902).
Sep 4 17:40:11.127936 sshd[1624]: Accepted publickey for core from 172.24.4.1 port 32902 ssh2: RSA SHA256:SturRzFslRD/T8wREGvsPcKnS9Jm32+wyVbRetuFUDw
Sep 4 17:40:11.130973 sshd[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:40:11.143122 systemd-logind[1430]: New session 6 of user core.
Sep 4 17:40:11.155014 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 4 17:40:11.773844 sshd[1624]: pam_unix(sshd:session): session closed for user core
Sep 4 17:40:11.787735 systemd[1]: sshd@3-172.24.4.44:22-172.24.4.1:32902.service: Deactivated successfully.
Sep 4 17:40:11.791903 systemd[1]: session-6.scope: Deactivated successfully.
Sep 4 17:40:11.796962 systemd-logind[1430]: Session 6 logged out. Waiting for processes to exit.
Sep 4 17:40:11.805175 systemd[1]: Started sshd@4-172.24.4.44:22-172.24.4.1:32908.service - OpenSSH per-connection server daemon (172.24.4.1:32908).
Sep 4 17:40:11.808251 systemd-logind[1430]: Removed session 6.
Sep 4 17:40:13.319723 sshd[1631]: Accepted publickey for core from 172.24.4.1 port 32908 ssh2: RSA SHA256:SturRzFslRD/T8wREGvsPcKnS9Jm32+wyVbRetuFUDw
Sep 4 17:40:13.322420 sshd[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:40:13.333651 systemd-logind[1430]: New session 7 of user core.
Sep 4 17:40:13.337153 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 4 17:40:13.846091 sshd[1631]: pam_unix(sshd:session): session closed for user core
Sep 4 17:40:13.861673 systemd[1]: sshd@4-172.24.4.44:22-172.24.4.1:32908.service: Deactivated successfully.
Sep 4 17:40:13.863616 systemd[1]: session-7.scope: Deactivated successfully.
Sep 4 17:40:13.866472 systemd-logind[1430]: Session 7 logged out. Waiting for processes to exit.
Sep 4 17:40:13.873949 systemd[1]: Started sshd@5-172.24.4.44:22-172.24.4.1:32914.service - OpenSSH per-connection server daemon (172.24.4.1:32914).
Sep 4 17:40:13.901422 systemd-logind[1430]: Removed session 7.
Sep 4 17:40:15.126554 sshd[1638]: Accepted publickey for core from 172.24.4.1 port 32914 ssh2: RSA SHA256:SturRzFslRD/T8wREGvsPcKnS9Jm32+wyVbRetuFUDw
Sep 4 17:40:15.129263 sshd[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:40:15.140812 systemd-logind[1430]: New session 8 of user core.
Sep 4 17:40:15.147857 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 4 17:40:15.773574 sshd[1638]: pam_unix(sshd:session): session closed for user core
Sep 4 17:40:15.784249 systemd[1]: sshd@5-172.24.4.44:22-172.24.4.1:32914.service: Deactivated successfully.
Sep 4 17:40:15.787315 systemd[1]: session-8.scope: Deactivated successfully.
Sep 4 17:40:15.790919 systemd-logind[1430]: Session 8 logged out. Waiting for processes to exit.
Sep 4 17:40:15.799213 systemd[1]: Started sshd@6-172.24.4.44:22-172.24.4.1:41776.service - OpenSSH per-connection server daemon (172.24.4.1:41776).
Sep 4 17:40:15.802694 systemd-logind[1430]: Removed session 8.
Sep 4 17:40:17.126007 sshd[1645]: Accepted publickey for core from 172.24.4.1 port 41776 ssh2: RSA SHA256:SturRzFslRD/T8wREGvsPcKnS9Jm32+wyVbRetuFUDw
Sep 4 17:40:17.129510 sshd[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:40:17.143688 systemd-logind[1430]: New session 9 of user core.
Sep 4 17:40:17.153923 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 4 17:40:17.561963 sudo[1648]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 4 17:40:17.562925 sudo[1648]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 4 17:40:17.586709 sudo[1648]: pam_unix(sudo:session): session closed for user root
Sep 4 17:40:17.772437 sshd[1645]: pam_unix(sshd:session): session closed for user core
Sep 4 17:40:17.783773 systemd[1]: sshd@6-172.24.4.44:22-172.24.4.1:41776.service: Deactivated successfully.
Sep 4 17:40:17.787973 systemd[1]: session-9.scope: Deactivated successfully.
Sep 4 17:40:17.790508 systemd-logind[1430]: Session 9 logged out. Waiting for processes to exit.
Sep 4 17:40:17.799461 systemd[1]: Started sshd@7-172.24.4.44:22-172.24.4.1:41788.service - OpenSSH per-connection server daemon (172.24.4.1:41788).
Sep 4 17:40:17.803798 systemd-logind[1430]: Removed session 9.
Sep 4 17:40:18.820059 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 4 17:40:18.831148 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:40:19.184080 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:40:19.188244 (kubelet)[1663]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:40:19.226606 sshd[1653]: Accepted publickey for core from 172.24.4.1 port 41788 ssh2: RSA SHA256:SturRzFslRD/T8wREGvsPcKnS9Jm32+wyVbRetuFUDw
Sep 4 17:40:19.232575 sshd[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:40:19.243502 systemd-logind[1430]: New session 10 of user core.
Sep 4 17:40:19.249957 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 4 17:40:19.376070 kubelet[1663]: E0904 17:40:19.375882 1663 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 17:40:19.381917 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 17:40:19.382251 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 17:40:19.647747 sudo[1673]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 4 17:40:19.648498 sudo[1673]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 4 17:40:19.655081 sudo[1673]: pam_unix(sudo:session): session closed for user root
Sep 4 17:40:19.666937 sudo[1672]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Sep 4 17:40:19.667642 sudo[1672]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 4 17:40:19.702678 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Sep 4 17:40:19.708571 auditctl[1676]: No rules
Sep 4 17:40:19.709479 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 17:40:19.710284 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Sep 4 17:40:19.721766 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 4 17:40:19.796602 augenrules[1694]: No rules
Sep 4 17:40:19.798334 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 4 17:40:19.802091 sudo[1672]: pam_unix(sudo:session): session closed for user root
Sep 4 17:40:20.083128 sshd[1653]: pam_unix(sshd:session): session closed for user core
Sep 4 17:40:20.104180 systemd[1]: sshd@7-172.24.4.44:22-172.24.4.1:41788.service: Deactivated successfully.
Sep 4 17:40:20.108588 systemd[1]: session-10.scope: Deactivated successfully.
Sep 4 17:40:20.110255 systemd-logind[1430]: Session 10 logged out. Waiting for processes to exit.
Sep 4 17:40:20.120272 systemd[1]: Started sshd@8-172.24.4.44:22-172.24.4.1:41796.service - OpenSSH per-connection server daemon (172.24.4.1:41796).
Sep 4 17:40:20.123056 systemd-logind[1430]: Removed session 10.
Sep 4 17:40:21.348730 sshd[1702]: Accepted publickey for core from 172.24.4.1 port 41796 ssh2: RSA SHA256:SturRzFslRD/T8wREGvsPcKnS9Jm32+wyVbRetuFUDw
Sep 4 17:40:21.352231 sshd[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:40:21.363421 systemd-logind[1430]: New session 11 of user core.
Sep 4 17:40:21.372863 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 4 17:40:21.792970 sudo[1705]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 4 17:40:21.795119 sudo[1705]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 4 17:40:22.064597 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 4 17:40:22.066156 (dockerd)[1715]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 4 17:40:22.732678 dockerd[1715]: time="2024-09-04T17:40:22.732062680Z" level=info msg="Starting up"
Sep 4 17:40:22.808172 systemd[1]: var-lib-docker-metacopy\x2dcheck153038465-merged.mount: Deactivated successfully.
Sep 4 17:40:22.863021 dockerd[1715]: time="2024-09-04T17:40:22.862753050Z" level=info msg="Loading containers: start."
Sep 4 17:40:23.394879 kernel: Initializing XFRM netlink socket
Sep 4 17:40:23.435108 systemd-timesyncd[1370]: Network configuration changed, trying to establish connection.
Sep 4 17:40:23.467075 systemd-timesyncd[1370]: Contacted time server 5.39.80.51:123 (2.flatcar.pool.ntp.org).
Sep 4 17:40:23.467412 systemd-timesyncd[1370]: Initial clock synchronization to Wed 2024-09-04 17:40:23.798267 UTC.
Sep 4 17:40:23.520729 systemd-networkd[1367]: docker0: Link UP
Sep 4 17:40:23.544252 dockerd[1715]: time="2024-09-04T17:40:23.544069950Z" level=info msg="Loading containers: done."
Sep 4 17:40:23.670577 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3853008153-merged.mount: Deactivated successfully.
Sep 4 17:40:23.680464 dockerd[1715]: time="2024-09-04T17:40:23.680334043Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 4 17:40:23.680670 dockerd[1715]: time="2024-09-04T17:40:23.680558284Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Sep 4 17:40:23.680749 dockerd[1715]: time="2024-09-04T17:40:23.680673750Z" level=info msg="Daemon has completed initialization"
Sep 4 17:40:23.753615 dockerd[1715]: time="2024-09-04T17:40:23.753457516Z" level=info msg="API listen on /run/docker.sock"
Sep 4 17:40:23.755841 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 4 17:40:25.594786 containerd[1451]: time="2024-09-04T17:40:25.594728873Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.8\""
Sep 4 17:40:26.450143 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2306923088.mount: Deactivated successfully.
Sep 4 17:40:28.958572 containerd[1451]: time="2024-09-04T17:40:28.958431621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:40:28.961436 containerd[1451]: time="2024-09-04T17:40:28.961394923Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.8: active requests=0, bytes read=35232957"
Sep 4 17:40:28.962552 containerd[1451]: time="2024-09-04T17:40:28.962462273Z" level=info msg="ImageCreate event name:\"sha256:ea7e9c4af6a6f4f2fc0b86f81d102bf60167b3cbd4ce7d1545833b0283ab80b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:40:28.966936 containerd[1451]: time="2024-09-04T17:40:28.966900149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6f72fa926c9b05e10629fe1a092fd28dcd65b4fdfd0cc7bd55f85a57a6ba1fa5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:40:28.969483 containerd[1451]: time="2024-09-04T17:40:28.969449852Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.8\" with image id \"sha256:ea7e9c4af6a6f4f2fc0b86f81d102bf60167b3cbd4ce7d1545833b0283ab80b7\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6f72fa926c9b05e10629fe1a092fd28dcd65b4fdfd0cc7bd55f85a57a6ba1fa5\", size \"35229749\" in 3.37465789s"
Sep 4 17:40:28.969654 containerd[1451]: time="2024-09-04T17:40:28.969632530Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.8\" returns image reference \"sha256:ea7e9c4af6a6f4f2fc0b86f81d102bf60167b3cbd4ce7d1545833b0283ab80b7\""
Sep 4 17:40:28.999259 containerd[1451]: time="2024-09-04T17:40:28.999212250Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.8\""
Sep 4 17:40:29.570090 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 4 17:40:29.581098 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:40:29.739736 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:40:29.744243 (kubelet)[1915]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:40:29.800627 kubelet[1915]: E0904 17:40:29.800580 1915 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 17:40:29.803501 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 17:40:29.803666 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 17:40:31.582212 containerd[1451]: time="2024-09-04T17:40:31.581197909Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.8: active requests=0, bytes read=32206214"
Sep 4 17:40:31.582212 containerd[1451]: time="2024-09-04T17:40:31.581350708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:40:31.583221 containerd[1451]: time="2024-09-04T17:40:31.583179408Z" level=info msg="ImageCreate event name:\"sha256:b469e8ed7312f97f28340218ee5884606f9998ad73d3692a6078a2692253589a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:40:31.587080 containerd[1451]: time="2024-09-04T17:40:31.587049032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6f27d63ded20614c68554b477cd7a78eda78a498a92bfe8935cf964ca5b74d0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:40:31.588394 containerd[1451]: time="2024-09-04T17:40:31.588347819Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.8\" with image id \"sha256:b469e8ed7312f97f28340218ee5884606f9998ad73d3692a6078a2692253589a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6f27d63ded20614c68554b477cd7a78eda78a498a92bfe8935cf964ca5b74d0b\", size \"33756152\" in 2.588832855s"
Sep 4 17:40:31.588449 containerd[1451]: time="2024-09-04T17:40:31.588394299Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.8\" returns image reference \"sha256:b469e8ed7312f97f28340218ee5884606f9998ad73d3692a6078a2692253589a\""
Sep 4 17:40:31.614097 containerd[1451]: time="2024-09-04T17:40:31.614060081Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.8\""
Sep 4 17:40:34.301455 containerd[1451]: time="2024-09-04T17:40:34.301313553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:40:34.343027 containerd[1451]: time="2024-09-04T17:40:34.342849188Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.8: active requests=0, bytes read=17321515"
Sep 4 17:40:34.352280 containerd[1451]: time="2024-09-04T17:40:34.352142060Z" level=info msg="ImageCreate event name:\"sha256:e932331104a0d08ad33e8c298f0c2a9a23378869c8fc0915df299b611c196f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:40:34.362621 containerd[1451]: time="2024-09-04T17:40:34.362440543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:da74a66675d95e39ec25da5e70729da746d0fa0b15ee0da872ac980519bc28bd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:40:34.366182 containerd[1451]: time="2024-09-04T17:40:34.365942056Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.8\" with image id \"sha256:e932331104a0d08ad33e8c298f0c2a9a23378869c8fc0915df299b611c196f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:da74a66675d95e39ec25da5e70729da746d0fa0b15ee0da872ac980519bc28bd\", size \"18871471\" in 2.751681858s"
Sep 4 17:40:34.366182 containerd[1451]: time="2024-09-04T17:40:34.366026473Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.8\" returns image reference \"sha256:e932331104a0d08ad33e8c298f0c2a9a23378869c8fc0915df299b611c196f21\""
Sep 4 17:40:34.424213 containerd[1451]: time="2024-09-04T17:40:34.423999666Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.8\""
Sep 4 17:40:37.185103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3255406960.mount: Deactivated successfully.
Sep 4 17:40:37.484747 update_engine[1431]: I0904 17:40:37.484616 1431 update_attempter.cc:509] Updating boot flags...
Sep 4 17:40:37.543585 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1954)
Sep 4 17:40:38.118581 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1952)
Sep 4 17:40:38.666255 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1952)
Sep 4 17:40:38.717179 containerd[1451]: time="2024-09-04T17:40:38.717005488Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:40:38.734575 containerd[1451]: time="2024-09-04T17:40:38.734488620Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.8: active requests=0, bytes read=28600388"
Sep 4 17:40:38.769384 containerd[1451]: time="2024-09-04T17:40:38.768067551Z" level=info msg="ImageCreate event name:\"sha256:b6e10835ec72a48862d901a23b7c4c924300c3f6cfe89cd6031533b67e1f4e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:40:38.822147 containerd[1451]: time="2024-09-04T17:40:38.822035240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:559a093080f70ca863922f5e4bb90d6926d52653a91edb5b72c685ebb65f1858\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:40:38.824036 containerd[1451]: time="2024-09-04T17:40:38.823956634Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.8\" with image id \"sha256:b6e10835ec72a48862d901a23b7c4c924300c3f6cfe89cd6031533b67e1f4e54\", repo tag \"registry.k8s.io/kube-proxy:v1.29.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:559a093080f70ca863922f5e4bb90d6926d52653a91edb5b72c685ebb65f1858\", size \"28599399\" in 4.39977266s"
Sep 4 17:40:38.824237 containerd[1451]: time="2024-09-04T17:40:38.824194740Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.8\" returns image reference \"sha256:b6e10835ec72a48862d901a23b7c4c924300c3f6cfe89cd6031533b67e1f4e54\""
Sep 4 17:40:38.882892 containerd[1451]: time="2024-09-04T17:40:38.882771319Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Sep 4 17:40:39.679167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount505322962.mount: Deactivated successfully.
Sep 4 17:40:39.820021 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Sep 4 17:40:39.831120 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:40:39.993346 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:40:40.003937 (kubelet)[1987]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:40:40.310819 kubelet[1987]: E0904 17:40:40.310540 1987 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 17:40:40.314371 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 17:40:40.314543 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 17:40:41.731938 containerd[1451]: time="2024-09-04T17:40:41.731719085Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:40:41.734371 containerd[1451]: time="2024-09-04T17:40:41.733953585Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769"
Sep 4 17:40:41.736549 containerd[1451]: time="2024-09-04T17:40:41.735850193Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:40:41.740329 containerd[1451]: time="2024-09-04T17:40:41.740301282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:40:41.741825 containerd[1451]: time="2024-09-04T17:40:41.741799990Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.858957974s"
Sep 4 17:40:41.741923 containerd[1451]: time="2024-09-04T17:40:41.741906262Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Sep 4 17:40:41.771332 containerd[1451]: time="2024-09-04T17:40:41.771268963Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Sep 4 17:40:42.401983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1748894469.mount: Deactivated successfully.
Sep 4 17:40:42.410654 containerd[1451]: time="2024-09-04T17:40:42.410489246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:40:42.415151 containerd[1451]: time="2024-09-04T17:40:42.414552685Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298"
Sep 4 17:40:42.416727 containerd[1451]: time="2024-09-04T17:40:42.416657644Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:40:42.426038 containerd[1451]: time="2024-09-04T17:40:42.425963004Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:40:42.429002 containerd[1451]: time="2024-09-04T17:40:42.428195931Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 656.849123ms"
Sep 4 17:40:42.429002 containerd[1451]: time="2024-09-04T17:40:42.428286689Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Sep 4 17:40:42.486893 containerd[1451]: time="2024-09-04T17:40:42.486816436Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Sep 4 17:40:43.228493 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4280685533.mount: Deactivated successfully.
Sep 4 17:40:46.457967 containerd[1451]: time="2024-09-04T17:40:46.457857447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:40:46.459472 containerd[1451]: time="2024-09-04T17:40:46.459416029Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633"
Sep 4 17:40:46.460697 containerd[1451]: time="2024-09-04T17:40:46.460659461Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:40:46.464668 containerd[1451]: time="2024-09-04T17:40:46.464626126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:40:46.466253 containerd[1451]: time="2024-09-04T17:40:46.466206547Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.979314002s"
Sep 4 17:40:46.466312 containerd[1451]: time="2024-09-04T17:40:46.466253639Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference
\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Sep 4 17:40:50.319932 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Sep 4 17:40:50.330093 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:40:50.729713 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:40:50.731302 (kubelet)[2154]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:40:50.880038 kubelet[2154]: E0904 17:40:50.879995 2154 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:40:50.884189 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:40:50.884327 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:40:51.455483 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:40:51.471103 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:40:51.525377 systemd[1]: Reloading requested from client PID 2168 ('systemctl') (unit session-11.scope)... Sep 4 17:40:51.525403 systemd[1]: Reloading... Sep 4 17:40:51.623622 zram_generator::config[2202]: No configuration found. Sep 4 17:40:51.784752 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:40:51.873358 systemd[1]: Reloading finished in 347 ms. 
Sep 4 17:40:51.921307 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 4 17:40:51.921387 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 4 17:40:51.921812 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:40:51.927213 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:40:52.112878 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:40:52.112887 (kubelet)[2271]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:40:52.622282 kubelet[2271]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:40:52.622282 kubelet[2271]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:40:52.622282 kubelet[2271]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 4 17:40:52.622282 kubelet[2271]: I0904 17:40:52.622223 2271 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:40:53.262977 kubelet[2271]: I0904 17:40:53.262913 2271 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Sep 4 17:40:53.262977 kubelet[2271]: I0904 17:40:53.262948 2271 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:40:53.263294 kubelet[2271]: I0904 17:40:53.263228 2271 server.go:919] "Client rotation is on, will bootstrap in background" Sep 4 17:40:53.323397 kubelet[2271]: E0904 17:40:53.322496 2271 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.44:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.44:6443: connect: connection refused Sep 4 17:40:53.331494 kubelet[2271]: I0904 17:40:53.331438 2271 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:40:53.360901 kubelet[2271]: I0904 17:40:53.360806 2271 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 17:40:53.370570 kubelet[2271]: I0904 17:40:53.370449 2271 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:40:53.372976 kubelet[2271]: I0904 17:40:53.372895 2271 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:40:53.372976 kubelet[2271]: I0904 17:40:53.372967 2271 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:40:53.373303 kubelet[2271]: I0904 17:40:53.372996 2271 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:40:53.373303 kubelet[2271]: I0904 
17:40:53.373210 2271 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:40:53.373443 kubelet[2271]: I0904 17:40:53.373397 2271 kubelet.go:396] "Attempting to sync node with API server" Sep 4 17:40:53.373443 kubelet[2271]: I0904 17:40:53.373432 2271 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:40:53.373605 kubelet[2271]: I0904 17:40:53.373495 2271 kubelet.go:312] "Adding apiserver pod source" Sep 4 17:40:53.373605 kubelet[2271]: I0904 17:40:53.373571 2271 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:40:53.377587 kubelet[2271]: W0904 17:40:53.377063 2271 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.24.4.44:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.44:6443: connect: connection refused Sep 4 17:40:53.377587 kubelet[2271]: E0904 17:40:53.377164 2271 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.44:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.44:6443: connect: connection refused Sep 4 17:40:53.377587 kubelet[2271]: W0904 17:40:53.377306 2271 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.24.4.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975-2-1-d-945344e89d.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.44:6443: connect: connection refused Sep 4 17:40:53.377587 kubelet[2271]: E0904 17:40:53.377402 2271 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975-2-1-d-945344e89d.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.44:6443: connect: connection refused Sep 4 17:40:53.378181 kubelet[2271]: I0904 17:40:53.378148 2271 
kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Sep 4 17:40:53.389310 kubelet[2271]: I0904 17:40:53.389273 2271 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 17:40:53.392667 kubelet[2271]: W0904 17:40:53.392591 2271 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 4 17:40:53.394013 kubelet[2271]: I0904 17:40:53.393797 2271 server.go:1256] "Started kubelet" Sep 4 17:40:53.394401 kubelet[2271]: I0904 17:40:53.394212 2271 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:40:53.405075 kubelet[2271]: I0904 17:40:53.402924 2271 server.go:461] "Adding debug handlers to kubelet server" Sep 4 17:40:53.412420 kubelet[2271]: I0904 17:40:53.412350 2271 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 17:40:53.414576 kubelet[2271]: I0904 17:40:53.412936 2271 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:40:53.414576 kubelet[2271]: I0904 17:40:53.413802 2271 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:40:53.419910 kubelet[2271]: E0904 17:40:53.419840 2271 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.44:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.44:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3975-2-1-d-945344e89d.novalocal.17f21b53aa8a60ea default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3975-2-1-d-945344e89d.novalocal,UID:ci-3975-2-1-d-945344e89d.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-3975-2-1-d-945344e89d.novalocal,},FirstTimestamp:2024-09-04 17:40:53.393752298 +0000 UTC m=+1.275031043,LastTimestamp:2024-09-04 17:40:53.393752298 +0000 UTC m=+1.275031043,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3975-2-1-d-945344e89d.novalocal,}" Sep 4 17:40:53.420571 kubelet[2271]: I0904 17:40:53.420472 2271 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:40:53.421187 kubelet[2271]: I0904 17:40:53.421144 2271 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Sep 4 17:40:53.421292 kubelet[2271]: I0904 17:40:53.421224 2271 reconciler_new.go:29] "Reconciler: start to sync state" Sep 4 17:40:53.421743 kubelet[2271]: W0904 17:40:53.421681 2271 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.24.4.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.44:6443: connect: connection refused Sep 4 17:40:53.421849 kubelet[2271]: E0904 17:40:53.421753 2271 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.44:6443: connect: connection refused Sep 4 17:40:53.421927 kubelet[2271]: E0904 17:40:53.421849 2271 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975-2-1-d-945344e89d.novalocal?timeout=10s\": dial tcp 172.24.4.44:6443: connect: connection refused" interval="200ms" Sep 4 17:40:53.423600 kubelet[2271]: I0904 17:40:53.423497 2271 factory.go:221] Registration of the systemd container factory successfully Sep 4 17:40:53.424381 kubelet[2271]: E0904 17:40:53.424346 2271 
kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:40:53.424823 kubelet[2271]: I0904 17:40:53.424747 2271 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 17:40:53.427918 kubelet[2271]: I0904 17:40:53.427880 2271 factory.go:221] Registration of the containerd container factory successfully Sep 4 17:40:53.461888 kubelet[2271]: I0904 17:40:53.461833 2271 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:40:53.461888 kubelet[2271]: I0904 17:40:53.461855 2271 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:40:53.461888 kubelet[2271]: I0904 17:40:53.461870 2271 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:40:53.468659 kubelet[2271]: I0904 17:40:53.468561 2271 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:40:53.473023 kubelet[2271]: I0904 17:40:53.471292 2271 policy_none.go:49] "None policy: Start" Sep 4 17:40:53.473692 kubelet[2271]: I0904 17:40:53.473307 2271 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 4 17:40:53.473692 kubelet[2271]: I0904 17:40:53.473363 2271 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:40:53.473692 kubelet[2271]: I0904 17:40:53.473389 2271 kubelet.go:2329] "Starting kubelet main sync loop" Sep 4 17:40:53.473692 kubelet[2271]: E0904 17:40:53.473447 2271 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:40:53.476286 kubelet[2271]: W0904 17:40:53.476265 2271 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.24.4.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.44:6443: connect: connection refused Sep 4 17:40:53.476447 kubelet[2271]: E0904 17:40:53.476419 2271 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.44:6443: connect: connection refused Sep 4 17:40:53.476937 kubelet[2271]: I0904 17:40:53.476921 2271 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 17:40:53.477168 kubelet[2271]: I0904 17:40:53.477064 2271 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:40:53.490080 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 4 17:40:53.501217 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 4 17:40:53.505938 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 4 17:40:53.514761 kubelet[2271]: I0904 17:40:53.513363 2271 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:40:53.514761 kubelet[2271]: I0904 17:40:53.513677 2271 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:40:53.518174 kubelet[2271]: E0904 17:40:53.518106 2271 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3975-2-1-d-945344e89d.novalocal\" not found" Sep 4 17:40:53.522858 kubelet[2271]: I0904 17:40:53.522556 2271 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975-2-1-d-945344e89d.novalocal" Sep 4 17:40:53.522858 kubelet[2271]: E0904 17:40:53.522840 2271 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.44:6443/api/v1/nodes\": dial tcp 172.24.4.44:6443: connect: connection refused" node="ci-3975-2-1-d-945344e89d.novalocal" Sep 4 17:40:53.574491 kubelet[2271]: I0904 17:40:53.574461 2271 topology_manager.go:215] "Topology Admit Handler" podUID="b0c821bd565aea92700499e01e6a2737" podNamespace="kube-system" podName="kube-scheduler-ci-3975-2-1-d-945344e89d.novalocal" Sep 4 17:40:53.577384 kubelet[2271]: I0904 17:40:53.577336 2271 topology_manager.go:215] "Topology Admit Handler" podUID="87841a2667b336672d0d2c4e81321f93" podNamespace="kube-system" podName="kube-apiserver-ci-3975-2-1-d-945344e89d.novalocal" Sep 4 17:40:53.581080 kubelet[2271]: I0904 17:40:53.580825 2271 topology_manager.go:215] "Topology Admit Handler" podUID="64403f817756a279a369029eec95b870" podNamespace="kube-system" podName="kube-controller-manager-ci-3975-2-1-d-945344e89d.novalocal" Sep 4 17:40:53.597906 systemd[1]: Created slice kubepods-burstable-podb0c821bd565aea92700499e01e6a2737.slice - libcontainer container kubepods-burstable-podb0c821bd565aea92700499e01e6a2737.slice. 
Sep 4 17:40:53.622164 kubelet[2271]: I0904 17:40:53.622114 2271 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/87841a2667b336672d0d2c4e81321f93-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975-2-1-d-945344e89d.novalocal\" (UID: \"87841a2667b336672d0d2c4e81321f93\") " pod="kube-system/kube-apiserver-ci-3975-2-1-d-945344e89d.novalocal" Sep 4 17:40:53.623614 kubelet[2271]: E0904 17:40:53.623486 2271 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975-2-1-d-945344e89d.novalocal?timeout=10s\": dial tcp 172.24.4.44:6443: connect: connection refused" interval="400ms" Sep 4 17:40:53.624480 kubelet[2271]: I0904 17:40:53.623786 2271 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/64403f817756a279a369029eec95b870-flexvolume-dir\") pod \"kube-controller-manager-ci-3975-2-1-d-945344e89d.novalocal\" (UID: \"64403f817756a279a369029eec95b870\") " pod="kube-system/kube-controller-manager-ci-3975-2-1-d-945344e89d.novalocal" Sep 4 17:40:53.624480 kubelet[2271]: I0904 17:40:53.624252 2271 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/64403f817756a279a369029eec95b870-k8s-certs\") pod \"kube-controller-manager-ci-3975-2-1-d-945344e89d.novalocal\" (UID: \"64403f817756a279a369029eec95b870\") " pod="kube-system/kube-controller-manager-ci-3975-2-1-d-945344e89d.novalocal" Sep 4 17:40:53.624732 kubelet[2271]: I0904 17:40:53.624711 2271 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/64403f817756a279a369029eec95b870-kubeconfig\") pod 
\"kube-controller-manager-ci-3975-2-1-d-945344e89d.novalocal\" (UID: \"64403f817756a279a369029eec95b870\") " pod="kube-system/kube-controller-manager-ci-3975-2-1-d-945344e89d.novalocal" Sep 4 17:40:53.625003 kubelet[2271]: I0904 17:40:53.624943 2271 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b0c821bd565aea92700499e01e6a2737-kubeconfig\") pod \"kube-scheduler-ci-3975-2-1-d-945344e89d.novalocal\" (UID: \"b0c821bd565aea92700499e01e6a2737\") " pod="kube-system/kube-scheduler-ci-3975-2-1-d-945344e89d.novalocal" Sep 4 17:40:53.625103 kubelet[2271]: I0904 17:40:53.625062 2271 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/87841a2667b336672d0d2c4e81321f93-ca-certs\") pod \"kube-apiserver-ci-3975-2-1-d-945344e89d.novalocal\" (UID: \"87841a2667b336672d0d2c4e81321f93\") " pod="kube-system/kube-apiserver-ci-3975-2-1-d-945344e89d.novalocal" Sep 4 17:40:53.625179 kubelet[2271]: I0904 17:40:53.625131 2271 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/87841a2667b336672d0d2c4e81321f93-k8s-certs\") pod \"kube-apiserver-ci-3975-2-1-d-945344e89d.novalocal\" (UID: \"87841a2667b336672d0d2c4e81321f93\") " pod="kube-system/kube-apiserver-ci-3975-2-1-d-945344e89d.novalocal" Sep 4 17:40:53.625262 kubelet[2271]: I0904 17:40:53.625192 2271 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/64403f817756a279a369029eec95b870-ca-certs\") pod \"kube-controller-manager-ci-3975-2-1-d-945344e89d.novalocal\" (UID: \"64403f817756a279a369029eec95b870\") " pod="kube-system/kube-controller-manager-ci-3975-2-1-d-945344e89d.novalocal" Sep 4 17:40:53.625262 kubelet[2271]: I0904 17:40:53.625259 2271 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/64403f817756a279a369029eec95b870-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975-2-1-d-945344e89d.novalocal\" (UID: \"64403f817756a279a369029eec95b870\") " pod="kube-system/kube-controller-manager-ci-3975-2-1-d-945344e89d.novalocal" Sep 4 17:40:53.633169 systemd[1]: Created slice kubepods-burstable-pod87841a2667b336672d0d2c4e81321f93.slice - libcontainer container kubepods-burstable-pod87841a2667b336672d0d2c4e81321f93.slice. Sep 4 17:40:53.645490 systemd[1]: Created slice kubepods-burstable-pod64403f817756a279a369029eec95b870.slice - libcontainer container kubepods-burstable-pod64403f817756a279a369029eec95b870.slice. Sep 4 17:40:53.726958 kubelet[2271]: I0904 17:40:53.726290 2271 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975-2-1-d-945344e89d.novalocal" Sep 4 17:40:53.727221 kubelet[2271]: E0904 17:40:53.727071 2271 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.44:6443/api/v1/nodes\": dial tcp 172.24.4.44:6443: connect: connection refused" node="ci-3975-2-1-d-945344e89d.novalocal" Sep 4 17:40:53.927992 containerd[1451]: time="2024-09-04T17:40:53.927645531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975-2-1-d-945344e89d.novalocal,Uid:b0c821bd565aea92700499e01e6a2737,Namespace:kube-system,Attempt:0,}" Sep 4 17:40:53.953728 containerd[1451]: time="2024-09-04T17:40:53.952112512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975-2-1-d-945344e89d.novalocal,Uid:87841a2667b336672d0d2c4e81321f93,Namespace:kube-system,Attempt:0,}" Sep 4 17:40:53.954441 containerd[1451]: time="2024-09-04T17:40:53.954193956Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-3975-2-1-d-945344e89d.novalocal,Uid:64403f817756a279a369029eec95b870,Namespace:kube-system,Attempt:0,}" Sep 4 17:40:54.024799 kubelet[2271]: E0904 17:40:54.024656 2271 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975-2-1-d-945344e89d.novalocal?timeout=10s\": dial tcp 172.24.4.44:6443: connect: connection refused" interval="800ms" Sep 4 17:40:54.132605 kubelet[2271]: I0904 17:40:54.132007 2271 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975-2-1-d-945344e89d.novalocal" Sep 4 17:40:54.133158 kubelet[2271]: E0904 17:40:54.133109 2271 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.44:6443/api/v1/nodes\": dial tcp 172.24.4.44:6443: connect: connection refused" node="ci-3975-2-1-d-945344e89d.novalocal" Sep 4 17:40:54.381203 kubelet[2271]: W0904 17:40:54.381020 2271 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.24.4.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975-2-1-d-945344e89d.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.44:6443: connect: connection refused Sep 4 17:40:54.381675 kubelet[2271]: E0904 17:40:54.381625 2271 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975-2-1-d-945344e89d.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.44:6443: connect: connection refused Sep 4 17:40:54.457566 kubelet[2271]: W0904 17:40:54.457415 2271 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.24.4.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.44:6443: connect: connection refused Sep 4 
17:40:54.457566 kubelet[2271]: E0904 17:40:54.457499 2271 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.44:6443: connect: connection refused Sep 4 17:40:54.616290 kubelet[2271]: W0904 17:40:54.616063 2271 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.24.4.44:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.44:6443: connect: connection refused Sep 4 17:40:54.616290 kubelet[2271]: E0904 17:40:54.616235 2271 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.44:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.44:6443: connect: connection refused Sep 4 17:40:54.726423 kubelet[2271]: W0904 17:40:54.725288 2271 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.24.4.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.44:6443: connect: connection refused Sep 4 17:40:54.727439 kubelet[2271]: E0904 17:40:54.727389 2271 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.44:6443: connect: connection refused Sep 4 17:40:54.826346 kubelet[2271]: E0904 17:40:54.826257 2271 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975-2-1-d-945344e89d.novalocal?timeout=10s\": dial tcp 172.24.4.44:6443: connect: connection refused" interval="1.6s" Sep 4 17:40:54.938264 kubelet[2271]: I0904 
17:40:54.937496 2271 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975-2-1-d-945344e89d.novalocal" Sep 4 17:40:54.938264 kubelet[2271]: E0904 17:40:54.938217 2271 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.44:6443/api/v1/nodes\": dial tcp 172.24.4.44:6443: connect: connection refused" node="ci-3975-2-1-d-945344e89d.novalocal" Sep 4 17:40:55.195869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1173619094.mount: Deactivated successfully. Sep 4 17:40:55.209270 containerd[1451]: time="2024-09-04T17:40:55.209097662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:40:55.215094 containerd[1451]: time="2024-09-04T17:40:55.214992912Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Sep 4 17:40:55.218568 containerd[1451]: time="2024-09-04T17:40:55.216651795Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:40:55.219850 containerd[1451]: time="2024-09-04T17:40:55.219776293Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:40:55.220828 containerd[1451]: time="2024-09-04T17:40:55.220727082Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:40:55.223051 containerd[1451]: time="2024-09-04T17:40:55.222991199Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:40:55.226946 containerd[1451]: time="2024-09-04T17:40:55.226873568Z" level=info msg="ImageCreate event 
name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:40:55.231654 containerd[1451]: time="2024-09-04T17:40:55.231571423Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.276734311s" Sep 4 17:40:55.232823 containerd[1451]: time="2024-09-04T17:40:55.232769570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:40:55.238414 containerd[1451]: time="2024-09-04T17:40:55.238362943Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.2860697s" Sep 4 17:40:55.239895 containerd[1451]: time="2024-09-04T17:40:55.239752615Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.301482794s" Sep 4 17:40:55.328499 kubelet[2271]: E0904 17:40:55.327337 2271 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate 
signing request: Post "https://172.24.4.44:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.44:6443: connect: connection refused Sep 4 17:40:55.879475 containerd[1451]: time="2024-09-04T17:40:55.879351145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:40:55.879475 containerd[1451]: time="2024-09-04T17:40:55.879411424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:40:55.879475 containerd[1451]: time="2024-09-04T17:40:55.879436921Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:40:55.879746 containerd[1451]: time="2024-09-04T17:40:55.879454076Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:40:55.882271 containerd[1451]: time="2024-09-04T17:40:55.882198453Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:40:55.882363 containerd[1451]: time="2024-09-04T17:40:55.882259814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:40:55.882363 containerd[1451]: time="2024-09-04T17:40:55.882279275Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:40:55.882363 containerd[1451]: time="2024-09-04T17:40:55.882293342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:40:55.893541 containerd[1451]: time="2024-09-04T17:40:55.892344206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:40:55.893864 containerd[1451]: time="2024-09-04T17:40:55.893764446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:40:55.893864 containerd[1451]: time="2024-09-04T17:40:55.893839714Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:40:55.894158 containerd[1451]: time="2024-09-04T17:40:55.894039218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:40:55.920747 systemd[1]: Started cri-containerd-6a2ab21e46d78a1b18cd01389d10e79de40ac47fe03449bbe6061880c50772d2.scope - libcontainer container 6a2ab21e46d78a1b18cd01389d10e79de40ac47fe03449bbe6061880c50772d2. Sep 4 17:40:55.922016 systemd[1]: Started cri-containerd-e528b1152f893a8ab01bccaf372cc4224686f106dc534ad0a88d688c69fbf33f.scope - libcontainer container e528b1152f893a8ab01bccaf372cc4224686f106dc534ad0a88d688c69fbf33f. Sep 4 17:40:55.934641 systemd[1]: Started cri-containerd-079323f00c34b4b44ac5664fb5f9e1dc31e38c19f6cd667f9f543bd84d3da871.scope - libcontainer container 079323f00c34b4b44ac5664fb5f9e1dc31e38c19f6cd667f9f543bd84d3da871. 
Sep 4 17:40:56.120104 containerd[1451]: time="2024-09-04T17:40:56.119247810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975-2-1-d-945344e89d.novalocal,Uid:64403f817756a279a369029eec95b870,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a2ab21e46d78a1b18cd01389d10e79de40ac47fe03449bbe6061880c50772d2\"" Sep 4 17:40:56.120104 containerd[1451]: time="2024-09-04T17:40:56.119971896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975-2-1-d-945344e89d.novalocal,Uid:b0c821bd565aea92700499e01e6a2737,Namespace:kube-system,Attempt:0,} returns sandbox id \"e528b1152f893a8ab01bccaf372cc4224686f106dc534ad0a88d688c69fbf33f\"" Sep 4 17:40:56.125443 containerd[1451]: time="2024-09-04T17:40:56.125305237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975-2-1-d-945344e89d.novalocal,Uid:87841a2667b336672d0d2c4e81321f93,Namespace:kube-system,Attempt:0,} returns sandbox id \"079323f00c34b4b44ac5664fb5f9e1dc31e38c19f6cd667f9f543bd84d3da871\"" Sep 4 17:40:56.135708 containerd[1451]: time="2024-09-04T17:40:56.134292075Z" level=info msg="CreateContainer within sandbox \"6a2ab21e46d78a1b18cd01389d10e79de40ac47fe03449bbe6061880c50772d2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 17:40:56.136038 containerd[1451]: time="2024-09-04T17:40:56.136009387Z" level=info msg="CreateContainer within sandbox \"079323f00c34b4b44ac5664fb5f9e1dc31e38c19f6cd667f9f543bd84d3da871\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 17:40:56.138061 containerd[1451]: time="2024-09-04T17:40:56.137949623Z" level=info msg="CreateContainer within sandbox \"e528b1152f893a8ab01bccaf372cc4224686f106dc534ad0a88d688c69fbf33f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 17:40:56.409332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2461695093.mount: Deactivated successfully. 
Sep 4 17:40:56.419842 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount468861739.mount: Deactivated successfully. Sep 4 17:40:56.429097 kubelet[2271]: E0904 17:40:56.429030 2271 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975-2-1-d-945344e89d.novalocal?timeout=10s\": dial tcp 172.24.4.44:6443: connect: connection refused" interval="3.2s" Sep 4 17:40:56.435321 containerd[1451]: time="2024-09-04T17:40:56.435226756Z" level=info msg="CreateContainer within sandbox \"6a2ab21e46d78a1b18cd01389d10e79de40ac47fe03449bbe6061880c50772d2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7cc541236fb3d45961c3ef0cb6cd0037e84522314b28b342ec447ba6a28c7910\"" Sep 4 17:40:56.437810 containerd[1451]: time="2024-09-04T17:40:56.437733504Z" level=info msg="StartContainer for \"7cc541236fb3d45961c3ef0cb6cd0037e84522314b28b342ec447ba6a28c7910\"" Sep 4 17:40:56.450815 containerd[1451]: time="2024-09-04T17:40:56.450587727Z" level=info msg="CreateContainer within sandbox \"e528b1152f893a8ab01bccaf372cc4224686f106dc534ad0a88d688c69fbf33f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6bf5a93c2081eba6baf5f2ee00c3035db257eac20052de002b76dd9e964382bb\"" Sep 4 17:40:56.454565 containerd[1451]: time="2024-09-04T17:40:56.452838072Z" level=info msg="StartContainer for \"6bf5a93c2081eba6baf5f2ee00c3035db257eac20052de002b76dd9e964382bb\"" Sep 4 17:40:56.455883 containerd[1451]: time="2024-09-04T17:40:56.455823584Z" level=info msg="CreateContainer within sandbox \"079323f00c34b4b44ac5664fb5f9e1dc31e38c19f6cd667f9f543bd84d3da871\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ef5af40698b7bee2d8f52e3b8cef01243e0d34d31ac954fc6234a3b2e12ce5b0\"" Sep 4 17:40:56.457154 containerd[1451]: time="2024-09-04T17:40:56.457077634Z" level=info msg="StartContainer for 
\"ef5af40698b7bee2d8f52e3b8cef01243e0d34d31ac954fc6234a3b2e12ce5b0\"" Sep 4 17:40:56.510730 systemd[1]: Started cri-containerd-7cc541236fb3d45961c3ef0cb6cd0037e84522314b28b342ec447ba6a28c7910.scope - libcontainer container 7cc541236fb3d45961c3ef0cb6cd0037e84522314b28b342ec447ba6a28c7910. Sep 4 17:40:56.525748 systemd[1]: Started cri-containerd-6bf5a93c2081eba6baf5f2ee00c3035db257eac20052de002b76dd9e964382bb.scope - libcontainer container 6bf5a93c2081eba6baf5f2ee00c3035db257eac20052de002b76dd9e964382bb. Sep 4 17:40:56.528955 systemd[1]: Started cri-containerd-ef5af40698b7bee2d8f52e3b8cef01243e0d34d31ac954fc6234a3b2e12ce5b0.scope - libcontainer container ef5af40698b7bee2d8f52e3b8cef01243e0d34d31ac954fc6234a3b2e12ce5b0. Sep 4 17:40:56.541564 kubelet[2271]: I0904 17:40:56.540891 2271 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975-2-1-d-945344e89d.novalocal" Sep 4 17:40:56.541564 kubelet[2271]: E0904 17:40:56.541346 2271 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.44:6443/api/v1/nodes\": dial tcp 172.24.4.44:6443: connect: connection refused" node="ci-3975-2-1-d-945344e89d.novalocal" Sep 4 17:40:56.549145 kubelet[2271]: W0904 17:40:56.549103 2271 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.24.4.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.44:6443: connect: connection refused Sep 4 17:40:56.549213 kubelet[2271]: E0904 17:40:56.549157 2271 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.44:6443: connect: connection refused Sep 4 17:40:56.589467 kubelet[2271]: W0904 17:40:56.589421 2271 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: 
Get "https://172.24.4.44:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.44:6443: connect: connection refused Sep 4 17:40:56.589467 kubelet[2271]: E0904 17:40:56.589470 2271 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.44:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.44:6443: connect: connection refused Sep 4 17:40:56.608775 containerd[1451]: time="2024-09-04T17:40:56.607893764Z" level=info msg="StartContainer for \"7cc541236fb3d45961c3ef0cb6cd0037e84522314b28b342ec447ba6a28c7910\" returns successfully" Sep 4 17:40:56.621074 containerd[1451]: time="2024-09-04T17:40:56.621030321Z" level=info msg="StartContainer for \"ef5af40698b7bee2d8f52e3b8cef01243e0d34d31ac954fc6234a3b2e12ce5b0\" returns successfully" Sep 4 17:40:56.658377 containerd[1451]: time="2024-09-04T17:40:56.658316350Z" level=info msg="StartContainer for \"6bf5a93c2081eba6baf5f2ee00c3035db257eac20052de002b76dd9e964382bb\" returns successfully" Sep 4 17:40:57.179250 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3121682939.mount: Deactivated successfully. 
Sep 4 17:40:57.261080 kubelet[2271]: W0904 17:40:57.261013 2271 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.24.4.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975-2-1-d-945344e89d.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.44:6443: connect: connection refused Sep 4 17:40:57.261080 kubelet[2271]: E0904 17:40:57.261078 2271 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975-2-1-d-945344e89d.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.44:6443: connect: connection refused Sep 4 17:40:57.355275 kubelet[2271]: W0904 17:40:57.355214 2271 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.24.4.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.44:6443: connect: connection refused Sep 4 17:40:57.355275 kubelet[2271]: E0904 17:40:57.355275 2271 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.44:6443: connect: connection refused Sep 4 17:40:59.747258 kubelet[2271]: I0904 17:40:59.747197 2271 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975-2-1-d-945344e89d.novalocal" Sep 4 17:41:00.072386 kubelet[2271]: E0904 17:41:00.072350 2271 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3975-2-1-d-945344e89d.novalocal\" not found" node="ci-3975-2-1-d-945344e89d.novalocal" Sep 4 17:41:00.130281 kubelet[2271]: I0904 17:41:00.130040 2271 kubelet_node_status.go:76] "Successfully registered node" node="ci-3975-2-1-d-945344e89d.novalocal" Sep 4 17:41:00.153452 kubelet[2271]: 
E0904 17:41:00.153363 2271 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975-2-1-d-945344e89d.novalocal\" not found" Sep 4 17:41:00.254134 kubelet[2271]: E0904 17:41:00.254075 2271 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975-2-1-d-945344e89d.novalocal\" not found" Sep 4 17:41:00.355155 kubelet[2271]: E0904 17:41:00.354956 2271 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975-2-1-d-945344e89d.novalocal\" not found" Sep 4 17:41:00.456161 kubelet[2271]: E0904 17:41:00.456077 2271 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975-2-1-d-945344e89d.novalocal\" not found" Sep 4 17:41:00.556860 kubelet[2271]: E0904 17:41:00.556766 2271 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975-2-1-d-945344e89d.novalocal\" not found" Sep 4 17:41:00.657252 kubelet[2271]: E0904 17:41:00.657037 2271 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975-2-1-d-945344e89d.novalocal\" not found" Sep 4 17:41:00.758142 kubelet[2271]: E0904 17:41:00.758070 2271 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975-2-1-d-945344e89d.novalocal\" not found" Sep 4 17:41:00.858376 kubelet[2271]: E0904 17:41:00.858281 2271 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975-2-1-d-945344e89d.novalocal\" not found" Sep 4 17:41:00.959038 kubelet[2271]: E0904 17:41:00.958557 2271 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975-2-1-d-945344e89d.novalocal\" not found" Sep 4 17:41:01.058965 kubelet[2271]: E0904 17:41:01.058894 2271 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975-2-1-d-945344e89d.novalocal\" not found" Sep 4 17:41:01.159899 kubelet[2271]: E0904 17:41:01.159850 2271 
kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975-2-1-d-945344e89d.novalocal\" not found" Sep 4 17:41:01.260922 kubelet[2271]: E0904 17:41:01.260795 2271 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975-2-1-d-945344e89d.novalocal\" not found" Sep 4 17:41:01.361138 kubelet[2271]: E0904 17:41:01.361092 2271 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975-2-1-d-945344e89d.novalocal\" not found" Sep 4 17:41:01.462111 kubelet[2271]: E0904 17:41:01.462055 2271 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975-2-1-d-945344e89d.novalocal\" not found" Sep 4 17:41:01.562858 kubelet[2271]: E0904 17:41:01.562605 2271 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975-2-1-d-945344e89d.novalocal\" not found" Sep 4 17:41:01.663383 kubelet[2271]: E0904 17:41:01.663325 2271 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975-2-1-d-945344e89d.novalocal\" not found" Sep 4 17:41:01.764226 kubelet[2271]: E0904 17:41:01.764168 2271 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975-2-1-d-945344e89d.novalocal\" not found" Sep 4 17:41:01.865318 kubelet[2271]: E0904 17:41:01.865248 2271 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975-2-1-d-945344e89d.novalocal\" not found" Sep 4 17:41:02.385741 kubelet[2271]: I0904 17:41:02.385178 2271 apiserver.go:52] "Watching apiserver" Sep 4 17:41:02.422103 kubelet[2271]: I0904 17:41:02.421994 2271 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 4 17:41:02.830108 systemd[1]: Reloading requested from client PID 2546 ('systemctl') (unit session-11.scope)... Sep 4 17:41:02.830135 systemd[1]: Reloading... 
Sep 4 17:41:02.939633 zram_generator::config[2586]: No configuration found. Sep 4 17:41:03.101681 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:41:03.210063 systemd[1]: Reloading finished in 379 ms. Sep 4 17:41:03.264546 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:41:03.266570 kubelet[2271]: I0904 17:41:03.265407 2271 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:41:03.280911 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 17:41:03.281208 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:41:03.281280 systemd[1]: kubelet.service: Consumed 1.453s CPU time, 110.0M memory peak, 0B memory swap peak. Sep 4 17:41:03.286068 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:41:03.695232 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:41:03.716075 (kubelet)[2647]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:41:03.940956 kubelet[2647]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:41:03.940956 kubelet[2647]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:41:03.940956 kubelet[2647]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:41:03.942218 kubelet[2647]: I0904 17:41:03.941440 2647 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:41:03.950278 kubelet[2647]: I0904 17:41:03.949333 2647 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Sep 4 17:41:03.950278 kubelet[2647]: I0904 17:41:03.949369 2647 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:41:03.950278 kubelet[2647]: I0904 17:41:03.949634 2647 server.go:919] "Client rotation is on, will bootstrap in background" Sep 4 17:41:03.951929 kubelet[2647]: I0904 17:41:03.951496 2647 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 17:41:03.958687 kubelet[2647]: I0904 17:41:03.956970 2647 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:41:03.985601 kubelet[2647]: I0904 17:41:03.985558 2647 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 17:41:03.986054 kubelet[2647]: I0904 17:41:03.986041 2647 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:41:03.986457 kubelet[2647]: I0904 17:41:03.986440 2647 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:41:03.987419 kubelet[2647]: I0904 17:41:03.986640 2647 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:41:03.987419 kubelet[2647]: I0904 17:41:03.986659 2647 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:41:03.987419 kubelet[2647]: I0904 
17:41:03.986703 2647 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:41:03.987419 kubelet[2647]: I0904 17:41:03.986799 2647 kubelet.go:396] "Attempting to sync node with API server" Sep 4 17:41:03.987419 kubelet[2647]: I0904 17:41:03.986816 2647 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:41:03.987419 kubelet[2647]: I0904 17:41:03.986846 2647 kubelet.go:312] "Adding apiserver pod source" Sep 4 17:41:03.987419 kubelet[2647]: I0904 17:41:03.986869 2647 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:41:03.993074 kubelet[2647]: I0904 17:41:03.993048 2647 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Sep 4 17:41:03.996183 kubelet[2647]: I0904 17:41:03.996164 2647 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 17:41:03.996872 kubelet[2647]: I0904 17:41:03.996856 2647 server.go:1256] "Started kubelet" Sep 4 17:41:04.003477 kubelet[2647]: I0904 17:41:04.003260 2647 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:41:04.020358 kubelet[2647]: I0904 17:41:04.020312 2647 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:41:04.022296 kubelet[2647]: I0904 17:41:04.022253 2647 server.go:461] "Adding debug handlers to kubelet server" Sep 4 17:41:04.029801 kubelet[2647]: I0904 17:41:04.029745 2647 factory.go:221] Registration of the systemd container factory successfully Sep 4 17:41:04.031689 kubelet[2647]: I0904 17:41:04.022618 2647 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 17:41:04.031689 kubelet[2647]: I0904 17:41:04.022618 2647 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:41:04.031689 kubelet[2647]: I0904 17:41:04.023893 2647 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Sep 4 17:41:04.031689 
kubelet[2647]: I0904 17:41:04.030858 2647 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 17:41:04.031689 kubelet[2647]: I0904 17:41:04.031452 2647 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:41:04.033554 kubelet[2647]: I0904 17:41:04.032362 2647 reconciler_new.go:29] "Reconciler: start to sync state" Sep 4 17:41:04.037392 kubelet[2647]: E0904 17:41:04.036726 2647 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:41:04.038985 sudo[2664]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 4 17:41:04.039340 sudo[2664]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Sep 4 17:41:04.049507 kubelet[2647]: I0904 17:41:04.049030 2647 factory.go:221] Registration of the containerd container factory successfully Sep 4 17:41:04.060588 kubelet[2647]: I0904 17:41:04.060394 2647 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:41:04.062878 kubelet[2647]: I0904 17:41:04.062394 2647 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 4 17:41:04.062878 kubelet[2647]: I0904 17:41:04.062435 2647 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:41:04.062878 kubelet[2647]: I0904 17:41:04.062460 2647 kubelet.go:2329] "Starting kubelet main sync loop" Sep 4 17:41:04.062878 kubelet[2647]: E0904 17:41:04.062568 2647 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:41:04.135568 kubelet[2647]: I0904 17:41:04.135538 2647 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975-2-1-d-945344e89d.novalocal" Sep 4 17:41:04.162713 kubelet[2647]: E0904 17:41:04.162662 2647 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 17:41:04.164292 kubelet[2647]: I0904 17:41:04.163977 2647 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:41:04.164292 kubelet[2647]: I0904 17:41:04.164018 2647 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:41:04.164292 kubelet[2647]: I0904 17:41:04.164039 2647 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:41:04.164292 kubelet[2647]: I0904 17:41:04.164198 2647 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 17:41:04.164292 kubelet[2647]: I0904 17:41:04.164223 2647 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 17:41:04.164292 kubelet[2647]: I0904 17:41:04.164231 2647 policy_none.go:49] "None policy: Start" Sep 4 17:41:04.165149 kubelet[2647]: I0904 17:41:04.165115 2647 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 17:41:04.165200 kubelet[2647]: I0904 17:41:04.165157 2647 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:41:04.165393 kubelet[2647]: I0904 17:41:04.165374 2647 state_mem.go:75] "Updated machine memory state" Sep 4 17:41:04.174526 kubelet[2647]: I0904 17:41:04.174479 2647 manager.go:479] "Failed to read data 
from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:41:04.174800 kubelet[2647]: I0904 17:41:04.174769 2647 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:41:04.364119 kubelet[2647]: I0904 17:41:04.363891 2647 topology_manager.go:215] "Topology Admit Handler" podUID="87841a2667b336672d0d2c4e81321f93" podNamespace="kube-system" podName="kube-apiserver-ci-3975-2-1-d-945344e89d.novalocal" Sep 4 17:41:04.364119 kubelet[2647]: I0904 17:41:04.364071 2647 topology_manager.go:215] "Topology Admit Handler" podUID="64403f817756a279a369029eec95b870" podNamespace="kube-system" podName="kube-controller-manager-ci-3975-2-1-d-945344e89d.novalocal" Sep 4 17:41:04.364340 kubelet[2647]: I0904 17:41:04.364158 2647 topology_manager.go:215] "Topology Admit Handler" podUID="b0c821bd565aea92700499e01e6a2737" podNamespace="kube-system" podName="kube-scheduler-ci-3975-2-1-d-945344e89d.novalocal" Sep 4 17:41:04.409439 kubelet[2647]: I0904 17:41:04.409367 2647 kubelet_node_status.go:112] "Node was previously registered" node="ci-3975-2-1-d-945344e89d.novalocal" Sep 4 17:41:04.409701 kubelet[2647]: I0904 17:41:04.409623 2647 kubelet_node_status.go:76] "Successfully registered node" node="ci-3975-2-1-d-945344e89d.novalocal" Sep 4 17:41:04.436829 kubelet[2647]: I0904 17:41:04.435705 2647 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/87841a2667b336672d0d2c4e81321f93-ca-certs\") pod \"kube-apiserver-ci-3975-2-1-d-945344e89d.novalocal\" (UID: \"87841a2667b336672d0d2c4e81321f93\") " pod="kube-system/kube-apiserver-ci-3975-2-1-d-945344e89d.novalocal" Sep 4 17:41:04.436829 kubelet[2647]: I0904 17:41:04.435838 2647 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/87841a2667b336672d0d2c4e81321f93-k8s-certs\") pod 
\"kube-apiserver-ci-3975-2-1-d-945344e89d.novalocal\" (UID: \"87841a2667b336672d0d2c4e81321f93\") " pod="kube-system/kube-apiserver-ci-3975-2-1-d-945344e89d.novalocal" Sep 4 17:41:04.436829 kubelet[2647]: I0904 17:41:04.435909 2647 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/64403f817756a279a369029eec95b870-ca-certs\") pod \"kube-controller-manager-ci-3975-2-1-d-945344e89d.novalocal\" (UID: \"64403f817756a279a369029eec95b870\") " pod="kube-system/kube-controller-manager-ci-3975-2-1-d-945344e89d.novalocal" Sep 4 17:41:04.436829 kubelet[2647]: I0904 17:41:04.436001 2647 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/64403f817756a279a369029eec95b870-flexvolume-dir\") pod \"kube-controller-manager-ci-3975-2-1-d-945344e89d.novalocal\" (UID: \"64403f817756a279a369029eec95b870\") " pod="kube-system/kube-controller-manager-ci-3975-2-1-d-945344e89d.novalocal" Sep 4 17:41:04.436829 kubelet[2647]: I0904 17:41:04.436076 2647 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/87841a2667b336672d0d2c4e81321f93-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975-2-1-d-945344e89d.novalocal\" (UID: \"87841a2667b336672d0d2c4e81321f93\") " pod="kube-system/kube-apiserver-ci-3975-2-1-d-945344e89d.novalocal" Sep 4 17:41:04.437452 kubelet[2647]: I0904 17:41:04.436133 2647 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/64403f817756a279a369029eec95b870-k8s-certs\") pod \"kube-controller-manager-ci-3975-2-1-d-945344e89d.novalocal\" (UID: \"64403f817756a279a369029eec95b870\") " pod="kube-system/kube-controller-manager-ci-3975-2-1-d-945344e89d.novalocal" Sep 4 
17:41:04.437452 kubelet[2647]: I0904 17:41:04.436194 2647 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/64403f817756a279a369029eec95b870-kubeconfig\") pod \"kube-controller-manager-ci-3975-2-1-d-945344e89d.novalocal\" (UID: \"64403f817756a279a369029eec95b870\") " pod="kube-system/kube-controller-manager-ci-3975-2-1-d-945344e89d.novalocal" Sep 4 17:41:04.437452 kubelet[2647]: I0904 17:41:04.436258 2647 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/64403f817756a279a369029eec95b870-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975-2-1-d-945344e89d.novalocal\" (UID: \"64403f817756a279a369029eec95b870\") " pod="kube-system/kube-controller-manager-ci-3975-2-1-d-945344e89d.novalocal" Sep 4 17:41:04.437452 kubelet[2647]: I0904 17:41:04.436328 2647 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b0c821bd565aea92700499e01e6a2737-kubeconfig\") pod \"kube-scheduler-ci-3975-2-1-d-945344e89d.novalocal\" (UID: \"b0c821bd565aea92700499e01e6a2737\") " pod="kube-system/kube-scheduler-ci-3975-2-1-d-945344e89d.novalocal" Sep 4 17:41:04.451615 kubelet[2647]: W0904 17:41:04.450801 2647 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 17:41:04.451615 kubelet[2647]: W0904 17:41:04.450871 2647 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 17:41:04.451615 kubelet[2647]: W0904 17:41:04.451025 2647 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is 
recommended: [must not contain dots] Sep 4 17:41:04.991415 kubelet[2647]: I0904 17:41:04.990787 2647 apiserver.go:52] "Watching apiserver" Sep 4 17:41:05.032856 kubelet[2647]: I0904 17:41:05.032720 2647 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 4 17:41:05.222800 kubelet[2647]: I0904 17:41:05.222682 2647 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3975-2-1-d-945344e89d.novalocal" podStartSLOduration=1.222597855 podStartE2EDuration="1.222597855s" podCreationTimestamp="2024-09-04 17:41:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:41:05.196555157 +0000 UTC m=+1.469137299" watchObservedRunningTime="2024-09-04 17:41:05.222597855 +0000 UTC m=+1.495179907" Sep 4 17:41:05.498484 kubelet[2647]: I0904 17:41:05.498412 2647 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3975-2-1-d-945344e89d.novalocal" podStartSLOduration=1.4983182259999999 podStartE2EDuration="1.498318226s" podCreationTimestamp="2024-09-04 17:41:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:41:05.223834899 +0000 UTC m=+1.496416961" watchObservedRunningTime="2024-09-04 17:41:05.498318226 +0000 UTC m=+1.770900338" Sep 4 17:41:05.538116 kubelet[2647]: I0904 17:41:05.537952 2647 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3975-2-1-d-945344e89d.novalocal" podStartSLOduration=1.53789946 podStartE2EDuration="1.53789946s" podCreationTimestamp="2024-09-04 17:41:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:41:05.500813644 +0000 UTC m=+1.773395776" 
watchObservedRunningTime="2024-09-04 17:41:05.53789946 +0000 UTC m=+1.810481512" Sep 4 17:41:05.668844 sudo[2664]: pam_unix(sudo:session): session closed for user root Sep 4 17:41:08.889742 sudo[1705]: pam_unix(sudo:session): session closed for user root Sep 4 17:41:09.071834 sshd[1702]: pam_unix(sshd:session): session closed for user core Sep 4 17:41:09.081877 systemd[1]: sshd@8-172.24.4.44:22-172.24.4.1:41796.service: Deactivated successfully. Sep 4 17:41:09.089052 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 17:41:09.090004 systemd[1]: session-11.scope: Consumed 8.818s CPU time, 135.9M memory peak, 0B memory swap peak. Sep 4 17:41:09.095460 systemd-logind[1430]: Session 11 logged out. Waiting for processes to exit. Sep 4 17:41:09.098952 systemd-logind[1430]: Removed session 11. Sep 4 17:41:17.318803 kubelet[2647]: I0904 17:41:17.318765 2647 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 17:41:17.320052 containerd[1451]: time="2024-09-04T17:41:17.319650786Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 4 17:41:17.320354 kubelet[2647]: I0904 17:41:17.319975 2647 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 17:41:18.155385 kubelet[2647]: I0904 17:41:18.155333 2647 topology_manager.go:215] "Topology Admit Handler" podUID="85f59690-0e6e-4266-87c7-8b75f05bbda6" podNamespace="kube-system" podName="kube-proxy-p5xtt" Sep 4 17:41:18.175271 systemd[1]: Created slice kubepods-besteffort-pod85f59690_0e6e_4266_87c7_8b75f05bbda6.slice - libcontainer container kubepods-besteffort-pod85f59690_0e6e_4266_87c7_8b75f05bbda6.slice. 
Sep 4 17:41:18.190307 kubelet[2647]: I0904 17:41:18.189671 2647 topology_manager.go:215] "Topology Admit Handler" podUID="c1f3db8c-5e00-4941-95f8-34a46b0462eb" podNamespace="kube-system" podName="cilium-pzh4x" Sep 4 17:41:18.202772 systemd[1]: Created slice kubepods-burstable-podc1f3db8c_5e00_4941_95f8_34a46b0462eb.slice - libcontainer container kubepods-burstable-podc1f3db8c_5e00_4941_95f8_34a46b0462eb.slice. Sep 4 17:41:18.227222 kubelet[2647]: I0904 17:41:18.226420 2647 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c1f3db8c-5e00-4941-95f8-34a46b0462eb-cilium-config-path\") pod \"cilium-pzh4x\" (UID: \"c1f3db8c-5e00-4941-95f8-34a46b0462eb\") " pod="kube-system/cilium-pzh4x" Sep 4 17:41:18.227222 kubelet[2647]: I0904 17:41:18.226471 2647 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c1f3db8c-5e00-4941-95f8-34a46b0462eb-bpf-maps\") pod \"cilium-pzh4x\" (UID: \"c1f3db8c-5e00-4941-95f8-34a46b0462eb\") " pod="kube-system/cilium-pzh4x" Sep 4 17:41:18.227222 kubelet[2647]: I0904 17:41:18.226495 2647 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c1f3db8c-5e00-4941-95f8-34a46b0462eb-hostproc\") pod \"cilium-pzh4x\" (UID: \"c1f3db8c-5e00-4941-95f8-34a46b0462eb\") " pod="kube-system/cilium-pzh4x" Sep 4 17:41:18.227222 kubelet[2647]: I0904 17:41:18.226536 2647 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c1f3db8c-5e00-4941-95f8-34a46b0462eb-cilium-cgroup\") pod \"cilium-pzh4x\" (UID: \"c1f3db8c-5e00-4941-95f8-34a46b0462eb\") " pod="kube-system/cilium-pzh4x" Sep 4 17:41:18.227222 kubelet[2647]: I0904 17:41:18.226573 2647 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c1f3db8c-5e00-4941-95f8-34a46b0462eb-host-proc-sys-kernel\") pod \"cilium-pzh4x\" (UID: \"c1f3db8c-5e00-4941-95f8-34a46b0462eb\") " pod="kube-system/cilium-pzh4x" Sep 4 17:41:18.227222 kubelet[2647]: I0904 17:41:18.226598 2647 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/85f59690-0e6e-4266-87c7-8b75f05bbda6-kube-proxy\") pod \"kube-proxy-p5xtt\" (UID: \"85f59690-0e6e-4266-87c7-8b75f05bbda6\") " pod="kube-system/kube-proxy-p5xtt" Sep 4 17:41:18.227530 kubelet[2647]: I0904 17:41:18.226621 2647 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c1f3db8c-5e00-4941-95f8-34a46b0462eb-xtables-lock\") pod \"cilium-pzh4x\" (UID: \"c1f3db8c-5e00-4941-95f8-34a46b0462eb\") " pod="kube-system/cilium-pzh4x" Sep 4 17:41:18.227530 kubelet[2647]: I0904 17:41:18.226643 2647 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c1f3db8c-5e00-4941-95f8-34a46b0462eb-host-proc-sys-net\") pod \"cilium-pzh4x\" (UID: \"c1f3db8c-5e00-4941-95f8-34a46b0462eb\") " pod="kube-system/cilium-pzh4x" Sep 4 17:41:18.227530 kubelet[2647]: I0904 17:41:18.226668 2647 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9q54\" (UniqueName: \"kubernetes.io/projected/c1f3db8c-5e00-4941-95f8-34a46b0462eb-kube-api-access-v9q54\") pod \"cilium-pzh4x\" (UID: \"c1f3db8c-5e00-4941-95f8-34a46b0462eb\") " pod="kube-system/cilium-pzh4x" Sep 4 17:41:18.227530 kubelet[2647]: I0904 17:41:18.226691 2647 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/c1f3db8c-5e00-4941-95f8-34a46b0462eb-cilium-run\") pod \"cilium-pzh4x\" (UID: \"c1f3db8c-5e00-4941-95f8-34a46b0462eb\") " pod="kube-system/cilium-pzh4x" Sep 4 17:41:18.227530 kubelet[2647]: I0904 17:41:18.226715 2647 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c1f3db8c-5e00-4941-95f8-34a46b0462eb-clustermesh-secrets\") pod \"cilium-pzh4x\" (UID: \"c1f3db8c-5e00-4941-95f8-34a46b0462eb\") " pod="kube-system/cilium-pzh4x" Sep 4 17:41:18.227660 kubelet[2647]: I0904 17:41:18.226738 2647 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njhcm\" (UniqueName: \"kubernetes.io/projected/85f59690-0e6e-4266-87c7-8b75f05bbda6-kube-api-access-njhcm\") pod \"kube-proxy-p5xtt\" (UID: \"85f59690-0e6e-4266-87c7-8b75f05bbda6\") " pod="kube-system/kube-proxy-p5xtt" Sep 4 17:41:18.227660 kubelet[2647]: I0904 17:41:18.226760 2647 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c1f3db8c-5e00-4941-95f8-34a46b0462eb-cni-path\") pod \"cilium-pzh4x\" (UID: \"c1f3db8c-5e00-4941-95f8-34a46b0462eb\") " pod="kube-system/cilium-pzh4x" Sep 4 17:41:18.227660 kubelet[2647]: I0904 17:41:18.226784 2647 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1f3db8c-5e00-4941-95f8-34a46b0462eb-lib-modules\") pod \"cilium-pzh4x\" (UID: \"c1f3db8c-5e00-4941-95f8-34a46b0462eb\") " pod="kube-system/cilium-pzh4x" Sep 4 17:41:18.227660 kubelet[2647]: I0904 17:41:18.226807 2647 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/85f59690-0e6e-4266-87c7-8b75f05bbda6-lib-modules\") pod \"kube-proxy-p5xtt\" (UID: 
\"85f59690-0e6e-4266-87c7-8b75f05bbda6\") " pod="kube-system/kube-proxy-p5xtt" Sep 4 17:41:18.227660 kubelet[2647]: I0904 17:41:18.226829 2647 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c1f3db8c-5e00-4941-95f8-34a46b0462eb-hubble-tls\") pod \"cilium-pzh4x\" (UID: \"c1f3db8c-5e00-4941-95f8-34a46b0462eb\") " pod="kube-system/cilium-pzh4x" Sep 4 17:41:18.227660 kubelet[2647]: I0904 17:41:18.226864 2647 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c1f3db8c-5e00-4941-95f8-34a46b0462eb-etc-cni-netd\") pod \"cilium-pzh4x\" (UID: \"c1f3db8c-5e00-4941-95f8-34a46b0462eb\") " pod="kube-system/cilium-pzh4x" Sep 4 17:41:18.227931 kubelet[2647]: I0904 17:41:18.226888 2647 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/85f59690-0e6e-4266-87c7-8b75f05bbda6-xtables-lock\") pod \"kube-proxy-p5xtt\" (UID: \"85f59690-0e6e-4266-87c7-8b75f05bbda6\") " pod="kube-system/kube-proxy-p5xtt" Sep 4 17:41:18.414991 kubelet[2647]: I0904 17:41:18.414060 2647 topology_manager.go:215] "Topology Admit Handler" podUID="a6921760-dfc8-4677-be61-4956071481ac" podNamespace="kube-system" podName="cilium-operator-5cc964979-hdv6s" Sep 4 17:41:18.423066 systemd[1]: Created slice kubepods-besteffort-poda6921760_dfc8_4677_be61_4956071481ac.slice - libcontainer container kubepods-besteffort-poda6921760_dfc8_4677_be61_4956071481ac.slice. 
Sep 4 17:41:18.428471 kubelet[2647]: I0904 17:41:18.428345 2647 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a6921760-dfc8-4677-be61-4956071481ac-cilium-config-path\") pod \"cilium-operator-5cc964979-hdv6s\" (UID: \"a6921760-dfc8-4677-be61-4956071481ac\") " pod="kube-system/cilium-operator-5cc964979-hdv6s" Sep 4 17:41:18.428471 kubelet[2647]: I0904 17:41:18.428407 2647 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7wpq\" (UniqueName: \"kubernetes.io/projected/a6921760-dfc8-4677-be61-4956071481ac-kube-api-access-l7wpq\") pod \"cilium-operator-5cc964979-hdv6s\" (UID: \"a6921760-dfc8-4677-be61-4956071481ac\") " pod="kube-system/cilium-operator-5cc964979-hdv6s" Sep 4 17:41:18.492972 containerd[1451]: time="2024-09-04T17:41:18.492673271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p5xtt,Uid:85f59690-0e6e-4266-87c7-8b75f05bbda6,Namespace:kube-system,Attempt:0,}" Sep 4 17:41:18.509320 containerd[1451]: time="2024-09-04T17:41:18.508772535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pzh4x,Uid:c1f3db8c-5e00-4941-95f8-34a46b0462eb,Namespace:kube-system,Attempt:0,}" Sep 4 17:41:18.557381 containerd[1451]: time="2024-09-04T17:41:18.557163845Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:41:18.559883 containerd[1451]: time="2024-09-04T17:41:18.559684113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:41:18.559883 containerd[1451]: time="2024-09-04T17:41:18.559712853Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:41:18.559883 containerd[1451]: time="2024-09-04T17:41:18.559726102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:41:18.566179 containerd[1451]: time="2024-09-04T17:41:18.565763597Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:41:18.566179 containerd[1451]: time="2024-09-04T17:41:18.565829618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:41:18.566179 containerd[1451]: time="2024-09-04T17:41:18.565857287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:41:18.566179 containerd[1451]: time="2024-09-04T17:41:18.565875116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:41:18.595809 systemd[1]: Started cri-containerd-c89dfa6419989f3c34f40a3c705647b4be1bc39da2038bfa706a1ba7fec825b5.scope - libcontainer container c89dfa6419989f3c34f40a3c705647b4be1bc39da2038bfa706a1ba7fec825b5. Sep 4 17:41:18.602692 systemd[1]: Started cri-containerd-af6aa40655824090231787e3a3df9e3e93365e891ba1be7b84687caef80dab7e.scope - libcontainer container af6aa40655824090231787e3a3df9e3e93365e891ba1be7b84687caef80dab7e. 
Sep 4 17:41:18.635586 containerd[1451]: time="2024-09-04T17:41:18.635183086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p5xtt,Uid:85f59690-0e6e-4266-87c7-8b75f05bbda6,Namespace:kube-system,Attempt:0,} returns sandbox id \"c89dfa6419989f3c34f40a3c705647b4be1bc39da2038bfa706a1ba7fec825b5\"" Sep 4 17:41:18.647186 containerd[1451]: time="2024-09-04T17:41:18.647042201Z" level=info msg="CreateContainer within sandbox \"c89dfa6419989f3c34f40a3c705647b4be1bc39da2038bfa706a1ba7fec825b5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 17:41:18.659360 containerd[1451]: time="2024-09-04T17:41:18.659306666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pzh4x,Uid:c1f3db8c-5e00-4941-95f8-34a46b0462eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"af6aa40655824090231787e3a3df9e3e93365e891ba1be7b84687caef80dab7e\"" Sep 4 17:41:18.661770 containerd[1451]: time="2024-09-04T17:41:18.661618066Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 4 17:41:18.681917 containerd[1451]: time="2024-09-04T17:41:18.681661211Z" level=info msg="CreateContainer within sandbox \"c89dfa6419989f3c34f40a3c705647b4be1bc39da2038bfa706a1ba7fec825b5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7d6092ede41f8444e366be7e6ec16cd4b06f3823571f46bd955ec58dd6e454e0\"" Sep 4 17:41:18.684580 containerd[1451]: time="2024-09-04T17:41:18.683725270Z" level=info msg="StartContainer for \"7d6092ede41f8444e366be7e6ec16cd4b06f3823571f46bd955ec58dd6e454e0\"" Sep 4 17:41:18.715722 systemd[1]: Started cri-containerd-7d6092ede41f8444e366be7e6ec16cd4b06f3823571f46bd955ec58dd6e454e0.scope - libcontainer container 7d6092ede41f8444e366be7e6ec16cd4b06f3823571f46bd955ec58dd6e454e0. 
Sep 4 17:41:18.729460 containerd[1451]: time="2024-09-04T17:41:18.728826541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-hdv6s,Uid:a6921760-dfc8-4677-be61-4956071481ac,Namespace:kube-system,Attempt:0,}" Sep 4 17:41:18.746921 containerd[1451]: time="2024-09-04T17:41:18.746870827Z" level=info msg="StartContainer for \"7d6092ede41f8444e366be7e6ec16cd4b06f3823571f46bd955ec58dd6e454e0\" returns successfully" Sep 4 17:41:18.780237 containerd[1451]: time="2024-09-04T17:41:18.780128686Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:41:18.780368 containerd[1451]: time="2024-09-04T17:41:18.780271323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:41:18.784171 containerd[1451]: time="2024-09-04T17:41:18.783958244Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:41:18.784171 containerd[1451]: time="2024-09-04T17:41:18.783995023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:41:18.807717 systemd[1]: Started cri-containerd-4e9b17cebc6451d7f7be8e6438c207483614deb6368b7c36482f90a0318702f9.scope - libcontainer container 4e9b17cebc6451d7f7be8e6438c207483614deb6368b7c36482f90a0318702f9. 
Sep 4 17:41:18.863819 containerd[1451]: time="2024-09-04T17:41:18.863761432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-hdv6s,Uid:a6921760-dfc8-4677-be61-4956071481ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e9b17cebc6451d7f7be8e6438c207483614deb6368b7c36482f90a0318702f9\"" Sep 4 17:41:24.881901 kubelet[2647]: I0904 17:41:24.881152 2647 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-p5xtt" podStartSLOduration=6.881061109 podStartE2EDuration="6.881061109s" podCreationTimestamp="2024-09-04 17:41:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:41:19.212722957 +0000 UTC m=+15.485305019" watchObservedRunningTime="2024-09-04 17:41:24.881061109 +0000 UTC m=+21.153643211" Sep 4 17:41:27.327247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1086715615.mount: Deactivated successfully. 
Sep 4 17:41:30.938171 containerd[1451]: time="2024-09-04T17:41:30.937785798Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:41:30.943593 containerd[1451]: time="2024-09-04T17:41:30.943468034Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735307" Sep 4 17:41:30.945600 containerd[1451]: time="2024-09-04T17:41:30.945450758Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:41:31.009044 containerd[1451]: time="2024-09-04T17:41:31.008715614Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.347039747s" Sep 4 17:41:31.009044 containerd[1451]: time="2024-09-04T17:41:31.008810397Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 4 17:41:31.011857 containerd[1451]: time="2024-09-04T17:41:31.011362565Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 4 17:41:31.017201 containerd[1451]: time="2024-09-04T17:41:31.016959138Z" level=info msg="CreateContainer within sandbox \"af6aa40655824090231787e3a3df9e3e93365e891ba1be7b84687caef80dab7e\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 4 17:41:31.112559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3855006803.mount: Deactivated successfully. Sep 4 17:41:31.122717 containerd[1451]: time="2024-09-04T17:41:31.122583210Z" level=info msg="CreateContainer within sandbox \"af6aa40655824090231787e3a3df9e3e93365e891ba1be7b84687caef80dab7e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1f59aa3334e903cf7ab3e19742af20589d45658f0d65d56c6f37cdf0211d953f\"" Sep 4 17:41:31.124359 containerd[1451]: time="2024-09-04T17:41:31.123851714Z" level=info msg="StartContainer for \"1f59aa3334e903cf7ab3e19742af20589d45658f0d65d56c6f37cdf0211d953f\"" Sep 4 17:41:31.283722 systemd[1]: Started cri-containerd-1f59aa3334e903cf7ab3e19742af20589d45658f0d65d56c6f37cdf0211d953f.scope - libcontainer container 1f59aa3334e903cf7ab3e19742af20589d45658f0d65d56c6f37cdf0211d953f. Sep 4 17:41:31.332429 containerd[1451]: time="2024-09-04T17:41:31.332381832Z" level=info msg="StartContainer for \"1f59aa3334e903cf7ab3e19742af20589d45658f0d65d56c6f37cdf0211d953f\" returns successfully" Sep 4 17:41:31.341432 systemd[1]: cri-containerd-1f59aa3334e903cf7ab3e19742af20589d45658f0d65d56c6f37cdf0211d953f.scope: Deactivated successfully. 
Sep 4 17:41:32.037642 containerd[1451]: time="2024-09-04T17:41:31.980903500Z" level=info msg="shim disconnected" id=1f59aa3334e903cf7ab3e19742af20589d45658f0d65d56c6f37cdf0211d953f namespace=k8s.io Sep 4 17:41:32.037642 containerd[1451]: time="2024-09-04T17:41:32.037478296Z" level=warning msg="cleaning up after shim disconnected" id=1f59aa3334e903cf7ab3e19742af20589d45658f0d65d56c6f37cdf0211d953f namespace=k8s.io Sep 4 17:41:32.037642 containerd[1451]: time="2024-09-04T17:41:32.037495301Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:41:32.109750 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f59aa3334e903cf7ab3e19742af20589d45658f0d65d56c6f37cdf0211d953f-rootfs.mount: Deactivated successfully. Sep 4 17:41:32.863550 containerd[1451]: time="2024-09-04T17:41:32.863282324Z" level=info msg="CreateContainer within sandbox \"af6aa40655824090231787e3a3df9e3e93365e891ba1be7b84687caef80dab7e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 4 17:41:32.912191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount560688227.mount: Deactivated successfully. Sep 4 17:41:32.930619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1545958827.mount: Deactivated successfully. 
Sep 4 17:41:32.937604 containerd[1451]: time="2024-09-04T17:41:32.937491771Z" level=info msg="CreateContainer within sandbox \"af6aa40655824090231787e3a3df9e3e93365e891ba1be7b84687caef80dab7e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f0be9ba441bd5ec5d48e85ad4495e2a49b530f915ec2fe0cdb26b347e0306fe6\"" Sep 4 17:41:32.941851 containerd[1451]: time="2024-09-04T17:41:32.941824508Z" level=info msg="StartContainer for \"f0be9ba441bd5ec5d48e85ad4495e2a49b530f915ec2fe0cdb26b347e0306fe6\"" Sep 4 17:41:32.997900 systemd[1]: Started cri-containerd-f0be9ba441bd5ec5d48e85ad4495e2a49b530f915ec2fe0cdb26b347e0306fe6.scope - libcontainer container f0be9ba441bd5ec5d48e85ad4495e2a49b530f915ec2fe0cdb26b347e0306fe6. Sep 4 17:41:33.067175 containerd[1451]: time="2024-09-04T17:41:33.067133686Z" level=info msg="StartContainer for \"f0be9ba441bd5ec5d48e85ad4495e2a49b530f915ec2fe0cdb26b347e0306fe6\" returns successfully" Sep 4 17:41:33.071912 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 17:41:33.072205 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:41:33.072276 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:41:33.085349 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:41:33.136296 systemd[1]: cri-containerd-f0be9ba441bd5ec5d48e85ad4495e2a49b530f915ec2fe0cdb26b347e0306fe6.scope: Deactivated successfully. Sep 4 17:41:33.161629 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0be9ba441bd5ec5d48e85ad4495e2a49b530f915ec2fe0cdb26b347e0306fe6-rootfs.mount: Deactivated successfully. Sep 4 17:41:33.201850 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 4 17:41:33.220644 containerd[1451]: time="2024-09-04T17:41:33.220493457Z" level=info msg="shim disconnected" id=f0be9ba441bd5ec5d48e85ad4495e2a49b530f915ec2fe0cdb26b347e0306fe6 namespace=k8s.io Sep 4 17:41:33.220824 containerd[1451]: time="2024-09-04T17:41:33.220660185Z" level=warning msg="cleaning up after shim disconnected" id=f0be9ba441bd5ec5d48e85ad4495e2a49b530f915ec2fe0cdb26b347e0306fe6 namespace=k8s.io Sep 4 17:41:33.220824 containerd[1451]: time="2024-09-04T17:41:33.220682992Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:41:33.804242 containerd[1451]: time="2024-09-04T17:41:33.804183175Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:41:33.805372 containerd[1451]: time="2024-09-04T17:41:33.805181857Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907205" Sep 4 17:41:33.806530 containerd[1451]: time="2024-09-04T17:41:33.806453725Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:41:33.808548 containerd[1451]: time="2024-09-04T17:41:33.808000975Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.796567565s" Sep 4 17:41:33.808548 containerd[1451]: time="2024-09-04T17:41:33.808050997Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 4 17:41:33.811578 containerd[1451]: time="2024-09-04T17:41:33.811446286Z" level=info msg="CreateContainer within sandbox \"4e9b17cebc6451d7f7be8e6438c207483614deb6368b7c36482f90a0318702f9\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 4 17:41:33.840091 containerd[1451]: time="2024-09-04T17:41:33.840029159Z" level=info msg="CreateContainer within sandbox \"4e9b17cebc6451d7f7be8e6438c207483614deb6368b7c36482f90a0318702f9\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"12e7e0127e4562cd17f3327874e173c5e0b2e5844fc2d8c095e1bde5ac972de4\"" Sep 4 17:41:33.840797 containerd[1451]: time="2024-09-04T17:41:33.840675724Z" level=info msg="StartContainer for \"12e7e0127e4562cd17f3327874e173c5e0b2e5844fc2d8c095e1bde5ac972de4\"" Sep 4 17:41:33.906109 containerd[1451]: time="2024-09-04T17:41:33.903983420Z" level=info msg="CreateContainer within sandbox \"af6aa40655824090231787e3a3df9e3e93365e891ba1be7b84687caef80dab7e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 4 17:41:33.924710 systemd[1]: Started cri-containerd-12e7e0127e4562cd17f3327874e173c5e0b2e5844fc2d8c095e1bde5ac972de4.scope - libcontainer container 12e7e0127e4562cd17f3327874e173c5e0b2e5844fc2d8c095e1bde5ac972de4. 
Sep 4 17:41:33.963582 containerd[1451]: time="2024-09-04T17:41:33.963483936Z" level=info msg="CreateContainer within sandbox \"af6aa40655824090231787e3a3df9e3e93365e891ba1be7b84687caef80dab7e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4b2579e9ade6410db6faea0892f555d079c8ee38ccdb20f8921fe5011aad46c3\"" Sep 4 17:41:33.969545 containerd[1451]: time="2024-09-04T17:41:33.966609848Z" level=info msg="StartContainer for \"4b2579e9ade6410db6faea0892f555d079c8ee38ccdb20f8921fe5011aad46c3\"" Sep 4 17:41:33.969545 containerd[1451]: time="2024-09-04T17:41:33.967591697Z" level=info msg="StartContainer for \"12e7e0127e4562cd17f3327874e173c5e0b2e5844fc2d8c095e1bde5ac972de4\" returns successfully" Sep 4 17:41:34.012864 systemd[1]: Started cri-containerd-4b2579e9ade6410db6faea0892f555d079c8ee38ccdb20f8921fe5011aad46c3.scope - libcontainer container 4b2579e9ade6410db6faea0892f555d079c8ee38ccdb20f8921fe5011aad46c3. Sep 4 17:41:34.060179 containerd[1451]: time="2024-09-04T17:41:34.060020919Z" level=info msg="StartContainer for \"4b2579e9ade6410db6faea0892f555d079c8ee38ccdb20f8921fe5011aad46c3\" returns successfully" Sep 4 17:41:34.061405 systemd[1]: cri-containerd-4b2579e9ade6410db6faea0892f555d079c8ee38ccdb20f8921fe5011aad46c3.scope: Deactivated successfully. 
Sep 4 17:41:34.491159 containerd[1451]: time="2024-09-04T17:41:34.490689605Z" level=info msg="shim disconnected" id=4b2579e9ade6410db6faea0892f555d079c8ee38ccdb20f8921fe5011aad46c3 namespace=k8s.io Sep 4 17:41:34.493863 containerd[1451]: time="2024-09-04T17:41:34.492445791Z" level=warning msg="cleaning up after shim disconnected" id=4b2579e9ade6410db6faea0892f555d079c8ee38ccdb20f8921fe5011aad46c3 namespace=k8s.io Sep 4 17:41:34.493863 containerd[1451]: time="2024-09-04T17:41:34.492666530Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:41:34.887534 containerd[1451]: time="2024-09-04T17:41:34.887465245Z" level=info msg="CreateContainer within sandbox \"af6aa40655824090231787e3a3df9e3e93365e891ba1be7b84687caef80dab7e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 4 17:41:34.917710 containerd[1451]: time="2024-09-04T17:41:34.917662936Z" level=info msg="CreateContainer within sandbox \"af6aa40655824090231787e3a3df9e3e93365e891ba1be7b84687caef80dab7e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"aaf2cf417ed0bb3050442ab3ff707d226223b410517101762e72b1e966edc3ba\"" Sep 4 17:41:34.922550 containerd[1451]: time="2024-09-04T17:41:34.921703672Z" level=info msg="StartContainer for \"aaf2cf417ed0bb3050442ab3ff707d226223b410517101762e72b1e966edc3ba\"" Sep 4 17:41:34.988740 systemd[1]: Started cri-containerd-aaf2cf417ed0bb3050442ab3ff707d226223b410517101762e72b1e966edc3ba.scope - libcontainer container aaf2cf417ed0bb3050442ab3ff707d226223b410517101762e72b1e966edc3ba. Sep 4 17:41:35.060335 systemd[1]: cri-containerd-aaf2cf417ed0bb3050442ab3ff707d226223b410517101762e72b1e966edc3ba.scope: Deactivated successfully. 
Sep 4 17:41:35.062974 containerd[1451]: time="2024-09-04T17:41:35.062848821Z" level=info msg="StartContainer for \"aaf2cf417ed0bb3050442ab3ff707d226223b410517101762e72b1e966edc3ba\" returns successfully" Sep 4 17:41:35.108960 systemd[1]: run-containerd-runc-k8s.io-aaf2cf417ed0bb3050442ab3ff707d226223b410517101762e72b1e966edc3ba-runc.kohNsx.mount: Deactivated successfully. Sep 4 17:41:35.109383 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aaf2cf417ed0bb3050442ab3ff707d226223b410517101762e72b1e966edc3ba-rootfs.mount: Deactivated successfully. Sep 4 17:41:35.114022 containerd[1451]: time="2024-09-04T17:41:35.113936336Z" level=info msg="shim disconnected" id=aaf2cf417ed0bb3050442ab3ff707d226223b410517101762e72b1e966edc3ba namespace=k8s.io Sep 4 17:41:35.114022 containerd[1451]: time="2024-09-04T17:41:35.114020897Z" level=warning msg="cleaning up after shim disconnected" id=aaf2cf417ed0bb3050442ab3ff707d226223b410517101762e72b1e966edc3ba namespace=k8s.io Sep 4 17:41:35.114438 containerd[1451]: time="2024-09-04T17:41:35.114042572Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:41:35.173555 kubelet[2647]: I0904 17:41:35.173380 2647 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-hdv6s" podStartSLOduration=2.230026506 podStartE2EDuration="17.173321442s" podCreationTimestamp="2024-09-04 17:41:18 +0000 UTC" firstStartedPulling="2024-09-04 17:41:18.865424581 +0000 UTC m=+15.138006633" lastFinishedPulling="2024-09-04 17:41:33.808719517 +0000 UTC m=+30.081301569" observedRunningTime="2024-09-04 17:41:34.954018093 +0000 UTC m=+31.226600155" watchObservedRunningTime="2024-09-04 17:41:35.173321442 +0000 UTC m=+31.445903494" Sep 4 17:41:35.924125 containerd[1451]: time="2024-09-04T17:41:35.923046108Z" level=info msg="CreateContainer within sandbox \"af6aa40655824090231787e3a3df9e3e93365e891ba1be7b84687caef80dab7e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 4 17:41:35.962667 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1092421733.mount: Deactivated successfully. Sep 4 17:41:35.973048 containerd[1451]: time="2024-09-04T17:41:35.972980365Z" level=info msg="CreateContainer within sandbox \"af6aa40655824090231787e3a3df9e3e93365e891ba1be7b84687caef80dab7e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7d4bae460283e7a4b66222cd255ad6d4aca8bed0f038418f5c644a2e5adc4776\"" Sep 4 17:41:35.976617 containerd[1451]: time="2024-09-04T17:41:35.973998049Z" level=info msg="StartContainer for \"7d4bae460283e7a4b66222cd255ad6d4aca8bed0f038418f5c644a2e5adc4776\"" Sep 4 17:41:36.024824 systemd[1]: Started cri-containerd-7d4bae460283e7a4b66222cd255ad6d4aca8bed0f038418f5c644a2e5adc4776.scope - libcontainer container 7d4bae460283e7a4b66222cd255ad6d4aca8bed0f038418f5c644a2e5adc4776. Sep 4 17:41:36.086668 containerd[1451]: time="2024-09-04T17:41:36.086567113Z" level=info msg="StartContainer for \"7d4bae460283e7a4b66222cd255ad6d4aca8bed0f038418f5c644a2e5adc4776\" returns successfully" Sep 4 17:41:36.358572 kubelet[2647]: I0904 17:41:36.357486 2647 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Sep 4 17:41:36.431428 kubelet[2647]: I0904 17:41:36.431339 2647 topology_manager.go:215] "Topology Admit Handler" podUID="b7927dd5-7723-4144-986a-e9e6b5ccba4f" podNamespace="kube-system" podName="coredns-76f75df574-7p9xw" Sep 4 17:41:36.445724 kubelet[2647]: I0904 17:41:36.444116 2647 topology_manager.go:215] "Topology Admit Handler" podUID="9e07e6c6-4294-4d00-b398-ff6034062c8e" podNamespace="kube-system" podName="coredns-76f75df574-ldch8" Sep 4 17:41:36.445724 kubelet[2647]: W0904 17:41:36.444961 2647 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-3975-2-1-d-945344e89d.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3975-2-1-d-945344e89d.novalocal' and this object
Sep 4 17:41:36.445724 kubelet[2647]: E0904 17:41:36.445007 2647 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-3975-2-1-d-945344e89d.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3975-2-1-d-945344e89d.novalocal' and this object Sep 4 17:41:36.458097 systemd[1]: Created slice kubepods-burstable-podb7927dd5_7723_4144_986a_e9e6b5ccba4f.slice - libcontainer container kubepods-burstable-podb7927dd5_7723_4144_986a_e9e6b5ccba4f.slice. Sep 4 17:41:36.472847 systemd[1]: Created slice kubepods-burstable-pod9e07e6c6_4294_4d00_b398_ff6034062c8e.slice - libcontainer container kubepods-burstable-pod9e07e6c6_4294_4d00_b398_ff6034062c8e.slice. Sep 4 17:41:36.608485 kubelet[2647]: I0904 17:41:36.608391 2647 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e07e6c6-4294-4d00-b398-ff6034062c8e-config-volume\") pod \"coredns-76f75df574-ldch8\" (UID: \"9e07e6c6-4294-4d00-b398-ff6034062c8e\") " pod="kube-system/coredns-76f75df574-ldch8" Sep 4 17:41:36.608485 kubelet[2647]: I0904 17:41:36.608499 2647 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzfnn\" (UniqueName: \"kubernetes.io/projected/b7927dd5-7723-4144-986a-e9e6b5ccba4f-kube-api-access-jzfnn\") pod \"coredns-76f75df574-7p9xw\" (UID: \"b7927dd5-7723-4144-986a-e9e6b5ccba4f\") " pod="kube-system/coredns-76f75df574-7p9xw" Sep 4 17:41:36.609636 kubelet[2647]: I0904 17:41:36.608612 2647 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nfqx\" (UniqueName: \"kubernetes.io/projected/9e07e6c6-4294-4d00-b398-ff6034062c8e-kube-api-access-7nfqx\") pod \"coredns-76f75df574-ldch8\" (UID: \"9e07e6c6-4294-4d00-b398-ff6034062c8e\") " pod="kube-system/coredns-76f75df574-ldch8"
Sep 4 17:41:36.609636 kubelet[2647]: I0904 17:41:36.608679 2647 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7927dd5-7723-4144-986a-e9e6b5ccba4f-config-volume\") pod \"coredns-76f75df574-7p9xw\" (UID: \"b7927dd5-7723-4144-986a-e9e6b5ccba4f\") " pod="kube-system/coredns-76f75df574-7p9xw" Sep 4 17:41:37.714881 kubelet[2647]: E0904 17:41:37.714147 2647 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Sep 4 17:41:37.714881 kubelet[2647]: E0904 17:41:37.714435 2647 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b7927dd5-7723-4144-986a-e9e6b5ccba4f-config-volume podName:b7927dd5-7723-4144-986a-e9e6b5ccba4f nodeName:}" failed. No retries permitted until 2024-09-04 17:41:38.21437363 +0000 UTC m=+34.486955682 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b7927dd5-7723-4144-986a-e9e6b5ccba4f-config-volume") pod "coredns-76f75df574-7p9xw" (UID: "b7927dd5-7723-4144-986a-e9e6b5ccba4f") : failed to sync configmap cache: timed out waiting for the condition Sep 4 17:41:37.715945 kubelet[2647]: E0904 17:41:37.714147 2647 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Sep 4 17:41:37.715945 kubelet[2647]: E0904 17:41:37.714970 2647 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e07e6c6-4294-4d00-b398-ff6034062c8e-config-volume podName:9e07e6c6-4294-4d00-b398-ff6034062c8e nodeName:}" failed. No retries permitted until 2024-09-04 17:41:38.214949934 +0000 UTC m=+34.487531986 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9e07e6c6-4294-4d00-b398-ff6034062c8e-config-volume") pod "coredns-76f75df574-ldch8" (UID: "9e07e6c6-4294-4d00-b398-ff6034062c8e") : failed to sync configmap cache: timed out waiting for the condition
Sep 4 17:41:38.270220 containerd[1451]: time="2024-09-04T17:41:38.270103188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7p9xw,Uid:b7927dd5-7723-4144-986a-e9e6b5ccba4f,Namespace:kube-system,Attempt:0,}" Sep 4 17:41:38.277923 containerd[1451]: time="2024-09-04T17:41:38.277555436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ldch8,Uid:9e07e6c6-4294-4d00-b398-ff6034062c8e,Namespace:kube-system,Attempt:0,}" Sep 4 17:41:39.542830 systemd-networkd[1367]: cilium_host: Link UP Sep 4 17:41:39.543222 systemd-networkd[1367]: cilium_net: Link UP Sep 4 17:41:39.547776 systemd-networkd[1367]: cilium_net: Gained carrier Sep 4 17:41:39.548276 systemd-networkd[1367]: cilium_host: Gained carrier Sep 4 17:41:40.030143 systemd-networkd[1367]: cilium_vxlan: Link UP Sep 4 17:41:40.030163 systemd-networkd[1367]: cilium_vxlan: Gained carrier Sep 4 17:41:40.339853 systemd-networkd[1367]: cilium_host: Gained IPv6LL Sep 4 17:41:40.468060 systemd-networkd[1367]: cilium_net: Gained IPv6LL Sep 4 17:41:41.016567 kernel: NET: Registered PF_ALG protocol family Sep 4 17:41:41.876847 systemd-networkd[1367]: cilium_vxlan: Gained IPv6LL Sep 4 17:41:42.163784 systemd-networkd[1367]: lxc_health: Link UP Sep 4 17:41:42.177446 systemd-networkd[1367]: lxc_health: Gained carrier Sep 4 17:41:42.425966 systemd-networkd[1367]: lxcd8ef85c80927: Link UP Sep 4 17:41:42.430575 kernel: eth0: renamed from tmp0b833 Sep 4 17:41:42.449090 systemd-networkd[1367]: lxcd8ef85c80927: Gained carrier Sep 4 17:41:42.450627 systemd-networkd[1367]: lxc6acc5c34bbc8: Link UP Sep 4 17:41:42.456684 kernel: eth0: renamed from tmpbbd61 Sep 4 17:41:42.462020 systemd-networkd[1367]: lxc6acc5c34bbc8: Gained carrier
Sep 4 17:41:42.566642 kubelet[2647]: I0904 17:41:42.566603 2647 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-pzh4x" podStartSLOduration=12.217127893 podStartE2EDuration="24.566541072s" podCreationTimestamp="2024-09-04 17:41:18 +0000 UTC" firstStartedPulling="2024-09-04 17:41:18.660646141 +0000 UTC m=+14.933228203" lastFinishedPulling="2024-09-04 17:41:31.01005928 +0000 UTC m=+27.282641382" observedRunningTime="2024-09-04 17:41:37.013025313 +0000 UTC m=+33.285607465" watchObservedRunningTime="2024-09-04 17:41:42.566541072 +0000 UTC m=+38.839123124" Sep 4 17:41:43.732276 systemd-networkd[1367]: lxc6acc5c34bbc8: Gained IPv6LL Sep 4 17:41:43.923794 systemd-networkd[1367]: lxc_health: Gained IPv6LL Sep 4 17:41:44.371942 systemd-networkd[1367]: lxcd8ef85c80927: Gained IPv6LL Sep 4 17:41:47.574228 containerd[1451]: time="2024-09-04T17:41:47.573769825Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:41:47.574228 containerd[1451]: time="2024-09-04T17:41:47.573876438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:41:47.574228 containerd[1451]: time="2024-09-04T17:41:47.573954684Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:41:47.574228 containerd[1451]: time="2024-09-04T17:41:47.573998612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:41:47.620747 systemd[1]: Started cri-containerd-0b83334ae1c9a79c3f2c9e911a84c059b7a7034513fd021c827684738a9ef05c.scope - libcontainer container 0b83334ae1c9a79c3f2c9e911a84c059b7a7034513fd021c827684738a9ef05c.
Sep 4 17:41:47.666368 containerd[1451]: time="2024-09-04T17:41:47.665537671Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:41:47.666368 containerd[1451]: time="2024-09-04T17:41:47.665624354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:41:47.666368 containerd[1451]: time="2024-09-04T17:41:47.665656979Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:41:47.666368 containerd[1451]: time="2024-09-04T17:41:47.665675276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:41:47.705708 systemd[1]: Started cri-containerd-bbd6145a77f1cb58c79bb8a26622eea4aaa52476e7e225665ac89634d453830f.scope - libcontainer container bbd6145a77f1cb58c79bb8a26622eea4aaa52476e7e225665ac89634d453830f. 
Sep 4 17:41:47.744444 containerd[1451]: time="2024-09-04T17:41:47.744213739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7p9xw,Uid:b7927dd5-7723-4144-986a-e9e6b5ccba4f,Namespace:kube-system,Attempt:0,} returns sandbox id \"0b83334ae1c9a79c3f2c9e911a84c059b7a7034513fd021c827684738a9ef05c\"" Sep 4 17:41:47.752560 containerd[1451]: time="2024-09-04T17:41:47.752287796Z" level=info msg="CreateContainer within sandbox \"0b83334ae1c9a79c3f2c9e911a84c059b7a7034513fd021c827684738a9ef05c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:41:47.787951 containerd[1451]: time="2024-09-04T17:41:47.787891286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ldch8,Uid:9e07e6c6-4294-4d00-b398-ff6034062c8e,Namespace:kube-system,Attempt:0,} returns sandbox id \"bbd6145a77f1cb58c79bb8a26622eea4aaa52476e7e225665ac89634d453830f\"" Sep 4 17:41:47.791340 containerd[1451]: time="2024-09-04T17:41:47.790776531Z" level=info msg="CreateContainer within sandbox \"0b83334ae1c9a79c3f2c9e911a84c059b7a7034513fd021c827684738a9ef05c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5bc8a5b59b258355cb2bdc513c7255244b79c2ce696ca2c07a3fe51aa16dd954\"" Sep 4 17:41:47.791789 containerd[1451]: time="2024-09-04T17:41:47.791470716Z" level=info msg="StartContainer for \"5bc8a5b59b258355cb2bdc513c7255244b79c2ce696ca2c07a3fe51aa16dd954\"" Sep 4 17:41:47.799366 containerd[1451]: time="2024-09-04T17:41:47.799265857Z" level=info msg="CreateContainer within sandbox \"bbd6145a77f1cb58c79bb8a26622eea4aaa52476e7e225665ac89634d453830f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:41:47.834880 systemd[1]: Started cri-containerd-5bc8a5b59b258355cb2bdc513c7255244b79c2ce696ca2c07a3fe51aa16dd954.scope - libcontainer container 5bc8a5b59b258355cb2bdc513c7255244b79c2ce696ca2c07a3fe51aa16dd954. 
Sep 4 17:41:47.844014 containerd[1451]: time="2024-09-04T17:41:47.843682308Z" level=info msg="CreateContainer within sandbox \"bbd6145a77f1cb58c79bb8a26622eea4aaa52476e7e225665ac89634d453830f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8f2654d607753a4c6eaba09cb40485caf9909569adf88e0572a782da97cb2f17\"" Sep 4 17:41:47.845470 containerd[1451]: time="2024-09-04T17:41:47.845363252Z" level=info msg="StartContainer for \"8f2654d607753a4c6eaba09cb40485caf9909569adf88e0572a782da97cb2f17\"" Sep 4 17:41:47.883239 systemd[1]: Started cri-containerd-8f2654d607753a4c6eaba09cb40485caf9909569adf88e0572a782da97cb2f17.scope - libcontainer container 8f2654d607753a4c6eaba09cb40485caf9909569adf88e0572a782da97cb2f17. Sep 4 17:41:47.910561 containerd[1451]: time="2024-09-04T17:41:47.909354804Z" level=info msg="StartContainer for \"5bc8a5b59b258355cb2bdc513c7255244b79c2ce696ca2c07a3fe51aa16dd954\" returns successfully" Sep 4 17:41:47.933391 containerd[1451]: time="2024-09-04T17:41:47.933250636Z" level=info msg="StartContainer for \"8f2654d607753a4c6eaba09cb40485caf9909569adf88e0572a782da97cb2f17\" returns successfully" Sep 4 17:41:47.983028 kubelet[2647]: I0904 17:41:47.982965 2647 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-7p9xw" podStartSLOduration=29.982907401 podStartE2EDuration="29.982907401s" podCreationTimestamp="2024-09-04 17:41:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:41:47.981304984 +0000 UTC m=+44.253887056" watchObservedRunningTime="2024-09-04 17:41:47.982907401 +0000 UTC m=+44.255489463" Sep 4 17:41:47.983871 kubelet[2647]: I0904 17:41:47.983194 2647 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-ldch8" podStartSLOduration=29.98316764 podStartE2EDuration="29.98316764s" podCreationTimestamp="2024-09-04 17:41:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:41:47.964463515 +0000 UTC m=+44.237045587" watchObservedRunningTime="2024-09-04 17:41:47.98316764 +0000 UTC m=+44.255749692"
Sep 4 17:41:48.597232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount349966432.mount: Deactivated successfully. Sep 4 17:42:16.320966 systemd[1]: Started sshd@9-172.24.4.44:22-172.24.4.1:44218.service - OpenSSH per-connection server daemon (172.24.4.1:44218). Sep 4 17:42:17.759266 sshd[4013]: Accepted publickey for core from 172.24.4.1 port 44218 ssh2: RSA SHA256:SturRzFslRD/T8wREGvsPcKnS9Jm32+wyVbRetuFUDw Sep 4 17:42:17.763493 sshd[4013]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:42:17.776638 systemd-logind[1430]: New session 12 of user core. Sep 4 17:42:17.785872 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 17:42:19.713167 sshd[4013]: pam_unix(sshd:session): session closed for user core Sep 4 17:42:19.721208 systemd[1]: sshd@9-172.24.4.44:22-172.24.4.1:44218.service: Deactivated successfully. Sep 4 17:42:19.726933 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 17:42:19.732975 systemd-logind[1430]: Session 12 logged out. Waiting for processes to exit. Sep 4 17:42:19.736136 systemd-logind[1430]: Removed session 12. Sep 4 17:42:24.735077 systemd[1]: Started sshd@10-172.24.4.44:22-172.24.4.1:32990.service - OpenSSH per-connection server daemon (172.24.4.1:32990). Sep 4 17:42:26.225737 sshd[4030]: Accepted publickey for core from 172.24.4.1 port 32990 ssh2: RSA SHA256:SturRzFslRD/T8wREGvsPcKnS9Jm32+wyVbRetuFUDw Sep 4 17:42:26.228812 sshd[4030]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:42:26.240357 systemd-logind[1430]: New session 13 of user core. Sep 4 17:42:26.246859 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 4 17:42:26.983119 sshd[4030]: pam_unix(sshd:session): session closed for user core Sep 4 17:42:26.991694 systemd[1]: sshd@10-172.24.4.44:22-172.24.4.1:32990.service: Deactivated successfully. Sep 4 17:42:26.997360 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 17:42:26.999653 systemd-logind[1430]: Session 13 logged out. Waiting for processes to exit. Sep 4 17:42:27.002647 systemd-logind[1430]: Removed session 13. Sep 4 17:42:32.007152 systemd[1]: Started sshd@11-172.24.4.44:22-172.24.4.1:33004.service - OpenSSH per-connection server daemon (172.24.4.1:33004). Sep 4 17:42:33.566034 sshd[4044]: Accepted publickey for core from 172.24.4.1 port 33004 ssh2: RSA SHA256:SturRzFslRD/T8wREGvsPcKnS9Jm32+wyVbRetuFUDw Sep 4 17:42:33.568447 sshd[4044]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:42:33.582452 systemd-logind[1430]: New session 14 of user core. Sep 4 17:42:33.588769 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 17:42:34.249641 sshd[4044]: pam_unix(sshd:session): session closed for user core Sep 4 17:42:34.260975 systemd[1]: sshd@11-172.24.4.44:22-172.24.4.1:33004.service: Deactivated successfully. Sep 4 17:42:34.270479 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 17:42:34.274847 systemd-logind[1430]: Session 14 logged out. Waiting for processes to exit. Sep 4 17:42:34.281977 systemd[1]: Started sshd@12-172.24.4.44:22-172.24.4.1:33014.service - OpenSSH per-connection server daemon (172.24.4.1:33014). Sep 4 17:42:34.285019 systemd-logind[1430]: Removed session 14. Sep 4 17:42:35.489332 sshd[4058]: Accepted publickey for core from 172.24.4.1 port 33014 ssh2: RSA SHA256:SturRzFslRD/T8wREGvsPcKnS9Jm32+wyVbRetuFUDw Sep 4 17:42:35.492461 sshd[4058]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:42:35.505370 systemd-logind[1430]: New session 15 of user core. 
Sep 4 17:42:35.511891 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 17:42:36.429736 sshd[4058]: pam_unix(sshd:session): session closed for user core Sep 4 17:42:36.438588 systemd-logind[1430]: Session 15 logged out. Waiting for processes to exit. Sep 4 17:42:36.438862 systemd[1]: sshd@12-172.24.4.44:22-172.24.4.1:33014.service: Deactivated successfully. Sep 4 17:42:36.445211 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 17:42:36.456246 systemd[1]: Started sshd@13-172.24.4.44:22-172.24.4.1:58108.service - OpenSSH per-connection server daemon (172.24.4.1:58108). Sep 4 17:42:36.458244 systemd-logind[1430]: Removed session 15. Sep 4 17:42:37.893622 sshd[4069]: Accepted publickey for core from 172.24.4.1 port 58108 ssh2: RSA SHA256:SturRzFslRD/T8wREGvsPcKnS9Jm32+wyVbRetuFUDw Sep 4 17:42:37.896389 sshd[4069]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:42:37.906421 systemd-logind[1430]: New session 16 of user core. Sep 4 17:42:37.913877 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 4 17:42:38.699627 sshd[4069]: pam_unix(sshd:session): session closed for user core Sep 4 17:42:38.705191 systemd[1]: sshd@13-172.24.4.44:22-172.24.4.1:58108.service: Deactivated successfully. Sep 4 17:42:38.709917 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 17:42:38.713867 systemd-logind[1430]: Session 16 logged out. Waiting for processes to exit. Sep 4 17:42:38.716364 systemd-logind[1430]: Removed session 16. Sep 4 17:42:43.726313 systemd[1]: Started sshd@14-172.24.4.44:22-172.24.4.1:58118.service - OpenSSH per-connection server daemon (172.24.4.1:58118). 
Sep 4 17:42:45.702420 sshd[4083]: Accepted publickey for core from 172.24.4.1 port 58118 ssh2: RSA SHA256:SturRzFslRD/T8wREGvsPcKnS9Jm32+wyVbRetuFUDw Sep 4 17:42:45.705484 sshd[4083]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:42:45.715963 systemd-logind[1430]: New session 17 of user core. Sep 4 17:42:45.725977 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 17:42:46.633943 sshd[4083]: pam_unix(sshd:session): session closed for user core Sep 4 17:42:46.642956 systemd[1]: sshd@14-172.24.4.44:22-172.24.4.1:58118.service: Deactivated successfully. Sep 4 17:42:46.648666 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 17:42:46.651320 systemd-logind[1430]: Session 17 logged out. Waiting for processes to exit. Sep 4 17:42:46.654809 systemd-logind[1430]: Removed session 17. Sep 4 17:42:51.664087 systemd[1]: Started sshd@15-172.24.4.44:22-172.24.4.1:43000.service - OpenSSH per-connection server daemon (172.24.4.1:43000). Sep 4 17:42:52.777911 sshd[4098]: Accepted publickey for core from 172.24.4.1 port 43000 ssh2: RSA SHA256:SturRzFslRD/T8wREGvsPcKnS9Jm32+wyVbRetuFUDw Sep 4 17:42:52.780446 sshd[4098]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:42:52.789457 systemd-logind[1430]: New session 18 of user core. Sep 4 17:42:52.798921 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 4 17:42:53.737050 sshd[4098]: pam_unix(sshd:session): session closed for user core Sep 4 17:42:53.746193 systemd[1]: sshd@15-172.24.4.44:22-172.24.4.1:43000.service: Deactivated successfully. Sep 4 17:42:53.750786 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 17:42:53.756326 systemd-logind[1430]: Session 18 logged out. Waiting for processes to exit. Sep 4 17:42:53.764453 systemd[1]: Started sshd@16-172.24.4.44:22-172.24.4.1:43008.service - OpenSSH per-connection server daemon (172.24.4.1:43008). 
Sep 4 17:42:53.768642 systemd-logind[1430]: Removed session 18. Sep 4 17:42:55.262138 sshd[4111]: Accepted publickey for core from 172.24.4.1 port 43008 ssh2: RSA SHA256:SturRzFslRD/T8wREGvsPcKnS9Jm32+wyVbRetuFUDw Sep 4 17:42:55.265143 sshd[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:42:55.283165 systemd-logind[1430]: New session 19 of user core. Sep 4 17:42:55.287749 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 4 17:42:56.498967 sshd[4111]: pam_unix(sshd:session): session closed for user core Sep 4 17:42:56.507137 systemd[1]: sshd@16-172.24.4.44:22-172.24.4.1:43008.service: Deactivated successfully. Sep 4 17:42:56.509131 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 17:42:56.511935 systemd-logind[1430]: Session 19 logged out. Waiting for processes to exit. Sep 4 17:42:56.517804 systemd[1]: Started sshd@17-172.24.4.44:22-172.24.4.1:53070.service - OpenSSH per-connection server daemon (172.24.4.1:53070). Sep 4 17:42:56.519571 systemd-logind[1430]: Removed session 19. Sep 4 17:42:57.833838 sshd[4122]: Accepted publickey for core from 172.24.4.1 port 53070 ssh2: RSA SHA256:SturRzFslRD/T8wREGvsPcKnS9Jm32+wyVbRetuFUDw Sep 4 17:42:57.836921 sshd[4122]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:42:57.849465 systemd-logind[1430]: New session 20 of user core. Sep 4 17:42:57.855882 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 4 17:43:00.591717 sshd[4122]: pam_unix(sshd:session): session closed for user core Sep 4 17:43:00.609867 systemd[1]: Started sshd@18-172.24.4.44:22-172.24.4.1:53078.service - OpenSSH per-connection server daemon (172.24.4.1:53078). Sep 4 17:43:00.616139 systemd[1]: sshd@17-172.24.4.44:22-172.24.4.1:53070.service: Deactivated successfully. Sep 4 17:43:00.619455 systemd[1]: session-20.scope: Deactivated successfully. Sep 4 17:43:00.626057 systemd-logind[1430]: Session 20 logged out. Waiting for processes to exit.
Sep 4 17:43:00.629680 systemd-logind[1430]: Removed session 20. Sep 4 17:43:01.863575 sshd[4139]: Accepted publickey for core from 172.24.4.1 port 53078 ssh2: RSA SHA256:SturRzFslRD/T8wREGvsPcKnS9Jm32+wyVbRetuFUDw Sep 4 17:43:01.867848 sshd[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:43:01.882009 systemd-logind[1430]: New session 21 of user core. Sep 4 17:43:01.893129 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 4 17:43:02.708167 sshd[4139]: pam_unix(sshd:session): session closed for user core Sep 4 17:43:02.717809 systemd[1]: sshd@18-172.24.4.44:22-172.24.4.1:53078.service: Deactivated successfully. Sep 4 17:43:02.720409 systemd[1]: session-21.scope: Deactivated successfully. Sep 4 17:43:02.722556 systemd-logind[1430]: Session 21 logged out. Waiting for processes to exit. Sep 4 17:43:02.730106 systemd[1]: Started sshd@19-172.24.4.44:22-172.24.4.1:53094.service - OpenSSH per-connection server daemon (172.24.4.1:53094). Sep 4 17:43:02.733585 systemd-logind[1430]: Removed session 21. Sep 4 17:43:04.079792 sshd[4152]: Accepted publickey for core from 172.24.4.1 port 53094 ssh2: RSA SHA256:SturRzFslRD/T8wREGvsPcKnS9Jm32+wyVbRetuFUDw Sep 4 17:43:04.083156 sshd[4152]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:43:04.093657 systemd-logind[1430]: New session 22 of user core. Sep 4 17:43:04.102881 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 4 17:43:04.847774 sshd[4152]: pam_unix(sshd:session): session closed for user core Sep 4 17:43:04.856119 systemd[1]: sshd@19-172.24.4.44:22-172.24.4.1:53094.service: Deactivated successfully. Sep 4 17:43:04.863443 systemd[1]: session-22.scope: Deactivated successfully. Sep 4 17:43:04.868698 systemd-logind[1430]: Session 22 logged out. Waiting for processes to exit. Sep 4 17:43:04.871262 systemd-logind[1430]: Removed session 22.
Sep 4 17:43:09.877800 systemd[1]: Started sshd@20-172.24.4.44:22-172.24.4.1:50256.service - OpenSSH per-connection server daemon (172.24.4.1:50256). Sep 4 17:43:11.397804 sshd[4170]: Accepted publickey for core from 172.24.4.1 port 50256 ssh2: RSA SHA256:SturRzFslRD/T8wREGvsPcKnS9Jm32+wyVbRetuFUDw Sep 4 17:43:11.401047 sshd[4170]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:43:11.411978 systemd-logind[1430]: New session 23 of user core. Sep 4 17:43:11.418841 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 4 17:43:12.207267 sshd[4170]: pam_unix(sshd:session): session closed for user core Sep 4 17:43:12.215086 systemd[1]: sshd@20-172.24.4.44:22-172.24.4.1:50256.service: Deactivated successfully. Sep 4 17:43:12.221835 systemd[1]: session-23.scope: Deactivated successfully. Sep 4 17:43:12.224328 systemd-logind[1430]: Session 23 logged out. Waiting for processes to exit. Sep 4 17:43:12.227977 systemd-logind[1430]: Removed session 23. Sep 4 17:43:17.230033 systemd[1]: Started sshd@21-172.24.4.44:22-172.24.4.1:57890.service - OpenSSH per-connection server daemon (172.24.4.1:57890). Sep 4 17:43:18.756131 sshd[4183]: Accepted publickey for core from 172.24.4.1 port 57890 ssh2: RSA SHA256:SturRzFslRD/T8wREGvsPcKnS9Jm32+wyVbRetuFUDw Sep 4 17:43:18.759111 sshd[4183]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:43:18.770920 systemd-logind[1430]: New session 24 of user core. Sep 4 17:43:18.776202 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 4 17:43:19.481990 sshd[4183]: pam_unix(sshd:session): session closed for user core Sep 4 17:43:19.489436 systemd[1]: sshd@21-172.24.4.44:22-172.24.4.1:57890.service: Deactivated successfully. Sep 4 17:43:19.494048 systemd[1]: session-24.scope: Deactivated successfully. Sep 4 17:43:19.496142 systemd-logind[1430]: Session 24 logged out. Waiting for processes to exit. 
Sep 4 17:43:19.499825 systemd-logind[1430]: Removed session 24. Sep 4 17:43:24.505159 systemd[1]: Started sshd@22-172.24.4.44:22-172.24.4.1:57896.service - OpenSSH per-connection server daemon (172.24.4.1:57896). Sep 4 17:43:25.838461 sshd[4199]: Accepted publickey for core from 172.24.4.1 port 57896 ssh2: RSA SHA256:SturRzFslRD/T8wREGvsPcKnS9Jm32+wyVbRetuFUDw Sep 4 17:43:25.841571 sshd[4199]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:43:25.854042 systemd-logind[1430]: New session 25 of user core. Sep 4 17:43:25.860945 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 4 17:43:26.598634 sshd[4199]: pam_unix(sshd:session): session closed for user core Sep 4 17:43:26.608500 systemd[1]: sshd@22-172.24.4.44:22-172.24.4.1:57896.service: Deactivated successfully. Sep 4 17:43:26.614703 systemd[1]: session-25.scope: Deactivated successfully. Sep 4 17:43:26.620426 systemd-logind[1430]: Session 25 logged out. Waiting for processes to exit. Sep 4 17:43:26.631263 systemd[1]: Started sshd@23-172.24.4.44:22-172.24.4.1:51478.service - OpenSSH per-connection server daemon (172.24.4.1:51478). Sep 4 17:43:26.635205 systemd-logind[1430]: Removed session 25. Sep 4 17:43:27.994363 sshd[4212]: Accepted publickey for core from 172.24.4.1 port 51478 ssh2: RSA SHA256:SturRzFslRD/T8wREGvsPcKnS9Jm32+wyVbRetuFUDw Sep 4 17:43:27.997374 sshd[4212]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:43:28.007760 systemd-logind[1430]: New session 26 of user core. Sep 4 17:43:28.012861 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 4 17:43:29.923814 systemd[1]: run-containerd-runc-k8s.io-7d4bae460283e7a4b66222cd255ad6d4aca8bed0f038418f5c644a2e5adc4776-runc.92yddx.mount: Deactivated successfully. 
Sep 4 17:43:29.959463 containerd[1451]: time="2024-09-04T17:43:29.959338013Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 17:43:29.999988 containerd[1451]: time="2024-09-04T17:43:29.999762101Z" level=info msg="StopContainer for \"7d4bae460283e7a4b66222cd255ad6d4aca8bed0f038418f5c644a2e5adc4776\" with timeout 2 (s)" Sep 4 17:43:30.002057 containerd[1451]: time="2024-09-04T17:43:30.000362432Z" level=info msg="StopContainer for \"12e7e0127e4562cd17f3327874e173c5e0b2e5844fc2d8c095e1bde5ac972de4\" with timeout 30 (s)" Sep 4 17:43:30.009998 containerd[1451]: time="2024-09-04T17:43:30.009961722Z" level=info msg="Stop container \"12e7e0127e4562cd17f3327874e173c5e0b2e5844fc2d8c095e1bde5ac972de4\" with signal terminated" Sep 4 17:43:30.010220 containerd[1451]: time="2024-09-04T17:43:30.010175986Z" level=info msg="Stop container \"7d4bae460283e7a4b66222cd255ad6d4aca8bed0f038418f5c644a2e5adc4776\" with signal terminated" Sep 4 17:43:30.024902 systemd-networkd[1367]: lxc_health: Link DOWN Sep 4 17:43:30.026421 systemd-networkd[1367]: lxc_health: Lost carrier Sep 4 17:43:30.033823 systemd[1]: cri-containerd-12e7e0127e4562cd17f3327874e173c5e0b2e5844fc2d8c095e1bde5ac972de4.scope: Deactivated successfully. Sep 4 17:43:30.049229 systemd[1]: cri-containerd-7d4bae460283e7a4b66222cd255ad6d4aca8bed0f038418f5c644a2e5adc4776.scope: Deactivated successfully. Sep 4 17:43:30.049502 systemd[1]: cri-containerd-7d4bae460283e7a4b66222cd255ad6d4aca8bed0f038418f5c644a2e5adc4776.scope: Consumed 9.738s CPU time. Sep 4 17:43:30.083201 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12e7e0127e4562cd17f3327874e173c5e0b2e5844fc2d8c095e1bde5ac972de4-rootfs.mount: Deactivated successfully. 
Sep 4 17:43:30.087833 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d4bae460283e7a4b66222cd255ad6d4aca8bed0f038418f5c644a2e5adc4776-rootfs.mount: Deactivated successfully.
Sep 4 17:43:30.091275 containerd[1451]: time="2024-09-04T17:43:30.091190462Z" level=info msg="shim disconnected" id=12e7e0127e4562cd17f3327874e173c5e0b2e5844fc2d8c095e1bde5ac972de4 namespace=k8s.io
Sep 4 17:43:30.091614 containerd[1451]: time="2024-09-04T17:43:30.091407161Z" level=warning msg="cleaning up after shim disconnected" id=12e7e0127e4562cd17f3327874e173c5e0b2e5844fc2d8c095e1bde5ac972de4 namespace=k8s.io
Sep 4 17:43:30.091614 containerd[1451]: time="2024-09-04T17:43:30.091426899Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:43:30.092922 containerd[1451]: time="2024-09-04T17:43:30.092727073Z" level=info msg="shim disconnected" id=7d4bae460283e7a4b66222cd255ad6d4aca8bed0f038418f5c644a2e5adc4776 namespace=k8s.io
Sep 4 17:43:30.092922 containerd[1451]: time="2024-09-04T17:43:30.092786658Z" level=warning msg="cleaning up after shim disconnected" id=7d4bae460283e7a4b66222cd255ad6d4aca8bed0f038418f5c644a2e5adc4776 namespace=k8s.io
Sep 4 17:43:30.092922 containerd[1451]: time="2024-09-04T17:43:30.092797610Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:43:30.114667 containerd[1451]: time="2024-09-04T17:43:30.114582672Z" level=warning msg="cleanup warnings time=\"2024-09-04T17:43:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 4 17:43:30.134927 containerd[1451]: time="2024-09-04T17:43:30.134620025Z" level=info msg="StopContainer for \"7d4bae460283e7a4b66222cd255ad6d4aca8bed0f038418f5c644a2e5adc4776\" returns successfully"
Sep 4 17:43:30.135997 containerd[1451]: time="2024-09-04T17:43:30.135664094Z" level=info msg="StopPodSandbox for \"af6aa40655824090231787e3a3df9e3e93365e891ba1be7b84687caef80dab7e\""
Sep 4 17:43:30.137286 containerd[1451]: time="2024-09-04T17:43:30.136209078Z" level=info msg="StopContainer for \"12e7e0127e4562cd17f3327874e173c5e0b2e5844fc2d8c095e1bde5ac972de4\" returns successfully"
Sep 4 17:43:30.137286 containerd[1451]: time="2024-09-04T17:43:30.136740766Z" level=info msg="StopPodSandbox for \"4e9b17cebc6451d7f7be8e6438c207483614deb6368b7c36482f90a0318702f9\""
Sep 4 17:43:30.139585 containerd[1451]: time="2024-09-04T17:43:30.136784510Z" level=info msg="Container to stop \"12e7e0127e4562cd17f3327874e173c5e0b2e5844fc2d8c095e1bde5ac972de4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 17:43:30.143241 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4e9b17cebc6451d7f7be8e6438c207483614deb6368b7c36482f90a0318702f9-shm.mount: Deactivated successfully.
Sep 4 17:43:30.144695 containerd[1451]: time="2024-09-04T17:43:30.135701666Z" level=info msg="Container to stop \"1f59aa3334e903cf7ab3e19742af20589d45658f0d65d56c6f37cdf0211d953f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 17:43:30.144695 containerd[1451]: time="2024-09-04T17:43:30.144691950Z" level=info msg="Container to stop \"f0be9ba441bd5ec5d48e85ad4495e2a49b530f915ec2fe0cdb26b347e0306fe6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 17:43:30.144801 containerd[1451]: time="2024-09-04T17:43:30.144707139Z" level=info msg="Container to stop \"4b2579e9ade6410db6faea0892f555d079c8ee38ccdb20f8921fe5011aad46c3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 17:43:30.144801 containerd[1451]: time="2024-09-04T17:43:30.144719904Z" level=info msg="Container to stop \"aaf2cf417ed0bb3050442ab3ff707d226223b410517101762e72b1e966edc3ba\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 17:43:30.144801 containerd[1451]: time="2024-09-04T17:43:30.144732848Z" level=info msg="Container to stop \"7d4bae460283e7a4b66222cd255ad6d4aca8bed0f038418f5c644a2e5adc4776\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 17:43:30.155733 systemd[1]: cri-containerd-af6aa40655824090231787e3a3df9e3e93365e891ba1be7b84687caef80dab7e.scope: Deactivated successfully.
Sep 4 17:43:30.166567 systemd[1]: cri-containerd-4e9b17cebc6451d7f7be8e6438c207483614deb6368b7c36482f90a0318702f9.scope: Deactivated successfully.
Sep 4 17:43:30.206697 containerd[1451]: time="2024-09-04T17:43:30.206476153Z" level=info msg="shim disconnected" id=af6aa40655824090231787e3a3df9e3e93365e891ba1be7b84687caef80dab7e namespace=k8s.io
Sep 4 17:43:30.206697 containerd[1451]: time="2024-09-04T17:43:30.206575935Z" level=warning msg="cleaning up after shim disconnected" id=af6aa40655824090231787e3a3df9e3e93365e891ba1be7b84687caef80dab7e namespace=k8s.io
Sep 4 17:43:30.206697 containerd[1451]: time="2024-09-04T17:43:30.206602617Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:43:30.214764 containerd[1451]: time="2024-09-04T17:43:30.214688852Z" level=info msg="shim disconnected" id=4e9b17cebc6451d7f7be8e6438c207483614deb6368b7c36482f90a0318702f9 namespace=k8s.io
Sep 4 17:43:30.215398 containerd[1451]: time="2024-09-04T17:43:30.214742516Z" level=warning msg="cleaning up after shim disconnected" id=4e9b17cebc6451d7f7be8e6438c207483614deb6368b7c36482f90a0318702f9 namespace=k8s.io
Sep 4 17:43:30.215398 containerd[1451]: time="2024-09-04T17:43:30.215144033Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:43:30.229707 containerd[1451]: time="2024-09-04T17:43:30.229096180Z" level=warning msg="cleanup warnings time=\"2024-09-04T17:43:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 4 17:43:30.231020 containerd[1451]: time="2024-09-04T17:43:30.230894808Z" level=info msg="TearDown network for sandbox \"af6aa40655824090231787e3a3df9e3e93365e891ba1be7b84687caef80dab7e\" successfully"
Sep 4 17:43:30.231020 containerd[1451]: time="2024-09-04T17:43:30.230923423Z" level=info msg="StopPodSandbox for \"af6aa40655824090231787e3a3df9e3e93365e891ba1be7b84687caef80dab7e\" returns successfully"
Sep 4 17:43:30.246987 containerd[1451]: time="2024-09-04T17:43:30.246905034Z" level=info msg="TearDown network for sandbox \"4e9b17cebc6451d7f7be8e6438c207483614deb6368b7c36482f90a0318702f9\" successfully"
Sep 4 17:43:30.246987 containerd[1451]: time="2024-09-04T17:43:30.246944761Z" level=info msg="StopPodSandbox for \"4e9b17cebc6451d7f7be8e6438c207483614deb6368b7c36482f90a0318702f9\" returns successfully"
Sep 4 17:43:30.320326 kubelet[2647]: I0904 17:43:30.318866 2647 scope.go:117] "RemoveContainer" containerID="7d4bae460283e7a4b66222cd255ad6d4aca8bed0f038418f5c644a2e5adc4776"
Sep 4 17:43:30.322065 kubelet[2647]: I0904 17:43:30.321163 2647 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c1f3db8c-5e00-4941-95f8-34a46b0462eb-bpf-maps\") pod \"c1f3db8c-5e00-4941-95f8-34a46b0462eb\" (UID: \"c1f3db8c-5e00-4941-95f8-34a46b0462eb\") "
Sep 4 17:43:30.322065 kubelet[2647]: I0904 17:43:30.321233 2647 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c1f3db8c-5e00-4941-95f8-34a46b0462eb-hostproc\") pod \"c1f3db8c-5e00-4941-95f8-34a46b0462eb\" (UID: \"c1f3db8c-5e00-4941-95f8-34a46b0462eb\") "
Sep 4 17:43:30.322065 kubelet[2647]: I0904 17:43:30.321274 2647 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c1f3db8c-5e00-4941-95f8-34a46b0462eb-host-proc-sys-net\") pod \"c1f3db8c-5e00-4941-95f8-34a46b0462eb\" (UID: \"c1f3db8c-5e00-4941-95f8-34a46b0462eb\") "
Sep 4 17:43:30.322065 kubelet[2647]: I0904 17:43:30.321330 2647 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9q54\" (UniqueName: \"kubernetes.io/projected/c1f3db8c-5e00-4941-95f8-34a46b0462eb-kube-api-access-v9q54\") pod \"c1f3db8c-5e00-4941-95f8-34a46b0462eb\" (UID: \"c1f3db8c-5e00-4941-95f8-34a46b0462eb\") "
Sep 4 17:43:30.322065 kubelet[2647]: I0904 17:43:30.321369 2647 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c1f3db8c-5e00-4941-95f8-34a46b0462eb-cilium-run\") pod \"c1f3db8c-5e00-4941-95f8-34a46b0462eb\" (UID: \"c1f3db8c-5e00-4941-95f8-34a46b0462eb\") "
Sep 4 17:43:30.322065 kubelet[2647]: I0904 17:43:30.321409 2647 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c1f3db8c-5e00-4941-95f8-34a46b0462eb-host-proc-sys-kernel\") pod \"c1f3db8c-5e00-4941-95f8-34a46b0462eb\" (UID: \"c1f3db8c-5e00-4941-95f8-34a46b0462eb\") "
Sep 4 17:43:30.323261 kubelet[2647]: I0904 17:43:30.321449 2647 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c1f3db8c-5e00-4941-95f8-34a46b0462eb-cilium-cgroup\") pod \"c1f3db8c-5e00-4941-95f8-34a46b0462eb\" (UID: \"c1f3db8c-5e00-4941-95f8-34a46b0462eb\") "
Sep 4 17:43:30.323261 kubelet[2647]: I0904 17:43:30.321494 2647 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c1f3db8c-5e00-4941-95f8-34a46b0462eb-cilium-config-path\") pod \"c1f3db8c-5e00-4941-95f8-34a46b0462eb\" (UID: \"c1f3db8c-5e00-4941-95f8-34a46b0462eb\") "
Sep 4 17:43:30.323261 kubelet[2647]: I0904 17:43:30.321618 2647 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c1f3db8c-5e00-4941-95f8-34a46b0462eb-xtables-lock\") pod \"c1f3db8c-5e00-4941-95f8-34a46b0462eb\" (UID: \"c1f3db8c-5e00-4941-95f8-34a46b0462eb\") "
Sep 4 17:43:30.323261 kubelet[2647]: I0904 17:43:30.321674 2647 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c1f3db8c-5e00-4941-95f8-34a46b0462eb-clustermesh-secrets\") pod \"c1f3db8c-5e00-4941-95f8-34a46b0462eb\" (UID: \"c1f3db8c-5e00-4941-95f8-34a46b0462eb\") "
Sep 4 17:43:30.323261 kubelet[2647]: I0904 17:43:30.321711 2647 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c1f3db8c-5e00-4941-95f8-34a46b0462eb-cni-path\") pod \"c1f3db8c-5e00-4941-95f8-34a46b0462eb\" (UID: \"c1f3db8c-5e00-4941-95f8-34a46b0462eb\") "
Sep 4 17:43:30.323261 kubelet[2647]: I0904 17:43:30.321750 2647 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c1f3db8c-5e00-4941-95f8-34a46b0462eb-hubble-tls\") pod \"c1f3db8c-5e00-4941-95f8-34a46b0462eb\" (UID: \"c1f3db8c-5e00-4941-95f8-34a46b0462eb\") "
Sep 4 17:43:30.323732 containerd[1451]: time="2024-09-04T17:43:30.322377985Z" level=info msg="RemoveContainer for \"7d4bae460283e7a4b66222cd255ad6d4aca8bed0f038418f5c644a2e5adc4776\""
Sep 4 17:43:30.323793 kubelet[2647]: I0904 17:43:30.321792 2647 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c1f3db8c-5e00-4941-95f8-34a46b0462eb-etc-cni-netd\") pod \"c1f3db8c-5e00-4941-95f8-34a46b0462eb\" (UID: \"c1f3db8c-5e00-4941-95f8-34a46b0462eb\") "
Sep 4 17:43:30.323793 kubelet[2647]: I0904 17:43:30.321830 2647 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1f3db8c-5e00-4941-95f8-34a46b0462eb-lib-modules\") pod \"c1f3db8c-5e00-4941-95f8-34a46b0462eb\" (UID: \"c1f3db8c-5e00-4941-95f8-34a46b0462eb\") "
Sep 4 17:43:30.330616 kubelet[2647]: I0904 17:43:30.325248 2647 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1f3db8c-5e00-4941-95f8-34a46b0462eb-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c1f3db8c-5e00-4941-95f8-34a46b0462eb" (UID: "c1f3db8c-5e00-4941-95f8-34a46b0462eb"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 4 17:43:30.330616 kubelet[2647]: I0904 17:43:30.329543 2647 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1f3db8c-5e00-4941-95f8-34a46b0462eb-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c1f3db8c-5e00-4941-95f8-34a46b0462eb" (UID: "c1f3db8c-5e00-4941-95f8-34a46b0462eb"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 4 17:43:30.330616 kubelet[2647]: I0904 17:43:30.329568 2647 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1f3db8c-5e00-4941-95f8-34a46b0462eb-hostproc" (OuterVolumeSpecName: "hostproc") pod "c1f3db8c-5e00-4941-95f8-34a46b0462eb" (UID: "c1f3db8c-5e00-4941-95f8-34a46b0462eb"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 4 17:43:30.330616 kubelet[2647]: I0904 17:43:30.329587 2647 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1f3db8c-5e00-4941-95f8-34a46b0462eb-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c1f3db8c-5e00-4941-95f8-34a46b0462eb" (UID: "c1f3db8c-5e00-4941-95f8-34a46b0462eb"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 4 17:43:30.330616 kubelet[2647]: I0904 17:43:30.327437 2647 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1f3db8c-5e00-4941-95f8-34a46b0462eb-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c1f3db8c-5e00-4941-95f8-34a46b0462eb" (UID: "c1f3db8c-5e00-4941-95f8-34a46b0462eb"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 4 17:43:30.340919 kubelet[2647]: I0904 17:43:30.340862 2647 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1f3db8c-5e00-4941-95f8-34a46b0462eb-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c1f3db8c-5e00-4941-95f8-34a46b0462eb" (UID: "c1f3db8c-5e00-4941-95f8-34a46b0462eb"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 4 17:43:30.340919 kubelet[2647]: I0904 17:43:30.340922 2647 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1f3db8c-5e00-4941-95f8-34a46b0462eb-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c1f3db8c-5e00-4941-95f8-34a46b0462eb" (UID: "c1f3db8c-5e00-4941-95f8-34a46b0462eb"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 4 17:43:30.341237 kubelet[2647]: I0904 17:43:30.341111 2647 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1f3db8c-5e00-4941-95f8-34a46b0462eb-cni-path" (OuterVolumeSpecName: "cni-path") pod "c1f3db8c-5e00-4941-95f8-34a46b0462eb" (UID: "c1f3db8c-5e00-4941-95f8-34a46b0462eb"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 4 17:43:30.341237 kubelet[2647]: I0904 17:43:30.341135 2647 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1f3db8c-5e00-4941-95f8-34a46b0462eb-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c1f3db8c-5e00-4941-95f8-34a46b0462eb" (UID: "c1f3db8c-5e00-4941-95f8-34a46b0462eb"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 4 17:43:30.345607 kubelet[2647]: I0904 17:43:30.345038 2647 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1f3db8c-5e00-4941-95f8-34a46b0462eb-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c1f3db8c-5e00-4941-95f8-34a46b0462eb" (UID: "c1f3db8c-5e00-4941-95f8-34a46b0462eb"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 4 17:43:30.356085 kubelet[2647]: I0904 17:43:30.356018 2647 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1f3db8c-5e00-4941-95f8-34a46b0462eb-kube-api-access-v9q54" (OuterVolumeSpecName: "kube-api-access-v9q54") pod "c1f3db8c-5e00-4941-95f8-34a46b0462eb" (UID: "c1f3db8c-5e00-4941-95f8-34a46b0462eb"). InnerVolumeSpecName "kube-api-access-v9q54". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 4 17:43:30.356258 kubelet[2647]: I0904 17:43:30.356189 2647 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1f3db8c-5e00-4941-95f8-34a46b0462eb-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c1f3db8c-5e00-4941-95f8-34a46b0462eb" (UID: "c1f3db8c-5e00-4941-95f8-34a46b0462eb"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 4 17:43:30.357162 kubelet[2647]: I0904 17:43:30.357060 2647 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1f3db8c-5e00-4941-95f8-34a46b0462eb-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c1f3db8c-5e00-4941-95f8-34a46b0462eb" (UID: "c1f3db8c-5e00-4941-95f8-34a46b0462eb"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 4 17:43:30.358615 kubelet[2647]: I0904 17:43:30.358204 2647 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1f3db8c-5e00-4941-95f8-34a46b0462eb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c1f3db8c-5e00-4941-95f8-34a46b0462eb" (UID: "c1f3db8c-5e00-4941-95f8-34a46b0462eb"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 4 17:43:30.360256 containerd[1451]: time="2024-09-04T17:43:30.360193032Z" level=info msg="RemoveContainer for \"7d4bae460283e7a4b66222cd255ad6d4aca8bed0f038418f5c644a2e5adc4776\" returns successfully"
Sep 4 17:43:30.361013 kubelet[2647]: I0904 17:43:30.360856 2647 scope.go:117] "RemoveContainer" containerID="aaf2cf417ed0bb3050442ab3ff707d226223b410517101762e72b1e966edc3ba"
Sep 4 17:43:30.362996 containerd[1451]: time="2024-09-04T17:43:30.362949823Z" level=info msg="RemoveContainer for \"aaf2cf417ed0bb3050442ab3ff707d226223b410517101762e72b1e966edc3ba\""
Sep 4 17:43:30.373342 containerd[1451]: time="2024-09-04T17:43:30.373221504Z" level=info msg="RemoveContainer for \"aaf2cf417ed0bb3050442ab3ff707d226223b410517101762e72b1e966edc3ba\" returns successfully"
Sep 4 17:43:30.373822 kubelet[2647]: I0904 17:43:30.373786 2647 scope.go:117] "RemoveContainer" containerID="4b2579e9ade6410db6faea0892f555d079c8ee38ccdb20f8921fe5011aad46c3"
Sep 4 17:43:30.375676 containerd[1451]: time="2024-09-04T17:43:30.375530969Z" level=info msg="RemoveContainer for \"4b2579e9ade6410db6faea0892f555d079c8ee38ccdb20f8921fe5011aad46c3\""
Sep 4 17:43:30.380352 containerd[1451]: time="2024-09-04T17:43:30.380308417Z" level=info msg="RemoveContainer for \"4b2579e9ade6410db6faea0892f555d079c8ee38ccdb20f8921fe5011aad46c3\" returns successfully"
Sep 4 17:43:30.380730 kubelet[2647]: I0904 17:43:30.380702 2647 scope.go:117] "RemoveContainer" containerID="f0be9ba441bd5ec5d48e85ad4495e2a49b530f915ec2fe0cdb26b347e0306fe6"
Sep 4 17:43:30.382468 containerd[1451]: time="2024-09-04T17:43:30.382233188Z" level=info msg="RemoveContainer for \"f0be9ba441bd5ec5d48e85ad4495e2a49b530f915ec2fe0cdb26b347e0306fe6\""
Sep 4 17:43:30.385943 containerd[1451]: time="2024-09-04T17:43:30.385870481Z" level=info msg="RemoveContainer for \"f0be9ba441bd5ec5d48e85ad4495e2a49b530f915ec2fe0cdb26b347e0306fe6\" returns successfully"
Sep 4 17:43:30.386314 kubelet[2647]: I0904 17:43:30.386234 2647 scope.go:117] "RemoveContainer" containerID="1f59aa3334e903cf7ab3e19742af20589d45658f0d65d56c6f37cdf0211d953f"
Sep 4 17:43:30.388152 containerd[1451]: time="2024-09-04T17:43:30.387793259Z" level=info msg="RemoveContainer for \"1f59aa3334e903cf7ab3e19742af20589d45658f0d65d56c6f37cdf0211d953f\""
Sep 4 17:43:30.391246 containerd[1451]: time="2024-09-04T17:43:30.391208843Z" level=info msg="RemoveContainer for \"1f59aa3334e903cf7ab3e19742af20589d45658f0d65d56c6f37cdf0211d953f\" returns successfully"
Sep 4 17:43:30.391562 kubelet[2647]: I0904 17:43:30.391539 2647 scope.go:117] "RemoveContainer" containerID="7d4bae460283e7a4b66222cd255ad6d4aca8bed0f038418f5c644a2e5adc4776"
Sep 4 17:43:30.392023 containerd[1451]: time="2024-09-04T17:43:30.391925159Z" level=error msg="ContainerStatus for \"7d4bae460283e7a4b66222cd255ad6d4aca8bed0f038418f5c644a2e5adc4776\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7d4bae460283e7a4b66222cd255ad6d4aca8bed0f038418f5c644a2e5adc4776\": not found"
Sep 4 17:43:30.408882 kubelet[2647]: E0904 17:43:30.408829 2647 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7d4bae460283e7a4b66222cd255ad6d4aca8bed0f038418f5c644a2e5adc4776\": not found" containerID="7d4bae460283e7a4b66222cd255ad6d4aca8bed0f038418f5c644a2e5adc4776"
Sep 4 17:43:30.423142 kubelet[2647]: I0904 17:43:30.422400 2647 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a6921760-dfc8-4677-be61-4956071481ac-cilium-config-path\") pod \"a6921760-dfc8-4677-be61-4956071481ac\" (UID: \"a6921760-dfc8-4677-be61-4956071481ac\") "
Sep 4 17:43:30.423142 kubelet[2647]: I0904 17:43:30.422532 2647 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7wpq\" (UniqueName: \"kubernetes.io/projected/a6921760-dfc8-4677-be61-4956071481ac-kube-api-access-l7wpq\") pod \"a6921760-dfc8-4677-be61-4956071481ac\" (UID: \"a6921760-dfc8-4677-be61-4956071481ac\") "
Sep 4 17:43:30.423142 kubelet[2647]: I0904 17:43:30.422576 2647 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c1f3db8c-5e00-4941-95f8-34a46b0462eb-cilium-config-path\") on node \"ci-3975-2-1-d-945344e89d.novalocal\" DevicePath \"\""
Sep 4 17:43:30.423142 kubelet[2647]: I0904 17:43:30.422591 2647 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c1f3db8c-5e00-4941-95f8-34a46b0462eb-xtables-lock\") on node \"ci-3975-2-1-d-945344e89d.novalocal\" DevicePath \"\""
Sep 4 17:43:30.423142 kubelet[2647]: I0904 17:43:30.422605 2647 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c1f3db8c-5e00-4941-95f8-34a46b0462eb-clustermesh-secrets\") on node \"ci-3975-2-1-d-945344e89d.novalocal\" DevicePath \"\""
Sep 4 17:43:30.423142 kubelet[2647]: I0904 17:43:30.422619 2647 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c1f3db8c-5e00-4941-95f8-34a46b0462eb-cni-path\") on node \"ci-3975-2-1-d-945344e89d.novalocal\" DevicePath \"\""
Sep 4 17:43:30.423142 kubelet[2647]: I0904 17:43:30.422654 2647 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c1f3db8c-5e00-4941-95f8-34a46b0462eb-hubble-tls\") on node \"ci-3975-2-1-d-945344e89d.novalocal\" DevicePath \"\""
Sep 4 17:43:30.423448 kubelet[2647]: I0904 17:43:30.422668 2647 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c1f3db8c-5e00-4941-95f8-34a46b0462eb-etc-cni-netd\") on node \"ci-3975-2-1-d-945344e89d.novalocal\" DevicePath \"\""
Sep 4 17:43:30.423448 kubelet[2647]: I0904 17:43:30.422680 2647 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1f3db8c-5e00-4941-95f8-34a46b0462eb-lib-modules\") on node \"ci-3975-2-1-d-945344e89d.novalocal\" DevicePath \"\""
Sep 4 17:43:30.423448 kubelet[2647]: I0904 17:43:30.422694 2647 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-v9q54\" (UniqueName: \"kubernetes.io/projected/c1f3db8c-5e00-4941-95f8-34a46b0462eb-kube-api-access-v9q54\") on node \"ci-3975-2-1-d-945344e89d.novalocal\" DevicePath \"\""
Sep 4 17:43:30.423448 kubelet[2647]: I0904 17:43:30.422708 2647 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c1f3db8c-5e00-4941-95f8-34a46b0462eb-cilium-run\") on node \"ci-3975-2-1-d-945344e89d.novalocal\" DevicePath \"\""
Sep 4 17:43:30.423448 kubelet[2647]: I0904 17:43:30.422722 2647 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c1f3db8c-5e00-4941-95f8-34a46b0462eb-bpf-maps\") on node \"ci-3975-2-1-d-945344e89d.novalocal\" DevicePath \"\""
Sep 4 17:43:30.423448 kubelet[2647]: I0904 17:43:30.422734 2647 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c1f3db8c-5e00-4941-95f8-34a46b0462eb-hostproc\") on node \"ci-3975-2-1-d-945344e89d.novalocal\" DevicePath \"\""
Sep 4 17:43:30.423448 kubelet[2647]: I0904 17:43:30.422746 2647 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c1f3db8c-5e00-4941-95f8-34a46b0462eb-host-proc-sys-net\") on node \"ci-3975-2-1-d-945344e89d.novalocal\" DevicePath \"\""
Sep 4 17:43:30.423683 kubelet[2647]: I0904 17:43:30.422759 2647 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c1f3db8c-5e00-4941-95f8-34a46b0462eb-host-proc-sys-kernel\") on node \"ci-3975-2-1-d-945344e89d.novalocal\" DevicePath \"\""
Sep 4 17:43:30.423683 kubelet[2647]: I0904 17:43:30.422773 2647 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c1f3db8c-5e00-4941-95f8-34a46b0462eb-cilium-cgroup\") on node \"ci-3975-2-1-d-945344e89d.novalocal\" DevicePath \"\""
Sep 4 17:43:30.427680 kubelet[2647]: I0904 17:43:30.427652 2647 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6921760-dfc8-4677-be61-4956071481ac-kube-api-access-l7wpq" (OuterVolumeSpecName: "kube-api-access-l7wpq") pod "a6921760-dfc8-4677-be61-4956071481ac" (UID: "a6921760-dfc8-4677-be61-4956071481ac"). InnerVolumeSpecName "kube-api-access-l7wpq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 4 17:43:30.431302 kubelet[2647]: I0904 17:43:30.431273 2647 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6921760-dfc8-4677-be61-4956071481ac-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a6921760-dfc8-4677-be61-4956071481ac" (UID: "a6921760-dfc8-4677-be61-4956071481ac"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 4 17:43:30.451422 kubelet[2647]: I0904 17:43:30.451311 2647 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7d4bae460283e7a4b66222cd255ad6d4aca8bed0f038418f5c644a2e5adc4776"} err="failed to get container status \"7d4bae460283e7a4b66222cd255ad6d4aca8bed0f038418f5c644a2e5adc4776\": rpc error: code = NotFound desc = an error occurred when try to find container \"7d4bae460283e7a4b66222cd255ad6d4aca8bed0f038418f5c644a2e5adc4776\": not found"
Sep 4 17:43:30.451422 kubelet[2647]: I0904 17:43:30.451430 2647 scope.go:117] "RemoveContainer" containerID="aaf2cf417ed0bb3050442ab3ff707d226223b410517101762e72b1e966edc3ba"
Sep 4 17:43:30.452182 containerd[1451]: time="2024-09-04T17:43:30.452021478Z" level=error msg="ContainerStatus for \"aaf2cf417ed0bb3050442ab3ff707d226223b410517101762e72b1e966edc3ba\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aaf2cf417ed0bb3050442ab3ff707d226223b410517101762e72b1e966edc3ba\": not found"
Sep 4 17:43:30.452478 kubelet[2647]: E0904 17:43:30.452325 2647 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aaf2cf417ed0bb3050442ab3ff707d226223b410517101762e72b1e966edc3ba\": not found" containerID="aaf2cf417ed0bb3050442ab3ff707d226223b410517101762e72b1e966edc3ba"
Sep 4 17:43:30.452478 kubelet[2647]: I0904 17:43:30.452373 2647 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"aaf2cf417ed0bb3050442ab3ff707d226223b410517101762e72b1e966edc3ba"} err="failed to get container status \"aaf2cf417ed0bb3050442ab3ff707d226223b410517101762e72b1e966edc3ba\": rpc error: code = NotFound desc = an error occurred when try to find container \"aaf2cf417ed0bb3050442ab3ff707d226223b410517101762e72b1e966edc3ba\": not found"
Sep 4 17:43:30.452478 kubelet[2647]: I0904 17:43:30.452390 2647 scope.go:117] "RemoveContainer" containerID="4b2579e9ade6410db6faea0892f555d079c8ee38ccdb20f8921fe5011aad46c3"
Sep 4 17:43:30.452807 containerd[1451]: time="2024-09-04T17:43:30.452755337Z" level=error msg="ContainerStatus for \"4b2579e9ade6410db6faea0892f555d079c8ee38ccdb20f8921fe5011aad46c3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4b2579e9ade6410db6faea0892f555d079c8ee38ccdb20f8921fe5011aad46c3\": not found"
Sep 4 17:43:30.453076 kubelet[2647]: E0904 17:43:30.453027 2647 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4b2579e9ade6410db6faea0892f555d079c8ee38ccdb20f8921fe5011aad46c3\": not found" containerID="4b2579e9ade6410db6faea0892f555d079c8ee38ccdb20f8921fe5011aad46c3"
Sep 4 17:43:30.453122 kubelet[2647]: I0904 17:43:30.453112 2647 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4b2579e9ade6410db6faea0892f555d079c8ee38ccdb20f8921fe5011aad46c3"} err="failed to get container status \"4b2579e9ade6410db6faea0892f555d079c8ee38ccdb20f8921fe5011aad46c3\": rpc error: code = NotFound desc = an error occurred when try to find container \"4b2579e9ade6410db6faea0892f555d079c8ee38ccdb20f8921fe5011aad46c3\": not found"
Sep 4 17:43:30.453155 kubelet[2647]: I0904 17:43:30.453140 2647 scope.go:117] "RemoveContainer" containerID="f0be9ba441bd5ec5d48e85ad4495e2a49b530f915ec2fe0cdb26b347e0306fe6"
Sep 4 17:43:30.453364 containerd[1451]: time="2024-09-04T17:43:30.453322704Z" level=error msg="ContainerStatus for \"f0be9ba441bd5ec5d48e85ad4495e2a49b530f915ec2fe0cdb26b347e0306fe6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f0be9ba441bd5ec5d48e85ad4495e2a49b530f915ec2fe0cdb26b347e0306fe6\": not found"
Sep 4 17:43:30.453666 kubelet[2647]: E0904 17:43:30.453621 2647 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f0be9ba441bd5ec5d48e85ad4495e2a49b530f915ec2fe0cdb26b347e0306fe6\": not found" containerID="f0be9ba441bd5ec5d48e85ad4495e2a49b530f915ec2fe0cdb26b347e0306fe6"
Sep 4 17:43:30.453726 kubelet[2647]: I0904 17:43:30.453701 2647 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f0be9ba441bd5ec5d48e85ad4495e2a49b530f915ec2fe0cdb26b347e0306fe6"} err="failed to get container status \"f0be9ba441bd5ec5d48e85ad4495e2a49b530f915ec2fe0cdb26b347e0306fe6\": rpc error: code = NotFound desc = an error occurred when try to find container \"f0be9ba441bd5ec5d48e85ad4495e2a49b530f915ec2fe0cdb26b347e0306fe6\": not found"
Sep 4 17:43:30.453765 kubelet[2647]: I0904 17:43:30.453735 2647 scope.go:117] "RemoveContainer" containerID="1f59aa3334e903cf7ab3e19742af20589d45658f0d65d56c6f37cdf0211d953f"
Sep 4 17:43:30.455907 containerd[1451]: time="2024-09-04T17:43:30.455816606Z" level=error msg="ContainerStatus for \"1f59aa3334e903cf7ab3e19742af20589d45658f0d65d56c6f37cdf0211d953f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1f59aa3334e903cf7ab3e19742af20589d45658f0d65d56c6f37cdf0211d953f\": not found"
Sep 4 17:43:30.465464 kubelet[2647]: E0904 17:43:30.464878 2647 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1f59aa3334e903cf7ab3e19742af20589d45658f0d65d56c6f37cdf0211d953f\": not found" containerID="1f59aa3334e903cf7ab3e19742af20589d45658f0d65d56c6f37cdf0211d953f"
Sep 4 17:43:30.465464 kubelet[2647]: I0904 17:43:30.464965 2647 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1f59aa3334e903cf7ab3e19742af20589d45658f0d65d56c6f37cdf0211d953f"} err="failed to get container status \"1f59aa3334e903cf7ab3e19742af20589d45658f0d65d56c6f37cdf0211d953f\": rpc error: code = NotFound desc = an error occurred when try to find container \"1f59aa3334e903cf7ab3e19742af20589d45658f0d65d56c6f37cdf0211d953f\": not found"
Sep 4 17:43:30.465464 kubelet[2647]: I0904 17:43:30.464983 2647 scope.go:117] "RemoveContainer" containerID="12e7e0127e4562cd17f3327874e173c5e0b2e5844fc2d8c095e1bde5ac972de4"
Sep 4 17:43:30.471284 containerd[1451]: time="2024-09-04T17:43:30.471250979Z" level=info msg="RemoveContainer for \"12e7e0127e4562cd17f3327874e173c5e0b2e5844fc2d8c095e1bde5ac972de4\""
Sep 4 17:43:30.478191 containerd[1451]: time="2024-09-04T17:43:30.478158675Z" level=info msg="RemoveContainer for \"12e7e0127e4562cd17f3327874e173c5e0b2e5844fc2d8c095e1bde5ac972de4\" returns successfully"
Sep 4 17:43:30.478732 kubelet[2647]: I0904 17:43:30.478537 2647 scope.go:117] "RemoveContainer" containerID="12e7e0127e4562cd17f3327874e173c5e0b2e5844fc2d8c095e1bde5ac972de4"
Sep 4 17:43:30.479083 containerd[1451]: time="2024-09-04T17:43:30.479054627Z" level=error msg="ContainerStatus for \"12e7e0127e4562cd17f3327874e173c5e0b2e5844fc2d8c095e1bde5ac972de4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"12e7e0127e4562cd17f3327874e173c5e0b2e5844fc2d8c095e1bde5ac972de4\": not found"
Sep 4 17:43:30.479387 kubelet[2647]: E0904 17:43:30.479360 2647 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"12e7e0127e4562cd17f3327874e173c5e0b2e5844fc2d8c095e1bde5ac972de4\": not found" containerID="12e7e0127e4562cd17f3327874e173c5e0b2e5844fc2d8c095e1bde5ac972de4"
Sep 4 17:43:30.479439 kubelet[2647]: I0904 17:43:30.479415 2647 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"12e7e0127e4562cd17f3327874e173c5e0b2e5844fc2d8c095e1bde5ac972de4"} err="failed to get container status \"12e7e0127e4562cd17f3327874e173c5e0b2e5844fc2d8c095e1bde5ac972de4\": rpc error: code = NotFound desc = an error occurred when try to find container \"12e7e0127e4562cd17f3327874e173c5e0b2e5844fc2d8c095e1bde5ac972de4\": not found"
Sep 4 17:43:30.523597 kubelet[2647]: I0904 17:43:30.523538 2647 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-l7wpq\" (UniqueName: \"kubernetes.io/projected/a6921760-dfc8-4677-be61-4956071481ac-kube-api-access-l7wpq\") on node \"ci-3975-2-1-d-945344e89d.novalocal\" DevicePath \"\""
Sep 4 17:43:30.523597 kubelet[2647]: I0904 17:43:30.523578 2647 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a6921760-dfc8-4677-be61-4956071481ac-cilium-config-path\") on node \"ci-3975-2-1-d-945344e89d.novalocal\" DevicePath \"\""
Sep 4 17:43:30.634682 systemd[1]: Removed slice kubepods-burstable-podc1f3db8c_5e00_4941_95f8_34a46b0462eb.slice - libcontainer container kubepods-burstable-podc1f3db8c_5e00_4941_95f8_34a46b0462eb.slice.
Sep 4 17:43:30.635045 systemd[1]: kubepods-burstable-podc1f3db8c_5e00_4941_95f8_34a46b0462eb.slice: Consumed 9.838s CPU time.
Sep 4 17:43:30.676399 systemd[1]: Removed slice kubepods-besteffort-poda6921760_dfc8_4677_be61_4956071481ac.slice - libcontainer container kubepods-besteffort-poda6921760_dfc8_4677_be61_4956071481ac.slice.
Sep 4 17:43:30.911241 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e9b17cebc6451d7f7be8e6438c207483614deb6368b7c36482f90a0318702f9-rootfs.mount: Deactivated successfully.
Sep 4 17:43:30.911496 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af6aa40655824090231787e3a3df9e3e93365e891ba1be7b84687caef80dab7e-rootfs.mount: Deactivated successfully.
Sep 4 17:43:30.911710 systemd[1]: var-lib-kubelet-pods-a6921760\x2ddfc8\x2d4677\x2dbe61\x2d4956071481ac-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl7wpq.mount: Deactivated successfully.
Sep 4 17:43:30.911882 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-af6aa40655824090231787e3a3df9e3e93365e891ba1be7b84687caef80dab7e-shm.mount: Deactivated successfully.
Sep 4 17:43:30.912140 systemd[1]: var-lib-kubelet-pods-c1f3db8c\x2d5e00\x2d4941\x2d95f8\x2d34a46b0462eb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv9q54.mount: Deactivated successfully.
Sep 4 17:43:30.912314 systemd[1]: var-lib-kubelet-pods-c1f3db8c\x2d5e00\x2d4941\x2d95f8\x2d34a46b0462eb-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Sep 4 17:43:30.912463 systemd[1]: var-lib-kubelet-pods-c1f3db8c\x2d5e00\x2d4941\x2d95f8\x2d34a46b0462eb-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Sep 4 17:43:32.070079 sshd[4212]: pam_unix(sshd:session): session closed for user core
Sep 4 17:43:32.072131 kubelet[2647]: I0904 17:43:32.071913 2647 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a6921760-dfc8-4677-be61-4956071481ac" path="/var/lib/kubelet/pods/a6921760-dfc8-4677-be61-4956071481ac/volumes"
Sep 4 17:43:32.074923 kubelet[2647]: I0904 17:43:32.074172 2647 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c1f3db8c-5e00-4941-95f8-34a46b0462eb" path="/var/lib/kubelet/pods/c1f3db8c-5e00-4941-95f8-34a46b0462eb/volumes"
Sep 4 17:43:32.083985 systemd[1]: sshd@23-172.24.4.44:22-172.24.4.1:51478.service: Deactivated successfully.
Sep 4 17:43:32.087279 systemd[1]: session-26.scope: Deactivated successfully.
Sep 4 17:43:32.090142 systemd-logind[1430]: Session 26 logged out. Waiting for processes to exit.
Sep 4 17:43:32.099134 systemd[1]: Started sshd@24-172.24.4.44:22-172.24.4.1:51482.service - OpenSSH per-connection server daemon (172.24.4.1:51482).
Sep 4 17:43:32.103797 systemd-logind[1430]: Removed session 26.
Sep 4 17:43:33.459924 sshd[4375]: Accepted publickey for core from 172.24.4.1 port 51482 ssh2: RSA SHA256:SturRzFslRD/T8wREGvsPcKnS9Jm32+wyVbRetuFUDw
Sep 4 17:43:33.462867 sshd[4375]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:43:33.475361 systemd-logind[1430]: New session 27 of user core.
Sep 4 17:43:33.480924 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 4 17:43:34.882073 kubelet[2647]: E0904 17:43:34.881986 2647 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 4 17:43:35.170737 kubelet[2647]: I0904 17:43:35.170585 2647 topology_manager.go:215] "Topology Admit Handler" podUID="89da23c1-697a-415c-ae2a-0e9c75910d56" podNamespace="kube-system" podName="cilium-rsn44"
Sep 4 17:43:35.170870 kubelet[2647]: E0904 17:43:35.170751 2647 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c1f3db8c-5e00-4941-95f8-34a46b0462eb" containerName="apply-sysctl-overwrites"
Sep 4 17:43:35.170870 kubelet[2647]: E0904 17:43:35.170770 2647 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c1f3db8c-5e00-4941-95f8-34a46b0462eb" containerName="mount-bpf-fs"
Sep 4 17:43:35.170870 kubelet[2647]: E0904 17:43:35.170779 2647 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c1f3db8c-5e00-4941-95f8-34a46b0462eb" containerName="cilium-agent"
Sep 4 17:43:35.170870 kubelet[2647]: E0904 17:43:35.170788 2647 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a6921760-dfc8-4677-be61-4956071481ac" containerName="cilium-operator"
Sep 4 17:43:35.170870 kubelet[2647]: E0904 17:43:35.170796 2647 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c1f3db8c-5e00-4941-95f8-34a46b0462eb" containerName="clean-cilium-state"
Sep 4 17:43:35.170870 kubelet[2647]: E0904 17:43:35.170807 2647 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c1f3db8c-5e00-4941-95f8-34a46b0462eb" containerName="mount-cgroup"
Sep 4 17:43:35.170870 kubelet[2647]: I0904 17:43:35.170843 2647 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6921760-dfc8-4677-be61-4956071481ac" containerName="cilium-operator"
Sep 4 17:43:35.170870 kubelet[2647]: I0904 17:43:35.170852 2647 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1f3db8c-5e00-4941-95f8-34a46b0462eb" containerName="cilium-agent"
Sep 4 17:43:35.196149 kubelet[2647]: I0904 17:43:35.195309 2647 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/89da23c1-697a-415c-ae2a-0e9c75910d56-xtables-lock\") pod \"cilium-rsn44\" (UID: \"89da23c1-697a-415c-ae2a-0e9c75910d56\") " pod="kube-system/cilium-rsn44"
Sep 4 17:43:35.196149 kubelet[2647]: I0904 17:43:35.195364 2647 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/89da23c1-697a-415c-ae2a-0e9c75910d56-cilium-cgroup\") pod \"cilium-rsn44\" (UID: \"89da23c1-697a-415c-ae2a-0e9c75910d56\") " pod="kube-system/cilium-rsn44"
Sep 4 17:43:35.196149 kubelet[2647]: I0904 17:43:35.195469 2647 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/89da23c1-697a-415c-ae2a-0e9c75910d56-etc-cni-netd\") pod \"cilium-rsn44\" (UID: \"89da23c1-697a-415c-ae2a-0e9c75910d56\") " pod="kube-system/cilium-rsn44"
Sep 4 17:43:35.196149 kubelet[2647]: I0904 17:43:35.195541 2647 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/89da23c1-697a-415c-ae2a-0e9c75910d56-hostproc\") pod \"cilium-rsn44\" (UID: \"89da23c1-697a-415c-ae2a-0e9c75910d56\") " pod="kube-system/cilium-rsn44"
Sep 4 17:43:35.196149 kubelet[2647]: I0904 17:43:35.195568 2647 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/89da23c1-697a-415c-ae2a-0e9c75910d56-lib-modules\") pod \"cilium-rsn44\" (UID: \"89da23c1-697a-415c-ae2a-0e9c75910d56\") " pod="kube-system/cilium-rsn44"
Sep 4 17:43:35.196149 kubelet[2647]: I0904 17:43:35.195594 2647 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/89da23c1-697a-415c-ae2a-0e9c75910d56-hubble-tls\") pod \"cilium-rsn44\" (UID: \"89da23c1-697a-415c-ae2a-0e9c75910d56\") " pod="kube-system/cilium-rsn44"
Sep 4 17:43:35.196391 kubelet[2647]: I0904 17:43:35.195674 2647 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/89da23c1-697a-415c-ae2a-0e9c75910d56-cilium-run\") pod \"cilium-rsn44\" (UID: \"89da23c1-697a-415c-ae2a-0e9c75910d56\") " pod="kube-system/cilium-rsn44"
Sep 4 17:43:35.196391 kubelet[2647]: I0904 17:43:35.195699 2647 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/89da23c1-697a-415c-ae2a-0e9c75910d56-cilium-config-path\") pod \"cilium-rsn44\" (UID: \"89da23c1-697a-415c-ae2a-0e9c75910d56\") " pod="kube-system/cilium-rsn44"
Sep 4 17:43:35.196391 kubelet[2647]: I0904 17:43:35.195723 2647 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/89da23c1-697a-415c-ae2a-0e9c75910d56-cilium-ipsec-secrets\") pod \"cilium-rsn44\" (UID: \"89da23c1-697a-415c-ae2a-0e9c75910d56\") " pod="kube-system/cilium-rsn44"
Sep 4 17:43:35.196391 kubelet[2647]: I0904 17:43:35.195764 2647 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/89da23c1-697a-415c-ae2a-0e9c75910d56-host-proc-sys-net\") pod \"cilium-rsn44\" (UID: \"89da23c1-697a-415c-ae2a-0e9c75910d56\") " pod="kube-system/cilium-rsn44"
Sep 4 17:43:35.196391 kubelet[2647]: I0904 17:43:35.195806 2647 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/89da23c1-697a-415c-ae2a-0e9c75910d56-host-proc-sys-kernel\") pod \"cilium-rsn44\" (UID: \"89da23c1-697a-415c-ae2a-0e9c75910d56\") " pod="kube-system/cilium-rsn44"
Sep 4 17:43:35.196548 kubelet[2647]: I0904 17:43:35.195831 2647 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/89da23c1-697a-415c-ae2a-0e9c75910d56-clustermesh-secrets\") pod \"cilium-rsn44\" (UID: \"89da23c1-697a-415c-ae2a-0e9c75910d56\") " pod="kube-system/cilium-rsn44"
Sep 4 17:43:35.196548 kubelet[2647]: I0904 17:43:35.195857 2647 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dnjp\" (UniqueName: \"kubernetes.io/projected/89da23c1-697a-415c-ae2a-0e9c75910d56-kube-api-access-4dnjp\") pod \"cilium-rsn44\" (UID: \"89da23c1-697a-415c-ae2a-0e9c75910d56\") " pod="kube-system/cilium-rsn44"
Sep 4 17:43:35.196548 kubelet[2647]: I0904 17:43:35.195881 2647 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/89da23c1-697a-415c-ae2a-0e9c75910d56-bpf-maps\") pod \"cilium-rsn44\" (UID: \"89da23c1-697a-415c-ae2a-0e9c75910d56\") " pod="kube-system/cilium-rsn44"
Sep 4 17:43:35.196548 kubelet[2647]: I0904 17:43:35.195904 2647 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/89da23c1-697a-415c-ae2a-0e9c75910d56-cni-path\") pod \"cilium-rsn44\" (UID: \"89da23c1-697a-415c-ae2a-0e9c75910d56\") " pod="kube-system/cilium-rsn44"
Sep 4 17:43:35.198261 systemd[1]: Created slice kubepods-burstable-pod89da23c1_697a_415c_ae2a_0e9c75910d56.slice - libcontainer container kubepods-burstable-pod89da23c1_697a_415c_ae2a_0e9c75910d56.slice.
Sep 4 17:43:35.250797 sshd[4375]: pam_unix(sshd:session): session closed for user core
Sep 4 17:43:35.262278 systemd[1]: sshd@24-172.24.4.44:22-172.24.4.1:51482.service: Deactivated successfully.
Sep 4 17:43:35.265649 systemd[1]: session-27.scope: Deactivated successfully.
Sep 4 17:43:35.266030 systemd[1]: session-27.scope: Consumed 1.149s CPU time.
Sep 4 17:43:35.267315 systemd-logind[1430]: Session 27 logged out. Waiting for processes to exit.
Sep 4 17:43:35.275843 systemd[1]: Started sshd@25-172.24.4.44:22-172.24.4.1:53788.service - OpenSSH per-connection server daemon (172.24.4.1:53788).
Sep 4 17:43:35.277654 systemd-logind[1430]: Removed session 27.
Sep 4 17:43:35.511875 containerd[1451]: time="2024-09-04T17:43:35.503309804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rsn44,Uid:89da23c1-697a-415c-ae2a-0e9c75910d56,Namespace:kube-system,Attempt:0,}"
Sep 4 17:43:35.550602 containerd[1451]: time="2024-09-04T17:43:35.550351926Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:43:35.551433 containerd[1451]: time="2024-09-04T17:43:35.551030089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:43:35.551433 containerd[1451]: time="2024-09-04T17:43:35.551070677Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:43:35.551433 containerd[1451]: time="2024-09-04T17:43:35.551085756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:43:35.575711 systemd[1]: Started cri-containerd-9e458b5903a52f7f3892f971d01f4d55138fbd29011a2d94126fe9fab90bf99a.scope - libcontainer container 9e458b5903a52f7f3892f971d01f4d55138fbd29011a2d94126fe9fab90bf99a.
Sep 4 17:43:35.616598 containerd[1451]: time="2024-09-04T17:43:35.614948389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rsn44,Uid:89da23c1-697a-415c-ae2a-0e9c75910d56,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e458b5903a52f7f3892f971d01f4d55138fbd29011a2d94126fe9fab90bf99a\""
Sep 4 17:43:35.620461 containerd[1451]: time="2024-09-04T17:43:35.620241952Z" level=info msg="CreateContainer within sandbox \"9e458b5903a52f7f3892f971d01f4d55138fbd29011a2d94126fe9fab90bf99a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 4 17:43:35.638079 containerd[1451]: time="2024-09-04T17:43:35.637992932Z" level=info msg="CreateContainer within sandbox \"9e458b5903a52f7f3892f971d01f4d55138fbd29011a2d94126fe9fab90bf99a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c2596ca1bd2ee3b07eea5c39aee70aa9316be51645cb00627cf32cff618bc9cc\""
Sep 4 17:43:35.639295 containerd[1451]: time="2024-09-04T17:43:35.639252801Z" level=info msg="StartContainer for \"c2596ca1bd2ee3b07eea5c39aee70aa9316be51645cb00627cf32cff618bc9cc\""
Sep 4 17:43:35.683733 systemd[1]: Started cri-containerd-c2596ca1bd2ee3b07eea5c39aee70aa9316be51645cb00627cf32cff618bc9cc.scope - libcontainer container c2596ca1bd2ee3b07eea5c39aee70aa9316be51645cb00627cf32cff618bc9cc.
Sep 4 17:43:35.720124 containerd[1451]: time="2024-09-04T17:43:35.720067847Z" level=info msg="StartContainer for \"c2596ca1bd2ee3b07eea5c39aee70aa9316be51645cb00627cf32cff618bc9cc\" returns successfully"
Sep 4 17:43:35.753277 systemd[1]: cri-containerd-c2596ca1bd2ee3b07eea5c39aee70aa9316be51645cb00627cf32cff618bc9cc.scope: Deactivated successfully.
Sep 4 17:43:35.816594 containerd[1451]: time="2024-09-04T17:43:35.816419721Z" level=info msg="shim disconnected" id=c2596ca1bd2ee3b07eea5c39aee70aa9316be51645cb00627cf32cff618bc9cc namespace=k8s.io
Sep 4 17:43:35.816594 containerd[1451]: time="2024-09-04T17:43:35.816477413Z" level=warning msg="cleaning up after shim disconnected" id=c2596ca1bd2ee3b07eea5c39aee70aa9316be51645cb00627cf32cff618bc9cc namespace=k8s.io
Sep 4 17:43:35.816594 containerd[1451]: time="2024-09-04T17:43:35.816489145Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:43:35.830905 containerd[1451]: time="2024-09-04T17:43:35.830853872Z" level=warning msg="cleanup warnings time=\"2024-09-04T17:43:35Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 4 17:43:36.415461 containerd[1451]: time="2024-09-04T17:43:36.415329925Z" level=info msg="CreateContainer within sandbox \"9e458b5903a52f7f3892f971d01f4d55138fbd29011a2d94126fe9fab90bf99a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 4 17:43:36.442086 containerd[1451]: time="2024-09-04T17:43:36.441959154Z" level=info msg="CreateContainer within sandbox \"9e458b5903a52f7f3892f971d01f4d55138fbd29011a2d94126fe9fab90bf99a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"19ef81c3ec733b32d6fa63643799f1a6fecb622a098ba2aa180d7c1bf106650e\""
Sep 4 17:43:36.444485 containerd[1451]: time="2024-09-04T17:43:36.443508122Z" level=info msg="StartContainer for \"19ef81c3ec733b32d6fa63643799f1a6fecb622a098ba2aa180d7c1bf106650e\""
Sep 4 17:43:36.504921 systemd[1]: run-containerd-runc-k8s.io-19ef81c3ec733b32d6fa63643799f1a6fecb622a098ba2aa180d7c1bf106650e-runc.FFmokj.mount: Deactivated successfully.
Sep 4 17:43:36.512666 systemd[1]: Started cri-containerd-19ef81c3ec733b32d6fa63643799f1a6fecb622a098ba2aa180d7c1bf106650e.scope - libcontainer container 19ef81c3ec733b32d6fa63643799f1a6fecb622a098ba2aa180d7c1bf106650e.
Sep 4 17:43:36.545187 containerd[1451]: time="2024-09-04T17:43:36.545121716Z" level=info msg="StartContainer for \"19ef81c3ec733b32d6fa63643799f1a6fecb622a098ba2aa180d7c1bf106650e\" returns successfully"
Sep 4 17:43:36.554467 sshd[4387]: Accepted publickey for core from 172.24.4.1 port 53788 ssh2: RSA SHA256:SturRzFslRD/T8wREGvsPcKnS9Jm32+wyVbRetuFUDw
Sep 4 17:43:36.554427 sshd[4387]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:43:36.562502 systemd-logind[1430]: New session 28 of user core.
Sep 4 17:43:36.564725 systemd[1]: cri-containerd-19ef81c3ec733b32d6fa63643799f1a6fecb622a098ba2aa180d7c1bf106650e.scope: Deactivated successfully.
Sep 4 17:43:36.571420 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 4 17:43:36.604180 containerd[1451]: time="2024-09-04T17:43:36.604090184Z" level=info msg="shim disconnected" id=19ef81c3ec733b32d6fa63643799f1a6fecb622a098ba2aa180d7c1bf106650e namespace=k8s.io
Sep 4 17:43:36.604180 containerd[1451]: time="2024-09-04T17:43:36.604170099Z" level=warning msg="cleaning up after shim disconnected" id=19ef81c3ec733b32d6fa63643799f1a6fecb622a098ba2aa180d7c1bf106650e namespace=k8s.io
Sep 4 17:43:36.604180 containerd[1451]: time="2024-09-04T17:43:36.604184266Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:43:37.303661 sshd[4387]: pam_unix(sshd:session): session closed for user core
Sep 4 17:43:37.319471 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19ef81c3ec733b32d6fa63643799f1a6fecb622a098ba2aa180d7c1bf106650e-rootfs.mount: Deactivated successfully.
Sep 4 17:43:37.322622 systemd[1]: sshd@25-172.24.4.44:22-172.24.4.1:53788.service: Deactivated successfully.
Sep 4 17:43:37.327063 systemd[1]: session-28.scope: Deactivated successfully.
Sep 4 17:43:37.329985 systemd-logind[1430]: Session 28 logged out. Waiting for processes to exit.
Sep 4 17:43:37.341170 systemd[1]: Started sshd@26-172.24.4.44:22-172.24.4.1:53794.service - OpenSSH per-connection server daemon (172.24.4.1:53794).
Sep 4 17:43:37.343824 systemd-logind[1430]: Removed session 28.
Sep 4 17:43:37.426080 containerd[1451]: time="2024-09-04T17:43:37.425893709Z" level=info msg="CreateContainer within sandbox \"9e458b5903a52f7f3892f971d01f4d55138fbd29011a2d94126fe9fab90bf99a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 4 17:43:37.489906 containerd[1451]: time="2024-09-04T17:43:37.489849198Z" level=info msg="CreateContainer within sandbox \"9e458b5903a52f7f3892f971d01f4d55138fbd29011a2d94126fe9fab90bf99a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bacbcca85414ab90495545f886a36d00b8403c877d4fad15d45e7db9d7658f16\""
Sep 4 17:43:37.491275 containerd[1451]: time="2024-09-04T17:43:37.491245761Z" level=info msg="StartContainer for \"bacbcca85414ab90495545f886a36d00b8403c877d4fad15d45e7db9d7658f16\""
Sep 4 17:43:37.491625 kubelet[2647]: I0904 17:43:37.491601 2647 setters.go:568] "Node became not ready" node="ci-3975-2-1-d-945344e89d.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-09-04T17:43:37Z","lastTransitionTime":"2024-09-04T17:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 4 17:43:37.555686 systemd[1]: Started cri-containerd-bacbcca85414ab90495545f886a36d00b8403c877d4fad15d45e7db9d7658f16.scope - libcontainer container bacbcca85414ab90495545f886a36d00b8403c877d4fad15d45e7db9d7658f16.
Sep 4 17:43:37.599452 containerd[1451]: time="2024-09-04T17:43:37.599337733Z" level=info msg="StartContainer for \"bacbcca85414ab90495545f886a36d00b8403c877d4fad15d45e7db9d7658f16\" returns successfully"
Sep 4 17:43:37.607401 systemd[1]: cri-containerd-bacbcca85414ab90495545f886a36d00b8403c877d4fad15d45e7db9d7658f16.scope: Deactivated successfully.
Sep 4 17:43:37.642754 containerd[1451]: time="2024-09-04T17:43:37.642650613Z" level=info msg="shim disconnected" id=bacbcca85414ab90495545f886a36d00b8403c877d4fad15d45e7db9d7658f16 namespace=k8s.io
Sep 4 17:43:37.642754 containerd[1451]: time="2024-09-04T17:43:37.642736069Z" level=warning msg="cleaning up after shim disconnected" id=bacbcca85414ab90495545f886a36d00b8403c877d4fad15d45e7db9d7658f16 namespace=k8s.io
Sep 4 17:43:37.642754 containerd[1451]: time="2024-09-04T17:43:37.642749475Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:43:38.319948 systemd[1]: run-containerd-runc-k8s.io-bacbcca85414ab90495545f886a36d00b8403c877d4fad15d45e7db9d7658f16-runc.gsGft5.mount: Deactivated successfully.
Sep 4 17:43:38.320606 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bacbcca85414ab90495545f886a36d00b8403c877d4fad15d45e7db9d7658f16-rootfs.mount: Deactivated successfully.
Sep 4 17:43:38.445119 containerd[1451]: time="2024-09-04T17:43:38.444712984Z" level=info msg="CreateContainer within sandbox \"9e458b5903a52f7f3892f971d01f4d55138fbd29011a2d94126fe9fab90bf99a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 4 17:43:38.631745 containerd[1451]: time="2024-09-04T17:43:38.630086329Z" level=info msg="CreateContainer within sandbox \"9e458b5903a52f7f3892f971d01f4d55138fbd29011a2d94126fe9fab90bf99a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"80139cf0bab07eee70925462d6213158cf94ac42bea58679757d9ce72f1ba039\""
Sep 4 17:43:38.636389 containerd[1451]: time="2024-09-04T17:43:38.636267765Z" level=info msg="StartContainer for \"80139cf0bab07eee70925462d6213158cf94ac42bea58679757d9ce72f1ba039\""
Sep 4 17:43:38.638024 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1947207730.mount: Deactivated successfully.
Sep 4 17:43:38.682180 sshd[4563]: Accepted publickey for core from 172.24.4.1 port 53794 ssh2: RSA SHA256:SturRzFslRD/T8wREGvsPcKnS9Jm32+wyVbRetuFUDw
Sep 4 17:43:38.683942 sshd[4563]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:43:38.695759 systemd-logind[1430]: New session 29 of user core.
Sep 4 17:43:38.702416 systemd[1]: Started session-29.scope - Session 29 of User core.
Sep 4 17:43:38.719724 systemd[1]: Started cri-containerd-80139cf0bab07eee70925462d6213158cf94ac42bea58679757d9ce72f1ba039.scope - libcontainer container 80139cf0bab07eee70925462d6213158cf94ac42bea58679757d9ce72f1ba039.
Sep 4 17:43:38.756340 systemd[1]: cri-containerd-80139cf0bab07eee70925462d6213158cf94ac42bea58679757d9ce72f1ba039.scope: Deactivated successfully.
Sep 4 17:43:38.761008 containerd[1451]: time="2024-09-04T17:43:38.760911090Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89da23c1_697a_415c_ae2a_0e9c75910d56.slice/cri-containerd-80139cf0bab07eee70925462d6213158cf94ac42bea58679757d9ce72f1ba039.scope/memory.events\": no such file or directory"
Sep 4 17:43:38.763583 containerd[1451]: time="2024-09-04T17:43:38.763422862Z" level=info msg="StartContainer for \"80139cf0bab07eee70925462d6213158cf94ac42bea58679757d9ce72f1ba039\" returns successfully"
Sep 4 17:43:38.796249 containerd[1451]: time="2024-09-04T17:43:38.796168766Z" level=info msg="shim disconnected" id=80139cf0bab07eee70925462d6213158cf94ac42bea58679757d9ce72f1ba039 namespace=k8s.io
Sep 4 17:43:38.796249 containerd[1451]: time="2024-09-04T17:43:38.796239825Z" level=warning msg="cleaning up after shim disconnected" id=80139cf0bab07eee70925462d6213158cf94ac42bea58679757d9ce72f1ba039 namespace=k8s.io
Sep 4 17:43:38.796249 containerd[1451]: time="2024-09-04T17:43:38.796251937Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:43:39.317479 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-80139cf0bab07eee70925462d6213158cf94ac42bea58679757d9ce72f1ba039-rootfs.mount: Deactivated successfully.
Sep 4 17:43:39.438937 containerd[1451]: time="2024-09-04T17:43:39.438559431Z" level=info msg="CreateContainer within sandbox \"9e458b5903a52f7f3892f971d01f4d55138fbd29011a2d94126fe9fab90bf99a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 4 17:43:39.463240 containerd[1451]: time="2024-09-04T17:43:39.463136197Z" level=info msg="CreateContainer within sandbox \"9e458b5903a52f7f3892f971d01f4d55138fbd29011a2d94126fe9fab90bf99a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"61b6e3525496573497aa18712ef8d78bf83382d16a2590fde0d8eccf4989fe8f\""
Sep 4 17:43:39.466582 containerd[1451]: time="2024-09-04T17:43:39.464910803Z" level=info msg="StartContainer for \"61b6e3525496573497aa18712ef8d78bf83382d16a2590fde0d8eccf4989fe8f\""
Sep 4 17:43:39.504707 systemd[1]: Started cri-containerd-61b6e3525496573497aa18712ef8d78bf83382d16a2590fde0d8eccf4989fe8f.scope - libcontainer container 61b6e3525496573497aa18712ef8d78bf83382d16a2590fde0d8eccf4989fe8f.
Sep 4 17:43:39.546294 containerd[1451]: time="2024-09-04T17:43:39.543681680Z" level=info msg="StartContainer for \"61b6e3525496573497aa18712ef8d78bf83382d16a2590fde0d8eccf4989fe8f\" returns successfully"
Sep 4 17:43:40.387576 kernel: cryptd: max_cpu_qlen set to 1000
Sep 4 17:43:40.455584 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic))))
Sep 4 17:43:43.974226 systemd-networkd[1367]: lxc_health: Link UP
Sep 4 17:43:43.977192 systemd-networkd[1367]: lxc_health: Gained carrier
Sep 4 17:43:44.291136 systemd[1]: run-containerd-runc-k8s.io-61b6e3525496573497aa18712ef8d78bf83382d16a2590fde0d8eccf4989fe8f-runc.HOV4oi.mount: Deactivated successfully.
Sep 4 17:43:45.589059 systemd-networkd[1367]: lxc_health: Gained IPv6LL
Sep 4 17:43:45.602124 kubelet[2647]: I0904 17:43:45.602053 2647 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-rsn44" podStartSLOduration=10.601999746 podStartE2EDuration="10.601999746s" podCreationTimestamp="2024-09-04 17:43:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:43:40.475213085 +0000 UTC m=+156.747795147" watchObservedRunningTime="2024-09-04 17:43:45.601999746 +0000 UTC m=+161.874581798"
Sep 4 17:43:48.815385 systemd[1]: run-containerd-runc-k8s.io-61b6e3525496573497aa18712ef8d78bf83382d16a2590fde0d8eccf4989fe8f-runc.QI8rDz.mount: Deactivated successfully.
Sep 4 17:43:51.393934 sshd[4563]: pam_unix(sshd:session): session closed for user core
Sep 4 17:43:51.400081 systemd[1]: sshd@26-172.24.4.44:22-172.24.4.1:53794.service: Deactivated successfully.
Sep 4 17:43:51.405338 systemd[1]: session-29.scope: Deactivated successfully.
Sep 4 17:43:51.409667 systemd-logind[1430]: Session 29 logged out. Waiting for processes to exit.
Sep 4 17:43:51.413096 systemd-logind[1430]: Removed session 29.