Jun 25 18:34:44.055643 kernel: Linux version 6.6.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Tue Jun 25 17:21:28 -00 2024 Jun 25 18:34:44.055669 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8 Jun 25 18:34:44.055682 kernel: BIOS-provided physical RAM map: Jun 25 18:34:44.055690 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jun 25 18:34:44.055697 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jun 25 18:34:44.055704 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jun 25 18:34:44.055772 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable Jun 25 18:34:44.055781 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved Jun 25 18:34:44.055789 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jun 25 18:34:44.055801 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jun 25 18:34:44.055809 kernel: NX (Execute Disable) protection: active Jun 25 18:34:44.055816 kernel: APIC: Static calls initialized Jun 25 18:34:44.055824 kernel: SMBIOS 2.8 present. Jun 25 18:34:44.055833 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014 Jun 25 18:34:44.055842 kernel: Hypervisor detected: KVM Jun 25 18:34:44.055853 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jun 25 18:34:44.055861 kernel: kvm-clock: using sched offset of 5414052162 cycles Jun 25 18:34:44.055869 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jun 25 18:34:44.055878 kernel: tsc: Detected 1996.249 MHz processor Jun 25 18:34:44.055887 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 25 18:34:44.055896 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 25 18:34:44.055905 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 Jun 25 18:34:44.055914 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jun 25 18:34:44.055923 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 25 18:34:44.055934 kernel: ACPI: Early table checksum verification disabled Jun 25 18:34:44.055942 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS ) Jun 25 18:34:44.055951 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 18:34:44.055960 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 18:34:44.055968 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 18:34:44.055977 kernel: ACPI: FACS 0x000000007FFE0000 000040 Jun 25 18:34:44.055985 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 18:34:44.055994 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 18:34:44.056003 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f] Jun 25 18:34:44.056013 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b] Jun 25 18:34:44.056022 kernel: ACPI: Reserving FACS 
table memory at [mem 0x7ffe0000-0x7ffe003f] Jun 25 18:34:44.056030 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17a0-0x7ffe181f] Jun 25 18:34:44.056039 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847] Jun 25 18:34:44.056047 kernel: No NUMA configuration found Jun 25 18:34:44.056056 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff] Jun 25 18:34:44.056065 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff] Jun 25 18:34:44.056077 kernel: Zone ranges: Jun 25 18:34:44.056087 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 25 18:34:44.056096 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff] Jun 25 18:34:44.056105 kernel: Normal empty Jun 25 18:34:44.056114 kernel: Movable zone start for each node Jun 25 18:34:44.056123 kernel: Early memory node ranges Jun 25 18:34:44.056132 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jun 25 18:34:44.056143 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff] Jun 25 18:34:44.056152 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff] Jun 25 18:34:44.056161 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 25 18:34:44.056170 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jun 25 18:34:44.056180 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges Jun 25 18:34:44.056188 kernel: ACPI: PM-Timer IO Port: 0x608 Jun 25 18:34:44.056198 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jun 25 18:34:44.056207 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jun 25 18:34:44.056216 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jun 25 18:34:44.056225 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jun 25 18:34:44.056236 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jun 25 18:34:44.056245 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jun 25 18:34:44.056254 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jun 25 18:34:44.056263 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 25 18:34:44.056272 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jun 25 18:34:44.056282 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jun 25 18:34:44.056291 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Jun 25 18:34:44.056300 kernel: Booting paravirtualized kernel on KVM Jun 25 18:34:44.056309 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 25 18:34:44.056320 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jun 25 18:34:44.056330 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576 Jun 25 18:34:44.056339 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152 Jun 25 18:34:44.056348 kernel: pcpu-alloc: [0] 0 1 Jun 25 18:34:44.056357 kernel: kvm-guest: PV spinlocks disabled, no host support Jun 25 18:34:44.056368 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8 Jun 25 18:34:44.056378 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will 
be passed to user space. Jun 25 18:34:44.056387 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 25 18:34:44.056398 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jun 25 18:34:44.056407 kernel: Fallback order for Node 0: 0 Jun 25 18:34:44.056416 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515805 Jun 25 18:34:44.056425 kernel: Policy zone: DMA32 Jun 25 18:34:44.056434 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 25 18:34:44.056444 kernel: Memory: 1965068K/2096620K available (12288K kernel code, 2302K rwdata, 22636K rodata, 49384K init, 1964K bss, 131292K reserved, 0K cma-reserved) Jun 25 18:34:44.056453 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 25 18:34:44.056462 kernel: ftrace: allocating 37650 entries in 148 pages Jun 25 18:34:44.056474 kernel: ftrace: allocated 148 pages with 3 groups Jun 25 18:34:44.056483 kernel: Dynamic Preempt: voluntary Jun 25 18:34:44.056492 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 25 18:34:44.056501 kernel: rcu: RCU event tracing is enabled. Jun 25 18:34:44.056511 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 25 18:34:44.056520 kernel: Trampoline variant of Tasks RCU enabled. Jun 25 18:34:44.056529 kernel: Rude variant of Tasks RCU enabled. Jun 25 18:34:44.056538 kernel: Tracing variant of Tasks RCU enabled. Jun 25 18:34:44.056547 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 25 18:34:44.056556 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 25 18:34:44.056568 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jun 25 18:34:44.056577 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 25 18:34:44.056586 kernel: Console: colour VGA+ 80x25 Jun 25 18:34:44.056595 kernel: printk: console [tty0] enabled Jun 25 18:34:44.056604 kernel: printk: console [ttyS0] enabled Jun 25 18:34:44.056613 kernel: ACPI: Core revision 20230628 Jun 25 18:34:44.056622 kernel: APIC: Switch to symmetric I/O mode setup Jun 25 18:34:44.056631 kernel: x2apic enabled Jun 25 18:34:44.056640 kernel: APIC: Switched APIC routing to: physical x2apic Jun 25 18:34:44.056651 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jun 25 18:34:44.056660 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jun 25 18:34:44.056669 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249) Jun 25 18:34:44.056678 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jun 25 18:34:44.056687 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jun 25 18:34:44.056696 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 25 18:34:44.056705 kernel: Spectre V2 : Mitigation: Retpolines Jun 25 18:34:44.057570 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jun 25 18:34:44.057582 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jun 25 18:34:44.057595 kernel: Speculative Store Bypass: Vulnerable Jun 25 18:34:44.057603 kernel: x86/fpu: x87 FPU will use FXSAVE Jun 25 18:34:44.057612 kernel: Freeing SMP alternatives memory: 32K Jun 25 18:34:44.057621 kernel: pid_max: default: 32768 minimum: 301 Jun 25 18:34:44.057629 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Jun 25 18:34:44.057638 kernel: SELinux: Initializing. 
Jun 25 18:34:44.057646 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 25 18:34:44.057655 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 25 18:34:44.057673 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) Jun 25 18:34:44.057683 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jun 25 18:34:44.057692 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jun 25 18:34:44.057703 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jun 25 18:34:44.058344 kernel: Performance Events: AMD PMU driver. Jun 25 18:34:44.058356 kernel: ... version: 0 Jun 25 18:34:44.058366 kernel: ... bit width: 48 Jun 25 18:34:44.058375 kernel: ... generic registers: 4 Jun 25 18:34:44.058388 kernel: ... value mask: 0000ffffffffffff Jun 25 18:34:44.058397 kernel: ... max period: 00007fffffffffff Jun 25 18:34:44.058406 kernel: ... fixed-purpose events: 0 Jun 25 18:34:44.058415 kernel: ... event mask: 000000000000000f Jun 25 18:34:44.058424 kernel: signal: max sigframe size: 1440 Jun 25 18:34:44.058434 kernel: rcu: Hierarchical SRCU implementation. Jun 25 18:34:44.058443 kernel: rcu: Max phase no-delay instances is 400. Jun 25 18:34:44.058453 kernel: smp: Bringing up secondary CPUs ... Jun 25 18:34:44.058462 kernel: smpboot: x86: Booting SMP configuration: Jun 25 18:34:44.058472 kernel: .... node #0, CPUs: #1 Jun 25 18:34:44.058483 kernel: smp: Brought up 1 node, 2 CPUs Jun 25 18:34:44.058492 kernel: smpboot: Max logical packages: 2 Jun 25 18:34:44.058501 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) Jun 25 18:34:44.058510 kernel: devtmpfs: initialized Jun 25 18:34:44.058519 kernel: x86/mm: Memory block size: 128MB Jun 25 18:34:44.058529 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 25 18:34:44.058538 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 25 18:34:44.058548 kernel: pinctrl core: initialized pinctrl subsystem Jun 25 18:34:44.058557 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 25 18:34:44.058569 kernel: audit: initializing netlink subsys (disabled) Jun 25 18:34:44.058578 kernel: audit: type=2000 audit(1719340482.538:1): state=initialized audit_enabled=0 res=1 Jun 25 18:34:44.058587 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 25 18:34:44.058596 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 25 18:34:44.058606 kernel: cpuidle: using governor menu Jun 25 18:34:44.058615 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 25 18:34:44.058624 kernel: dca service started, version 1.12.1 Jun 25 18:34:44.058633 kernel: PCI: Using configuration type 1 for base access Jun 25 18:34:44.058643 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jun 25 18:34:44.058654 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 25 18:34:44.058664 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 25 18:34:44.058673 kernel: ACPI: Added _OSI(Module Device) Jun 25 18:34:44.058682 kernel: ACPI: Added _OSI(Processor Device) Jun 25 18:34:44.058691 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jun 25 18:34:44.058700 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 25 18:34:44.058731 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 25 18:34:44.058743 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jun 25 18:34:44.058752 kernel: ACPI: Interpreter enabled Jun 25 18:34:44.058765 kernel: ACPI: PM: (supports S0 S3 S5) Jun 25 18:34:44.058774 kernel: ACPI: Using IOAPIC for interrupt routing Jun 25 18:34:44.058784 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 25 18:34:44.058794 kernel: PCI: Using E820 reservations for host bridge windows Jun 25 18:34:44.058803 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jun 25 18:34:44.058813 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jun 25 18:34:44.058959 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jun 25 18:34:44.059058 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jun 25 18:34:44.059156 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jun 25 18:34:44.059172 kernel: acpiphp: Slot [3] registered Jun 25 18:34:44.059184 kernel: acpiphp: Slot [4] registered Jun 25 18:34:44.059193 kernel: acpiphp: Slot [5] registered Jun 25 18:34:44.059202 kernel: acpiphp: Slot [6] registered Jun 25 18:34:44.059212 kernel: acpiphp: Slot [7] registered Jun 25 18:34:44.059221 kernel: acpiphp: Slot [8] registered Jun 25 18:34:44.059230 kernel: acpiphp: Slot [9] registered Jun 25 18:34:44.059242 kernel: acpiphp: Slot [10] registered Jun 25 18:34:44.059252 kernel: acpiphp: Slot [11] registered Jun 25 18:34:44.059261 kernel: acpiphp: Slot [12] registered Jun 25 18:34:44.059270 kernel: acpiphp: Slot [13] registered Jun 25 18:34:44.059280 kernel: acpiphp: Slot [14] registered Jun 25 18:34:44.059289 kernel: acpiphp: Slot [15] registered Jun 25 18:34:44.059298 kernel: acpiphp: Slot [16] registered Jun 25 18:34:44.059307 kernel: acpiphp: Slot [17] registered Jun 25 18:34:44.059316 kernel: acpiphp: Slot [18] registered Jun 25 18:34:44.059325 kernel: acpiphp: Slot [19] registered Jun 25 18:34:44.059336 kernel: acpiphp: Slot [20] registered Jun 25 18:34:44.059346 kernel: acpiphp: Slot [21] registered Jun 25 18:34:44.059355 kernel: acpiphp: Slot [22] registered Jun 25 18:34:44.059364 kernel: acpiphp: Slot [23] registered Jun 25 18:34:44.059373 kernel: acpiphp: Slot [24] registered Jun 25 18:34:44.059383 kernel: acpiphp: Slot [25] registered Jun 25 18:34:44.059392 kernel: acpiphp: Slot [26] registered Jun 25 18:34:44.059401 kernel: acpiphp: Slot [27] registered Jun 25 18:34:44.059410 kernel: acpiphp: Slot [28] registered Jun 25 18:34:44.059421 kernel: acpiphp: Slot [29] registered Jun 25 18:34:44.059430 kernel: acpiphp: Slot [30] registered Jun 25 18:34:44.059440 kernel: acpiphp: Slot [31] registered Jun 25 18:34:44.059462 kernel: PCI host bridge to bus 0000:00 Jun 25 18:34:44.059560 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jun 25 18:34:44.059645 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff 
window] Jun 25 18:34:44.059766 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jun 25 18:34:44.059856 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jun 25 18:34:44.059945 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Jun 25 18:34:44.060026 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jun 25 18:34:44.060153 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jun 25 18:34:44.060257 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jun 25 18:34:44.060361 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jun 25 18:34:44.060456 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] Jun 25 18:34:44.060554 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jun 25 18:34:44.060648 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jun 25 18:34:44.063815 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jun 25 18:34:44.063927 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jun 25 18:34:44.064029 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jun 25 18:34:44.064126 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jun 25 18:34:44.064218 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jun 25 18:34:44.064325 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Jun 25 18:34:44.064417 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Jun 25 18:34:44.064512 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Jun 25 18:34:44.064605 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] Jun 25 18:34:44.064697 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] Jun 25 18:34:44.065909 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jun 25 18:34:44.066021 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jun 25 18:34:44.066119 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] Jun 25 18:34:44.066210 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] Jun 25 18:34:44.066301 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Jun 25 18:34:44.066391 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] Jun 25 18:34:44.066490 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Jun 25 18:34:44.066583 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Jun 25 18:34:44.066679 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] Jun 25 18:34:44.069513 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Jun 25 18:34:44.069619 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 Jun 25 18:34:44.069735 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] Jun 25 18:34:44.069834 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Jun 25 18:34:44.069932 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 Jun 25 18:34:44.070023 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] Jun 25 18:34:44.070126 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Jun 25 18:34:44.070140 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jun 25 18:34:44.070151 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jun 25 18:34:44.070161 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jun 25 18:34:44.070171 kernel: ACPI: PCI: Interrupt link LNKD 
configured for IRQ 11 Jun 25 18:34:44.070181 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jun 25 18:34:44.070190 kernel: iommu: Default domain type: Translated Jun 25 18:34:44.070200 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 25 18:34:44.070210 kernel: PCI: Using ACPI for IRQ routing Jun 25 18:34:44.070223 kernel: PCI: pci_cache_line_size set to 64 bytes Jun 25 18:34:44.070233 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jun 25 18:34:44.070242 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] Jun 25 18:34:44.070331 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jun 25 18:34:44.070423 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jun 25 18:34:44.070513 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jun 25 18:34:44.070528 kernel: vgaarb: loaded Jun 25 18:34:44.070538 kernel: clocksource: Switched to clocksource kvm-clock Jun 25 18:34:44.070548 kernel: VFS: Disk quotas dquot_6.6.0 Jun 25 18:34:44.070561 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 25 18:34:44.070571 kernel: pnp: PnP ACPI init Jun 25 18:34:44.070664 kernel: pnp 00:03: [dma 2] Jun 25 18:34:44.070679 kernel: pnp: PnP ACPI: found 5 devices Jun 25 18:34:44.070689 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jun 25 18:34:44.070699 kernel: NET: Registered PF_INET protocol family Jun 25 18:34:44.070756 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 25 18:34:44.070780 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jun 25 18:34:44.070794 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 25 18:34:44.070804 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 25 18:34:44.070814 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jun 25 18:34:44.070824 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jun 25 18:34:44.070834 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 25 18:34:44.070843 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 25 18:34:44.070853 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 25 18:34:44.070862 kernel: NET: Registered PF_XDP protocol family Jun 25 18:34:44.070953 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jun 25 18:34:44.071038 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jun 25 18:34:44.071117 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jun 25 18:34:44.071197 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jun 25 18:34:44.071276 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Jun 25 18:34:44.071370 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jun 25 18:34:44.071483 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jun 25 18:34:44.071498 kernel: PCI: CLS 0 bytes, default 64 Jun 25 18:34:44.071508 kernel: Initialise system trusted keyrings Jun 25 18:34:44.071522 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jun 25 18:34:44.071531 kernel: Key type asymmetric registered Jun 25 18:34:44.071541 kernel: Asymmetric key parser 'x509' registered Jun 25 18:34:44.071550 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jun 25 18:34:44.071560 kernel: io scheduler mq-deadline registered 
Jun 25 18:34:44.071569 kernel: io scheduler kyber registered Jun 25 18:34:44.071578 kernel: io scheduler bfq registered Jun 25 18:34:44.071588 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 25 18:34:44.071598 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jun 25 18:34:44.071610 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jun 25 18:34:44.071619 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jun 25 18:34:44.071629 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jun 25 18:34:44.071638 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 25 18:34:44.071648 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 25 18:34:44.071657 kernel: random: crng init done Jun 25 18:34:44.071667 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jun 25 18:34:44.071676 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jun 25 18:34:44.071686 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jun 25 18:34:44.072850 kernel: rtc_cmos 00:04: RTC can wake from S4 Jun 25 18:34:44.072871 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jun 25 18:34:44.072954 kernel: rtc_cmos 00:04: registered as rtc0 Jun 25 18:34:44.073039 kernel: rtc_cmos 00:04: setting system clock to 2024-06-25T18:34:43 UTC (1719340483) Jun 25 18:34:44.073121 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Jun 25 18:34:44.073137 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jun 25 18:34:44.073147 kernel: NET: Registered PF_INET6 protocol family Jun 25 18:34:44.073157 kernel: Segment Routing with IPv6 Jun 25 18:34:44.073171 kernel: In-situ OAM (IOAM) with IPv6 Jun 25 18:34:44.073180 kernel: NET: Registered PF_PACKET protocol family Jun 25 18:34:44.073190 kernel: Key type dns_resolver registered Jun 25 18:34:44.073200 kernel: IPI shorthand broadcast: enabled Jun 25 18:34:44.073209 kernel: sched_clock: Marking stable (946008498, 126130185)->(1076565048, -4426365) Jun 25 18:34:44.073218 kernel: registered taskstats version 1 Jun 25 18:34:44.073228 kernel: Loading compiled-in X.509 certificates Jun 25 18:34:44.073238 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.35-flatcar: 60204e9db5f484c670a1c92aec37e9a0c4d3ae90' Jun 25 18:34:44.073247 kernel: Key type .fscrypt registered Jun 25 18:34:44.073258 kernel: Key type fscrypt-provisioning registered Jun 25 18:34:44.073268 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jun 25 18:34:44.073277 kernel: ima: Allocated hash algorithm: sha1 Jun 25 18:34:44.073287 kernel: ima: No architecture policies found Jun 25 18:34:44.073296 kernel: clk: Disabling unused clocks Jun 25 18:34:44.073305 kernel: Freeing unused kernel image (initmem) memory: 49384K Jun 25 18:34:44.073314 kernel: Write protecting the kernel read-only data: 36864k Jun 25 18:34:44.073324 kernel: Freeing unused kernel image (rodata/data gap) memory: 1940K Jun 25 18:34:44.073335 kernel: Run /init as init process Jun 25 18:34:44.073345 kernel: with arguments: Jun 25 18:34:44.073354 kernel: /init Jun 25 18:34:44.073363 kernel: with environment: Jun 25 18:34:44.073372 kernel: HOME=/ Jun 25 18:34:44.073381 kernel: TERM=linux Jun 25 18:34:44.073391 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 25 18:34:44.073403 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jun 25 18:34:44.073418 systemd[1]: Detected virtualization kvm. Jun 25 18:34:44.073429 systemd[1]: Detected architecture x86-64. Jun 25 18:34:44.073439 systemd[1]: Running in initrd. Jun 25 18:34:44.073449 systemd[1]: No hostname configured, using default hostname. Jun 25 18:34:44.073459 systemd[1]: Hostname set to . Jun 25 18:34:44.073470 systemd[1]: Initializing machine ID from VM UUID. Jun 25 18:34:44.073480 systemd[1]: Queued start job for default target initrd.target. Jun 25 18:34:44.073490 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:34:44.073503 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:34:44.073514 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 25 18:34:44.073524 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 25 18:34:44.073535 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 25 18:34:44.073545 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 25 18:34:44.073557 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 25 18:34:44.073567 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 25 18:34:44.073580 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:34:44.073590 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:34:44.073600 systemd[1]: Reached target paths.target - Path Units. Jun 25 18:34:44.073610 systemd[1]: Reached target slices.target - Slice Units. Jun 25 18:34:44.073630 systemd[1]: Reached target swap.target - Swaps. Jun 25 18:34:44.073642 systemd[1]: Reached target timers.target - Timer Units. Jun 25 18:34:44.073655 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 18:34:44.073665 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 18:34:44.073675 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 25 18:34:44.073686 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Jun 25 18:34:44.073696 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:34:44.073706 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 18:34:44.074761 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:34:44.074774 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 18:34:44.074786 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 25 18:34:44.074801 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 18:34:44.074811 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 25 18:34:44.074822 systemd[1]: Starting systemd-fsck-usr.service... Jun 25 18:34:44.074833 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 18:34:44.074843 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 18:34:44.074854 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:34:44.074865 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 25 18:34:44.074876 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:34:44.074888 systemd[1]: Finished systemd-fsck-usr.service. Jun 25 18:34:44.074920 systemd-journald[184]: Collecting audit messages is disabled. Jun 25 18:34:44.074952 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 25 18:34:44.074968 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 18:34:44.074982 systemd-journald[184]: Journal started Jun 25 18:34:44.075005 systemd-journald[184]: Runtime Journal (/run/log/journal/7791bedb3e844684b2db2d1992901620) is 4.9M, max 39.3M, 34.4M free. Jun 25 18:34:44.082555 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 18:34:44.085289 systemd-modules-load[186]: Inserted module 'overlay' Jun 25 18:34:44.122132 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 25 18:34:44.122159 kernel: Bridge firewalling registered Jun 25 18:34:44.121604 systemd-modules-load[186]: Inserted module 'br_netfilter' Jun 25 18:34:44.128730 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 18:34:44.127487 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 18:34:44.128139 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:34:44.135896 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:34:44.137728 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:34:44.141843 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 18:34:44.144094 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:34:44.155753 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:34:44.158667 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:34:44.160641 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 25 18:34:44.164965 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. 
Jun 25 18:34:44.175845 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 18:34:44.182039 dracut-cmdline[220]: dracut-dracut-053 Jun 25 18:34:44.184165 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8 Jun 25 18:34:44.216417 systemd-resolved[222]: Positive Trust Anchors: Jun 25 18:34:44.216436 systemd-resolved[222]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 18:34:44.216482 systemd-resolved[222]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jun 25 18:34:44.223968 systemd-resolved[222]: Defaulting to hostname 'linux'. Jun 25 18:34:44.225013 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 18:34:44.228059 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:34:44.260782 kernel: SCSI subsystem initialized Jun 25 18:34:44.272771 kernel: Loading iSCSI transport class v2.0-870. Jun 25 18:34:44.286769 kernel: iscsi: registered transport (tcp) Jun 25 18:34:44.314961 kernel: iscsi: registered transport (qla4xxx) Jun 25 18:34:44.315035 kernel: QLogic iSCSI HBA Driver Jun 25 18:34:44.375661 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 25 18:34:44.381892 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 25 18:34:44.441107 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 25 18:34:44.441411 kernel: device-mapper: uevent: version 1.0.3 Jun 25 18:34:44.441447 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jun 25 18:34:44.493847 kernel: raid6: sse2x4 gen() 13016 MB/s Jun 25 18:34:44.510811 kernel: raid6: sse2x2 gen() 14731 MB/s Jun 25 18:34:44.527931 kernel: raid6: sse2x1 gen() 9974 MB/s Jun 25 18:34:44.527994 kernel: raid6: using algorithm sse2x2 gen() 14731 MB/s Jun 25 18:34:44.546017 kernel: raid6: .... xor() 9377 MB/s, rmw enabled Jun 25 18:34:44.546104 kernel: raid6: using ssse3x2 recovery algorithm Jun 25 18:34:44.573781 kernel: xor: measuring software checksum speed Jun 25 18:34:44.573870 kernel: prefetch64-sse : 18327 MB/sec Jun 25 18:34:44.577035 kernel: generic_sse : 16934 MB/sec Jun 25 18:34:44.577097 kernel: xor: using function: prefetch64-sse (18327 MB/sec) Jun 25 18:34:44.783790 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 25 18:34:44.801410 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 25 18:34:44.807867 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jun 25 18:34:44.862341 systemd-udevd[405]: Using default interface naming scheme 'v255'. Jun 25 18:34:44.873388 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:34:44.887107 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 25 18:34:44.922398 dracut-pre-trigger[413]: rd.md=0: removing MD RAID activation Jun 25 18:34:44.969293 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 18:34:44.979005 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 18:34:45.040782 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:34:45.049872 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 25 18:34:45.073119 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 25 18:34:45.074622 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 18:34:45.077673 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:34:45.079396 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 18:34:45.085862 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 25 18:34:45.114776 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 25 18:34:45.124748 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues Jun 25 18:34:45.244534 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB) Jun 25 18:34:45.244671 kernel: libata version 3.00 loaded. Jun 25 18:34:45.244692 kernel: ata_piix 0000:00:01.1: version 2.13 Jun 25 18:34:45.244839 kernel: scsi host0: ata_piix Jun 25 18:34:45.244961 kernel: scsi host1: ata_piix Jun 25 18:34:45.245072 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Jun 25 18:34:45.245096 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Jun 25 18:34:45.245110 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 25 18:34:45.245123 kernel: GPT:17805311 != 41943039 Jun 25 18:34:45.245134 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 25 18:34:45.245145 kernel: GPT:17805311 != 41943039 Jun 25 18:34:45.245156 kernel: GPT: Use GNU Parted to correct GPT errors. Jun 25 18:34:45.245167 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 18:34:45.153186 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 18:34:45.153361 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:34:45.154130 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:34:45.154666 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:34:45.154816 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:34:45.155325 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:34:45.166559 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:34:45.235559 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:34:45.242453 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:34:45.262329 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jun 25 18:34:45.372136 kernel: BTRFS: device fsid 329ce27e-ea89-47b5-8f8b-f762c8412eb0 devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (453) Jun 25 18:34:45.380768 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (450) Jun 25 18:34:45.401812 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jun 25 18:34:45.430686 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jun 25 18:34:45.432218 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jun 25 18:34:45.438458 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jun 25 18:34:45.445881 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 25 18:34:45.454842 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 25 18:34:45.466778 disk-uuid[510]: Primary Header is updated. Jun 25 18:34:45.466778 disk-uuid[510]: Secondary Entries is updated. Jun 25 18:34:45.466778 disk-uuid[510]: Secondary Header is updated. Jun 25 18:34:45.475745 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 18:34:45.481338 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 18:34:46.585821 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 18:34:46.586928 disk-uuid[511]: The operation has completed successfully. Jun 25 18:34:46.661411 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 25 18:34:46.663222 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 25 18:34:46.715968 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 25 18:34:46.722976 sh[524]: Success Jun 25 18:34:46.756742 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Jun 25 18:34:46.817917 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 25 18:34:46.828314 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 25 18:34:46.831389 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 25 18:34:46.855766 kernel: BTRFS info (device dm-0): first mount of filesystem 329ce27e-ea89-47b5-8f8b-f762c8412eb0 Jun 25 18:34:46.855855 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:34:46.864551 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 25 18:34:46.867253 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 25 18:34:46.869343 kernel: BTRFS info (device dm-0): using free space tree Jun 25 18:34:46.884917 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 25 18:34:46.887105 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 25 18:34:46.894947 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 25 18:34:46.899952 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jun 25 18:34:46.918780 kernel: BTRFS info (device vda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:34:46.918839 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:34:46.921786 kernel: BTRFS info (device vda6): using free space tree Jun 25 18:34:46.931777 kernel: BTRFS info (device vda6): auto enabling async discard Jun 25 18:34:46.951395 systemd[1]: mnt-oem.mount: Deactivated successfully. Jun 25 18:34:46.955677 kernel: BTRFS info (device vda6): last unmount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:34:46.969806 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 25 18:34:46.975900 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 25 18:34:47.018386 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 18:34:47.022854 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 18:34:47.047932 systemd-networkd[706]: lo: Link UP Jun 25 18:34:47.047944 systemd-networkd[706]: lo: Gained carrier Jun 25 18:34:47.049223 systemd-networkd[706]: Enumeration completed Jun 25 18:34:47.049741 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 18:34:47.049845 systemd-networkd[706]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:34:47.049849 systemd-networkd[706]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 18:34:47.051203 systemd-networkd[706]: eth0: Link UP Jun 25 18:34:47.051207 systemd-networkd[706]: eth0: Gained carrier Jun 25 18:34:47.051215 systemd-networkd[706]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:34:47.052112 systemd[1]: Reached target network.target - Network. Jun 25 18:34:47.076775 systemd-networkd[706]: eth0: DHCPv4 address 172.24.4.45/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jun 25 18:34:47.252510 systemd-resolved[222]: Detected conflict on linux IN A 172.24.4.45 Jun 25 18:34:47.252546 systemd-resolved[222]: Hostname conflict, changing published hostname from 'linux' to 'linux8'. Jun 25 18:34:47.616850 ignition[631]: Ignition 2.19.0 Jun 25 18:34:47.616866 ignition[631]: Stage: fetch-offline Jun 25 18:34:47.616902 ignition[631]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:34:47.616912 ignition[631]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jun 25 18:34:47.617017 ignition[631]: parsed url from cmdline: "" Jun 25 18:34:47.621221 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 18:34:47.617021 ignition[631]: no config URL provided Jun 25 18:34:47.617027 ignition[631]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 18:34:47.617036 ignition[631]: no config at "/usr/lib/ignition/user.ign" Jun 25 18:34:47.617040 ignition[631]: failed to fetch config: resource requires networking Jun 25 18:34:47.617481 ignition[631]: Ignition finished successfully Jun 25 18:34:47.630875 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jun 25 18:34:47.664234 ignition[717]: Ignition 2.19.0 Jun 25 18:34:47.664261 ignition[717]: Stage: fetch Jun 25 18:34:47.664676 ignition[717]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:34:47.664703 ignition[717]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jun 25 18:34:47.664954 ignition[717]: parsed url from cmdline: "" Jun 25 18:34:47.664964 ignition[717]: no config URL provided Jun 25 18:34:47.664977 ignition[717]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 18:34:47.664997 ignition[717]: no config at "/usr/lib/ignition/user.ign" Jun 25 18:34:47.665184 ignition[717]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jun 25 18:34:47.665212 ignition[717]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jun 25 18:34:47.665224 ignition[717]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jun 25 18:34:47.829486 ignition[717]: GET result: OK Jun 25 18:34:47.829662 ignition[717]: parsing config with SHA512: ea6944bb3b75cf59372651fc940d977cb81116056a4aa86fd1d2439693e60648c17ecd88c955040144fb2cd42c1de25e7c8ac09d25dcf9cff9f52e68709d9fdf Jun 25 18:34:47.843993 unknown[717]: fetched base config from "system" Jun 25 18:34:47.845312 unknown[717]: fetched base config from "system" Jun 25 18:34:47.845380 unknown[717]: fetched user config from "openstack" Jun 25 18:34:47.847359 ignition[717]: fetch: fetch complete Jun 25 18:34:47.847373 ignition[717]: fetch: fetch passed Jun 25 18:34:47.847536 ignition[717]: Ignition finished successfully Jun 25 18:34:47.850651 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 25 18:34:47.859972 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 25 18:34:47.903326 ignition[724]: Ignition 2.19.0 Jun 25 18:34:47.903357 ignition[724]: Stage: kargs Jun 25 18:34:47.903850 ignition[724]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:34:47.903876 ignition[724]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jun 25 18:34:47.906345 ignition[724]: kargs: kargs passed Jun 25 18:34:47.908567 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 25 18:34:47.906445 ignition[724]: Ignition finished successfully Jun 25 18:34:47.921140 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 25 18:34:47.951345 ignition[731]: Ignition 2.19.0 Jun 25 18:34:47.951364 ignition[731]: Stage: disks Jun 25 18:34:47.951941 ignition[731]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:34:47.951970 ignition[731]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jun 25 18:34:47.956634 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 25 18:34:47.954519 ignition[731]: disks: disks passed Jun 25 18:34:47.960379 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 25 18:34:47.954616 ignition[731]: Ignition finished successfully Jun 25 18:34:47.962297 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 18:34:47.964674 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 18:34:47.967485 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 18:34:47.969806 systemd[1]: Reached target basic.target - Basic System. Jun 25 18:34:47.980086 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jun 25 18:34:48.028777 systemd-fsck[740]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jun 25 18:34:48.041306 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 25 18:34:48.051933 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 25 18:34:48.229883 kernel: EXT4-fs (vda9): mounted filesystem ed685e11-963b-427a-9b96-a4691c40e909 r/w with ordered data mode. Quota mode: none. Jun 25 18:34:48.230336 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 25 18:34:48.231331 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 25 18:34:48.237874 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 18:34:48.241836 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 25 18:34:48.243053 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jun 25 18:34:48.245928 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jun 25 18:34:48.247566 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 25 18:34:48.248820 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 18:34:48.258758 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (748) Jun 25 18:34:48.264776 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 25 18:34:48.269384 kernel: BTRFS info (device vda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:34:48.269406 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:34:48.269419 kernel: BTRFS info (device vda6): using free space tree Jun 25 18:34:48.278004 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 25 18:34:48.282766 kernel: BTRFS info (device vda6): auto enabling async discard Jun 25 18:34:48.288365 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 18:34:48.339153 systemd-networkd[706]: eth0: Gained IPv6LL Jun 25 18:34:48.391966 initrd-setup-root[778]: cut: /sysroot/etc/passwd: No such file or directory Jun 25 18:34:48.401095 initrd-setup-root[785]: cut: /sysroot/etc/group: No such file or directory Jun 25 18:34:48.407753 initrd-setup-root[792]: cut: /sysroot/etc/shadow: No such file or directory Jun 25 18:34:48.412635 initrd-setup-root[799]: cut: /sysroot/etc/gshadow: No such file or directory Jun 25 18:34:48.496967 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 25 18:34:48.505798 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 25 18:34:48.508597 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 25 18:34:48.515393 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jun 25 18:34:48.517203 kernel: BTRFS info (device vda6): last unmount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:34:48.538126 ignition[867]: INFO : Ignition 2.19.0 Jun 25 18:34:48.538126 ignition[867]: INFO : Stage: mount Jun 25 18:34:48.539420 ignition[867]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:34:48.539420 ignition[867]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jun 25 18:34:48.540845 ignition[867]: INFO : mount: mount passed Jun 25 18:34:48.540845 ignition[867]: INFO : Ignition finished successfully Jun 25 18:34:48.544032 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 25 18:34:48.547674 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 25 18:34:55.471528 coreos-metadata[750]: Jun 25 18:34:55.471 WARN failed to locate config-drive, using the metadata service API instead Jun 25 18:34:55.511001 coreos-metadata[750]: Jun 25 18:34:55.510 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jun 25 18:34:55.525901 coreos-metadata[750]: Jun 25 18:34:55.525 INFO Fetch successful Jun 25 18:34:55.527345 coreos-metadata[750]: Jun 25 18:34:55.527 INFO wrote hostname ci-4012-0-0-f-7092d20389.novalocal to /sysroot/etc/hostname Jun 25 18:34:55.530918 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jun 25 18:34:55.531193 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jun 25 18:34:55.546186 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 25 18:34:55.578049 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 18:34:55.612850 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (885) Jun 25 18:34:55.677332 kernel: BTRFS info (device vda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:34:55.677421 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:34:55.680288 kernel: BTRFS info (device vda6): using free space tree Jun 25 18:34:55.839839 kernel: BTRFS info (device vda6): auto enabling async discard Jun 25 18:34:55.848562 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 25 18:34:55.894130 ignition[903]: INFO : Ignition 2.19.0 Jun 25 18:34:55.894130 ignition[903]: INFO : Stage: files Jun 25 18:34:55.896945 ignition[903]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:34:55.896945 ignition[903]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jun 25 18:34:55.900426 ignition[903]: DEBUG : files: compiled without relabeling support, skipping Jun 25 18:34:55.900426 ignition[903]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 25 18:34:55.900426 ignition[903]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 25 18:34:55.906416 ignition[903]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 25 18:34:55.906416 ignition[903]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 25 18:34:55.911592 ignition[903]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 25 18:34:55.907949 unknown[903]: wrote ssh authorized keys file for user: core Jun 25 18:34:55.915614 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jun 25 18:34:55.915614 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jun 25 18:34:55.915614 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 18:34:55.915614 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jun 25 18:34:56.587962 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jun 25 18:34:56.913945 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 18:34:56.913945 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 25 18:34:56.918675 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jun 25 18:34:57.578042 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Jun 25 18:34:58.334110 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 25 18:34:58.334110 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Jun 25 18:34:58.340294 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Jun 25 18:34:58.340294 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 25 18:34:58.340294 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 25 18:34:58.340294 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 18:34:58.340294 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 18:34:58.340294 
ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 18:34:58.340294 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 18:34:58.340294 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 18:34:58.340294 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 18:34:58.340294 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 18:34:58.340294 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 18:34:58.340294 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 18:34:58.340294 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1 Jun 25 18:34:58.898395 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Jun 25 18:35:00.654699 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 18:35:00.654699 ignition[903]: INFO : files: op(d): [started] processing unit "containerd.service" Jun 25 18:35:00.658834 ignition[903]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jun 25 18:35:00.658834 ignition[903]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jun 25 18:35:00.658834 ignition[903]: INFO : files: op(d): [finished] processing unit "containerd.service" Jun 25 18:35:00.658834 ignition[903]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Jun 25 18:35:00.658834 ignition[903]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 18:35:00.658834 ignition[903]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 18:35:00.658834 ignition[903]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Jun 25 18:35:00.658834 ignition[903]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jun 25 18:35:00.658834 ignition[903]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jun 25 18:35:00.658834 ignition[903]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 25 18:35:00.658834 ignition[903]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 25 18:35:00.658834 ignition[903]: INFO : files: files passed Jun 25 18:35:00.658834 ignition[903]: INFO : 
Ignition finished successfully Jun 25 18:35:00.660661 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 25 18:35:00.672175 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 25 18:35:00.676893 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 25 18:35:00.692418 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 25 18:35:00.694169 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 25 18:35:00.702528 initrd-setup-root-after-ignition[932]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:35:00.702528 initrd-setup-root-after-ignition[932]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:35:00.707419 initrd-setup-root-after-ignition[936]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:35:00.707519 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 18:35:00.709083 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 25 18:35:00.736064 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 25 18:35:00.783914 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 25 18:35:00.784249 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 25 18:35:00.789456 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 25 18:35:00.791590 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 25 18:35:00.795339 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 25 18:35:00.810033 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 25 18:35:00.830363 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 18:35:00.837967 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 25 18:35:00.869425 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:35:00.871164 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:35:00.875085 systemd[1]: Stopped target timers.target - Timer Units. Jun 25 18:35:00.878576 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 25 18:35:00.878908 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 18:35:00.882760 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 25 18:35:00.884998 systemd[1]: Stopped target basic.target - Basic System. Jun 25 18:35:00.887830 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 25 18:35:00.890438 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 18:35:00.893131 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 25 18:35:00.896378 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 25 18:35:00.899357 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 18:35:00.902494 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 25 18:35:00.905356 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 25 18:35:00.908204 systemd[1]: Stopped target swap.target - Swaps. 
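The files stage logged above is driven by an Ignition config supplied to the instance; the config itself is not part of this log. Below is a hedged, illustrative fragment of the kind of config that could request the helm download and the prepare-helm.service preset seen above, rendered through Python's json module so it can be linted; the spec version and field names follow the commonly documented Ignition v3 layout and should be checked against the spec rather than taken as what this node actually used.

    # Hypothetical Ignition-style config fragment (NOT the config this node booted with).
    import json

    config = {
        "ignition": {"version": "3.3.0"},
        "storage": {
            "files": [
                {
                    # mirrors the GET seen in the files stage above
                    "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                    "mode": 420,  # decimal for 0644
                    "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"},
                }
            ]
        },
        "systemd": {
            "units": [
                # the log shows prepare-helm.service being written and preset to enabled
                {"name": "prepare-helm.service", "enabled": True},
            ]
        },
    }

    print(json.dumps(config, indent=2))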
Jun 25 18:35:00.910958 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 25 18:35:00.911255 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 25 18:35:00.914304 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:35:00.916204 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:35:00.918620 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 25 18:35:00.918949 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:35:00.921389 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 25 18:35:00.921667 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 25 18:35:00.925858 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 25 18:35:00.926311 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 18:35:00.929561 systemd[1]: ignition-files.service: Deactivated successfully. Jun 25 18:35:00.929950 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 25 18:35:00.941283 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 25 18:35:00.951243 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 25 18:35:00.952462 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 25 18:35:00.952896 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:35:00.959940 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 25 18:35:00.960154 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 18:35:00.968131 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 25 18:35:00.968478 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 25 18:35:00.976741 ignition[956]: INFO : Ignition 2.19.0 Jun 25 18:35:00.976741 ignition[956]: INFO : Stage: umount Jun 25 18:35:00.976741 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:35:00.976741 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jun 25 18:35:00.980611 ignition[956]: INFO : umount: umount passed Jun 25 18:35:00.980611 ignition[956]: INFO : Ignition finished successfully Jun 25 18:35:00.981922 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 25 18:35:00.982600 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 25 18:35:00.985316 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 25 18:35:00.986685 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 25 18:35:00.987380 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 25 18:35:00.988538 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 25 18:35:00.989149 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 25 18:35:00.990180 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 25 18:35:00.990753 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 25 18:35:00.991815 systemd[1]: Stopped target network.target - Network. Jun 25 18:35:00.992291 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 25 18:35:00.992339 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 18:35:00.992912 systemd[1]: Stopped target paths.target - Path Units. 
Jun 25 18:35:00.993929 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 25 18:35:00.999794 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:35:01.000367 systemd[1]: Stopped target slices.target - Slice Units. Jun 25 18:35:01.002027 systemd[1]: Stopped target sockets.target - Socket Units. Jun 25 18:35:01.003389 systemd[1]: iscsid.socket: Deactivated successfully. Jun 25 18:35:01.003463 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 18:35:01.004614 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 25 18:35:01.004649 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 18:35:01.005803 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 25 18:35:01.005846 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 25 18:35:01.007034 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 25 18:35:01.007072 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 25 18:35:01.008291 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 25 18:35:01.009455 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 25 18:35:01.010728 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 25 18:35:01.010818 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 25 18:35:01.012030 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 25 18:35:01.012099 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 25 18:35:01.013826 systemd-networkd[706]: eth0: DHCPv6 lease lost Jun 25 18:35:01.016006 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 25 18:35:01.016135 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 25 18:35:01.018114 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 25 18:35:01.018155 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:35:01.024858 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 25 18:35:01.025800 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 25 18:35:01.025852 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 18:35:01.027899 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:35:01.029938 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 25 18:35:01.030066 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 25 18:35:01.034412 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 18:35:01.034483 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:35:01.036646 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 25 18:35:01.036691 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 25 18:35:01.037751 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 25 18:35:01.037792 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:35:01.041032 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 25 18:35:01.041200 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:35:01.042678 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Jun 25 18:35:01.042898 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 25 18:35:01.043536 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 25 18:35:01.043568 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:35:01.044974 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 25 18:35:01.045018 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 25 18:35:01.046672 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 25 18:35:01.046751 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 25 18:35:01.047911 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 18:35:01.047956 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:35:01.057070 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 25 18:35:01.057626 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 25 18:35:01.057678 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:35:01.058253 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jun 25 18:35:01.058293 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 18:35:01.058893 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 25 18:35:01.058934 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:35:01.062522 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:35:01.062566 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:35:01.065301 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 25 18:35:01.065398 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 25 18:35:01.066505 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 25 18:35:01.066586 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 25 18:35:01.068281 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 25 18:35:01.075910 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 25 18:35:01.091818 systemd[1]: Switching root. Jun 25 18:35:01.126654 systemd-journald[184]: Journal stopped Jun 25 18:35:04.863258 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Jun 25 18:35:04.863310 kernel: SELinux: policy capability network_peer_controls=1 Jun 25 18:35:04.863331 kernel: SELinux: policy capability open_perms=1 Jun 25 18:35:04.863346 kernel: SELinux: policy capability extended_socket_class=1 Jun 25 18:35:04.863358 kernel: SELinux: policy capability always_check_network=0 Jun 25 18:35:04.863369 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 25 18:35:04.863381 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 25 18:35:04.863393 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 25 18:35:04.863405 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 25 18:35:04.863419 kernel: audit: type=1403 audit(1719340502.554:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 25 18:35:04.863445 systemd[1]: Successfully loaded SELinux policy in 136.309ms. Jun 25 18:35:04.863463 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.481ms. 
Jun 25 18:35:04.863478 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jun 25 18:35:04.863491 systemd[1]: Detected virtualization kvm. Jun 25 18:35:04.863504 systemd[1]: Detected architecture x86-64. Jun 25 18:35:04.863517 systemd[1]: Detected first boot. Jun 25 18:35:04.863529 systemd[1]: Hostname set to . Jun 25 18:35:04.863542 systemd[1]: Initializing machine ID from VM UUID. Jun 25 18:35:04.863555 zram_generator::config[1015]: No configuration found. Jun 25 18:35:04.863573 systemd[1]: Populated /etc with preset unit settings. Jun 25 18:35:04.863586 systemd[1]: Queued start job for default target multi-user.target. Jun 25 18:35:04.863599 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jun 25 18:35:04.863615 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 25 18:35:04.863628 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 25 18:35:04.863642 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 25 18:35:04.863655 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 25 18:35:04.863668 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 25 18:35:04.863684 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 25 18:35:04.863697 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 25 18:35:04.863722 systemd[1]: Created slice user.slice - User and Session Slice. Jun 25 18:35:04.863738 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:35:04.863751 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:35:04.863764 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 25 18:35:04.863776 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 25 18:35:04.863789 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 25 18:35:04.863805 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 25 18:35:04.863818 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jun 25 18:35:04.863831 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:35:04.863843 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 25 18:35:04.863856 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:35:04.863872 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 18:35:04.863886 systemd[1]: Reached target slices.target - Slice Units. Jun 25 18:35:04.863902 systemd[1]: Reached target swap.target - Swaps. Jun 25 18:35:04.863915 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 25 18:35:04.863928 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 25 18:35:04.863943 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
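"Initializing machine ID from VM UUID" above refers to seeding /etc/machine-id from the hypervisor-provided UUID on first boot. A small Python sketch of the same idea follows; reading the UUID from the SMBIOS field at /sys/class/dmi/id/product_uuid is an assumption for illustration, and systemd's exact source and formatting rules may differ.

    # Sketch: derive a machine-id-style value from the VM's SMBIOS product UUID.
    # /sys/class/dmi/id/product_uuid is usually readable only by root.
    from pathlib import Path
    import uuid

    DMI_UUID = Path("/sys/class/dmi/id/product_uuid")  # assumed source

    def machine_id_from_vm_uuid():
        raw = DMI_UUID.read_text().strip()
        # machine-id format: the 128-bit UUID as 32 lowercase hex digits, no dashes
        return uuid.UUID(raw).hex

    if __name__ == "__main__":
        print(machine_id_from_vm_uuid())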
Jun 25 18:35:04.863956 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jun 25 18:35:04.863969 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:35:04.863982 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 18:35:04.863995 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:35:04.864007 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 25 18:35:04.864021 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 25 18:35:04.864036 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 25 18:35:04.864050 systemd[1]: Mounting media.mount - External Media Directory... Jun 25 18:35:04.864062 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:35:04.864075 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 25 18:35:04.864089 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 25 18:35:04.864102 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 25 18:35:04.864114 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 25 18:35:04.864127 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:35:04.864144 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 18:35:04.864157 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 25 18:35:04.864170 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 18:35:04.864183 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 18:35:04.864196 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 18:35:04.864209 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 25 18:35:04.864223 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 18:35:04.864237 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 25 18:35:04.864252 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jun 25 18:35:04.864266 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jun 25 18:35:04.864278 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 18:35:04.864291 kernel: fuse: init (API version 7.39) Jun 25 18:35:04.864303 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 18:35:04.864316 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 25 18:35:04.864329 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 25 18:35:04.864341 kernel: loop: module loaded Jun 25 18:35:04.864353 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 18:35:04.864369 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:35:04.864398 systemd-journald[1129]: Collecting audit messages is disabled. 
Jun 25 18:35:04.864423 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 25 18:35:04.864436 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 25 18:35:04.864449 systemd-journald[1129]: Journal started Jun 25 18:35:04.864475 systemd-journald[1129]: Runtime Journal (/run/log/journal/7791bedb3e844684b2db2d1992901620) is 4.9M, max 39.3M, 34.4M free. Jun 25 18:35:04.873523 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 18:35:04.867098 systemd[1]: Mounted media.mount - External Media Directory. Jun 25 18:35:04.867669 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 25 18:35:04.868270 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 25 18:35:04.868883 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 25 18:35:04.869588 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 25 18:35:04.870328 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:35:04.871043 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 25 18:35:04.871181 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 25 18:35:04.871952 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:35:04.872091 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:35:04.872841 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 18:35:04.872975 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 18:35:04.876188 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 25 18:35:04.876339 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 25 18:35:04.877079 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 18:35:04.877204 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:35:04.878366 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 18:35:04.879198 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 25 18:35:04.880678 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 25 18:35:04.891740 kernel: ACPI: bus type drm_connector registered Jun 25 18:35:04.893479 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 18:35:04.895944 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 18:35:04.898553 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 25 18:35:04.905851 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 25 18:35:04.916856 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 25 18:35:04.919785 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 25 18:35:04.932926 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 25 18:35:04.935024 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 25 18:35:04.935955 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 18:35:04.946858 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Jun 25 18:35:04.947486 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 18:35:04.949170 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:35:04.966201 systemd-journald[1129]: Time spent on flushing to /var/log/journal/7791bedb3e844684b2db2d1992901620 is 33.694ms for 928 entries. Jun 25 18:35:04.966201 systemd-journald[1129]: System Journal (/var/log/journal/7791bedb3e844684b2db2d1992901620) is 8.0M, max 584.8M, 576.8M free. Jun 25 18:35:05.048047 systemd-journald[1129]: Received client request to flush runtime journal. Jun 25 18:35:04.954227 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 25 18:35:04.961450 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 25 18:35:04.964193 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:35:04.965632 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 25 18:35:04.973884 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 25 18:35:04.984654 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 25 18:35:04.985427 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 25 18:35:04.997247 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:35:05.013917 udevadm[1179]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jun 25 18:35:05.017190 systemd-tmpfiles[1170]: ACLs are not supported, ignoring. Jun 25 18:35:05.017205 systemd-tmpfiles[1170]: ACLs are not supported, ignoring. Jun 25 18:35:05.021683 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 18:35:05.026902 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 25 18:35:05.052233 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 25 18:35:05.064850 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 25 18:35:05.075321 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 18:35:05.091584 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Jun 25 18:35:05.091612 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Jun 25 18:35:05.096007 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:35:05.670197 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 25 18:35:05.686907 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:35:05.738433 systemd-udevd[1201]: Using default interface naming scheme 'v255'. Jun 25 18:35:05.780696 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:35:05.791980 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 18:35:05.850954 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jun 25 18:35:05.864895 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Jun 25 18:35:05.884734 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1217) Jun 25 18:35:05.895752 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jun 25 18:35:05.910747 kernel: ACPI: button: Power Button [PWRF] Jun 25 18:35:05.923728 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1207) Jun 25 18:35:05.950355 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 25 18:35:05.963774 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jun 25 18:35:05.992090 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jun 25 18:35:05.997782 kernel: mousedev: PS/2 mouse device common for all mice Jun 25 18:35:06.028636 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 25 18:35:06.050442 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:35:06.058409 systemd-networkd[1205]: lo: Link UP Jun 25 18:35:06.064302 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jun 25 18:35:06.064331 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jun 25 18:35:06.058594 systemd-networkd[1205]: lo: Gained carrier Jun 25 18:35:06.060619 systemd-networkd[1205]: Enumeration completed Jun 25 18:35:06.061389 systemd-networkd[1205]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:35:06.061394 systemd-networkd[1205]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 18:35:06.063807 systemd-networkd[1205]: eth0: Link UP Jun 25 18:35:06.063811 systemd-networkd[1205]: eth0: Gained carrier Jun 25 18:35:06.063827 systemd-networkd[1205]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:35:06.066645 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 18:35:06.068329 kernel: Console: switching to colour dummy device 80x25 Jun 25 18:35:06.069756 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jun 25 18:35:06.069797 kernel: [drm] features: -context_init Jun 25 18:35:06.069855 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 25 18:35:06.071742 kernel: [drm] number of scanouts: 1 Jun 25 18:35:06.071826 kernel: [drm] number of cap sets: 0 Jun 25 18:35:06.074774 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jun 25 18:35:06.075782 systemd-networkd[1205]: eth0: DHCPv4 address 172.24.4.45/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jun 25 18:35:06.084041 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jun 25 18:35:06.088152 kernel: Console: switching to colour frame buffer device 128x48 Jun 25 18:35:06.096456 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jun 25 18:35:06.097895 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:35:06.098109 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:35:06.107969 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:35:06.114142 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 25 18:35:06.118149 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
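A quick sanity check of the DHCPv4 lease reported above (172.24.4.45/24 on eth0, gateway 172.24.4.1 acquired from 172.24.4.1), using only the standard library:

    # Verify that the gateway handed out by the DHCP server is on-link for eth0.
    import ipaddress

    iface = ipaddress.ip_interface("172.24.4.45/24")   # address/prefix from the log
    gateway = ipaddress.ip_address("172.24.4.1")       # gateway from the log

    assert gateway in iface.network
    print(f"{iface.ip} is in {iface.network}; gateway {gateway} is on-link")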
Jun 25 18:35:06.139365 lvm[1242]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 18:35:06.172849 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 25 18:35:06.174260 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:35:06.178905 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 25 18:35:06.191157 lvm[1250]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 18:35:06.212026 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 25 18:35:06.215148 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 18:35:06.217898 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 25 18:35:06.217988 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 18:35:06.218988 systemd[1]: Reached target machines.target - Containers. Jun 25 18:35:06.223708 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jun 25 18:35:06.232896 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 25 18:35:06.234579 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 25 18:35:06.236108 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:35:06.240493 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 25 18:35:06.245911 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jun 25 18:35:06.247980 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 25 18:35:06.250045 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:35:06.252133 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 25 18:35:06.283407 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 25 18:35:06.290616 kernel: loop0: detected capacity change from 0 to 80568 Jun 25 18:35:06.294168 kernel: block loop0: the capability attribute has been deprecated. Jun 25 18:35:06.325564 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 25 18:35:06.327533 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jun 25 18:35:06.380866 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 25 18:35:06.423853 kernel: loop1: detected capacity change from 0 to 139760 Jun 25 18:35:06.492982 kernel: loop2: detected capacity change from 0 to 209816 Jun 25 18:35:06.573253 kernel: loop3: detected capacity change from 0 to 8 Jun 25 18:35:06.599556 kernel: loop4: detected capacity change from 0 to 80568 Jun 25 18:35:06.625867 kernel: loop5: detected capacity change from 0 to 139760 Jun 25 18:35:06.689665 kernel: loop6: detected capacity change from 0 to 209816 Jun 25 18:35:06.727762 kernel: loop7: detected capacity change from 0 to 8 Jun 25 18:35:06.730455 (sd-merge)[1275]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jun 25 18:35:06.731995 (sd-merge)[1275]: Merged extensions into '/usr'. 
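The sd-merge lines above show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar', 'kubernetes', and 'oem-openstack' extensions onto /usr, including the kubernetes image the earlier files stage linked under /etc/extensions. A minimal sketch that lists candidate extension images is below; the search directories are an assumption for illustration, and systemd-sysext(8) on the running version is authoritative.

    # Sketch: list sysext images systemd-sysext might consider on this host.
    from pathlib import Path

    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]  # assumed

    def candidate_sysexts():
        found = []
        for d in SEARCH_DIRS:
            p = Path(d)
            if p.is_dir():
                found.extend(sorted(entry.name for entry in p.iterdir()
                                    if entry.suffix == ".raw" or entry.is_dir()))
        return found

    if __name__ == "__main__":
        for name in candidate_sysexts():
            print("sysext candidate:", name)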
Jun 25 18:35:06.740705 systemd[1]: Reloading requested from client PID 1261 ('systemd-sysext') (unit systemd-sysext.service)... Jun 25 18:35:06.740754 systemd[1]: Reloading... Jun 25 18:35:06.823892 zram_generator::config[1302]: No configuration found. Jun 25 18:35:06.992224 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:35:07.069524 systemd[1]: Reloading finished in 328 ms. Jun 25 18:35:07.085566 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 25 18:35:07.099883 systemd[1]: Starting ensure-sysext.service... Jun 25 18:35:07.113869 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 18:35:07.121643 systemd[1]: Reloading requested from client PID 1363 ('systemctl') (unit ensure-sysext.service)... Jun 25 18:35:07.121661 systemd[1]: Reloading... Jun 25 18:35:07.151910 systemd-tmpfiles[1364]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 25 18:35:07.152242 systemd-tmpfiles[1364]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 25 18:35:07.153113 systemd-tmpfiles[1364]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 25 18:35:07.153428 systemd-tmpfiles[1364]: ACLs are not supported, ignoring. Jun 25 18:35:07.153493 systemd-tmpfiles[1364]: ACLs are not supported, ignoring. Jun 25 18:35:07.161654 systemd-tmpfiles[1364]: Detected autofs mount point /boot during canonicalization of boot. Jun 25 18:35:07.161668 systemd-tmpfiles[1364]: Skipping /boot Jun 25 18:35:07.177429 systemd-tmpfiles[1364]: Detected autofs mount point /boot during canonicalization of boot. Jun 25 18:35:07.177442 systemd-tmpfiles[1364]: Skipping /boot Jun 25 18:35:07.184765 zram_generator::config[1389]: No configuration found. Jun 25 18:35:07.322594 ldconfig[1257]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 25 18:35:07.381690 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:35:07.448185 systemd[1]: Reloading finished in 326 ms. Jun 25 18:35:07.463886 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 25 18:35:07.474287 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:35:07.487922 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 18:35:07.496845 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 25 18:35:07.500411 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 25 18:35:07.515373 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 18:35:07.530185 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 25 18:35:07.545672 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:35:07.546391 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jun 25 18:35:07.559750 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 18:35:07.565134 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 18:35:07.578116 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 18:35:07.584117 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:35:07.584272 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:35:07.588411 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 25 18:35:07.589524 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:35:07.589684 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:35:07.596104 augenrules[1484]: No rules Jun 25 18:35:07.597336 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 18:35:07.601615 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 18:35:07.601803 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 18:35:07.602660 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 18:35:07.602923 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:35:07.618148 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 25 18:35:07.626771 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:35:07.627362 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:35:07.633276 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 18:35:07.637881 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 18:35:07.649933 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 18:35:07.662124 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 18:35:07.662842 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:35:07.668792 systemd-networkd[1205]: eth0: Gained IPv6LL Jun 25 18:35:07.675012 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 25 18:35:07.677665 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:35:07.682649 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 25 18:35:07.684494 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:35:07.684656 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:35:07.693282 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 18:35:07.697015 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 18:35:07.697923 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 18:35:07.698065 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 18:35:07.701371 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jun 25 18:35:07.701547 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:35:07.710096 systemd[1]: Finished ensure-sysext.service. Jun 25 18:35:07.716404 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 18:35:07.716471 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 18:35:07.723885 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 25 18:35:07.727316 systemd-resolved[1463]: Positive Trust Anchors: Jun 25 18:35:07.727639 systemd-resolved[1463]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 18:35:07.728815 systemd-resolved[1463]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jun 25 18:35:07.731050 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 25 18:35:07.739166 systemd-resolved[1463]: Using system hostname 'ci-4012-0-0-f-7092d20389.novalocal'. Jun 25 18:35:07.741091 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 25 18:35:07.744586 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 18:35:07.745729 systemd[1]: Reached target network.target - Network. Jun 25 18:35:07.748042 systemd[1]: Reached target network-online.target - Network is Online. Jun 25 18:35:07.749198 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:35:07.750373 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 25 18:35:07.805265 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 25 18:35:07.807388 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 18:35:07.810491 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 25 18:35:07.815212 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 25 18:35:07.816464 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 25 18:35:07.818981 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 25 18:35:07.819022 systemd[1]: Reached target paths.target - Path Units. Jun 25 18:35:07.820663 systemd[1]: Reached target time-set.target - System Time Set. Jun 25 18:35:07.822802 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 25 18:35:07.824779 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 25 18:35:07.825824 systemd-timesyncd[1518]: Contacted time server 162.159.200.1:123 (0.flatcar.pool.ntp.org). 
Jun 25 18:35:07.825862 systemd-timesyncd[1518]: Initial clock synchronization to Tue 2024-06-25 18:35:07.701647 UTC. Jun 25 18:35:07.826886 systemd[1]: Reached target timers.target - Timer Units. Jun 25 18:35:07.832173 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 25 18:35:07.840350 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 25 18:35:07.847758 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 25 18:35:07.852918 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 25 18:35:07.854744 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 18:35:07.858291 systemd[1]: Reached target basic.target - Basic System. Jun 25 18:35:07.860243 systemd[1]: System is tainted: cgroupsv1 Jun 25 18:35:07.860365 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 25 18:35:07.860461 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 25 18:35:07.865882 systemd[1]: Starting containerd.service - containerd container runtime... Jun 25 18:35:07.873865 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 25 18:35:07.885099 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 25 18:35:07.895985 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 25 18:35:07.909072 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 25 18:35:07.911076 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 25 18:35:07.919075 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:35:07.922616 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 25 18:35:07.935781 jq[1532]: false Jun 25 18:35:07.933937 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 25 18:35:07.947450 extend-filesystems[1533]: Found loop4 Jun 25 18:35:07.953604 extend-filesystems[1533]: Found loop5 Jun 25 18:35:07.953604 extend-filesystems[1533]: Found loop6 Jun 25 18:35:07.953604 extend-filesystems[1533]: Found loop7 Jun 25 18:35:07.953604 extend-filesystems[1533]: Found vda Jun 25 18:35:07.953604 extend-filesystems[1533]: Found vda1 Jun 25 18:35:07.953604 extend-filesystems[1533]: Found vda2 Jun 25 18:35:07.953604 extend-filesystems[1533]: Found vda3 Jun 25 18:35:07.953604 extend-filesystems[1533]: Found usr Jun 25 18:35:07.953604 extend-filesystems[1533]: Found vda4 Jun 25 18:35:07.953604 extend-filesystems[1533]: Found vda6 Jun 25 18:35:07.953604 extend-filesystems[1533]: Found vda7 Jun 25 18:35:07.953604 extend-filesystems[1533]: Found vda9 Jun 25 18:35:07.953604 extend-filesystems[1533]: Checking size of /dev/vda9 Jun 25 18:35:07.948176 dbus-daemon[1529]: [system] SELinux support is enabled Jun 25 18:35:07.954806 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 25 18:35:08.012661 extend-filesystems[1533]: Resized partition /dev/vda9 Jun 25 18:35:07.975890 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 25 18:35:07.990905 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 25 18:35:08.003847 systemd[1]: Starting systemd-logind.service - User Login Management... 
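extend-filesystems above walks the block devices (loop4..loop7, vda and its partitions) before checking the size of /dev/vda9. A rough Python equivalent of that enumeration, reading device names from sysfs, is sketched below; it only mirrors the "Found ..." listing and performs none of the resizing.

    # Sketch: enumerate loop devices and vda partitions the way the log lists them.
    from pathlib import Path

    def block_devices(prefixes=("loop", "vda")):
        names = []
        for dev in sorted(Path("/sys/class/block").iterdir()):
            if dev.name.startswith(prefixes):
                names.append(dev.name)
        return names

    if __name__ == "__main__":
        for name in block_devices():
            print("Found", name)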
Jun 25 18:35:08.017843 extend-filesystems[1557]: resize2fs 1.47.0 (5-Feb-2023) Jun 25 18:35:08.018624 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 25 18:35:08.030321 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks Jun 25 18:35:08.030971 systemd[1]: Starting update-engine.service - Update Engine... Jun 25 18:35:08.036376 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 25 18:35:08.037673 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 25 18:35:08.055669 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 25 18:35:08.056112 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 25 18:35:08.066190 systemd[1]: motdgen.service: Deactivated successfully. Jun 25 18:35:08.066603 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 25 18:35:08.068796 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1216) Jun 25 18:35:08.069636 jq[1567]: true Jun 25 18:35:08.078089 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 25 18:35:08.086124 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 25 18:35:08.086472 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 25 18:35:08.113246 update_engine[1565]: I0625 18:35:08.110705 1565 main.cc:92] Flatcar Update Engine starting Jun 25 18:35:08.120592 jq[1575]: true Jun 25 18:35:08.128760 update_engine[1565]: I0625 18:35:08.128051 1565 update_check_scheduler.cc:74] Next update check in 4m23s Jun 25 18:35:08.129647 (ntainerd)[1577]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 25 18:35:08.146980 tar[1573]: linux-amd64/helm Jun 25 18:35:08.160403 systemd[1]: Started update-engine.service - Update Engine. Jun 25 18:35:08.175214 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 25 18:35:08.178541 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 25 18:35:08.178571 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 25 18:35:08.179072 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 25 18:35:08.179089 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 25 18:35:08.183651 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 25 18:35:08.190996 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 25 18:35:08.203795 kernel: EXT4-fs (vda9): resized filesystem to 4635643 Jun 25 18:35:08.269428 systemd-logind[1559]: New seat seat0. Jun 25 18:35:08.310420 extend-filesystems[1557]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jun 25 18:35:08.310420 extend-filesystems[1557]: old_desc_blocks = 1, new_desc_blocks = 3 Jun 25 18:35:08.310420 extend-filesystems[1557]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long. 
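The resize2fs lines above grow the ext4 filesystem on /dev/vda9 online from 1617920 to 4635643 blocks of 4 KiB, i.e. from roughly 6.2 GiB to roughly 17.7 GiB. The arithmetic, for reference:

    # Worked numbers for the online resize reported by resize2fs above.
    BLOCK = 4096  # 4 KiB blocks, as stated in the kernel/resize2fs messages
    old_blocks, new_blocks = 1_617_920, 4_635_643

    def gib(blocks):
        return blocks * BLOCK / 2**30

    print(f"before: {gib(old_blocks):.2f} GiB, after: {gib(new_blocks):.2f} GiB, "
          f"added: {gib(new_blocks - old_blocks):.2f} GiB")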
Jun 25 18:35:08.309916 systemd-logind[1559]: Watching system buttons on /dev/input/event1 (Power Button) Jun 25 18:35:08.332857 extend-filesystems[1533]: Resized filesystem in /dev/vda9 Jun 25 18:35:08.309934 systemd-logind[1559]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 25 18:35:08.310152 systemd[1]: Started systemd-logind.service - User Login Management. Jun 25 18:35:08.313574 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 25 18:35:08.313877 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 25 18:35:08.357072 bash[1605]: Updated "/home/core/.ssh/authorized_keys" Jun 25 18:35:08.360660 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 25 18:35:08.379281 systemd[1]: Starting sshkeys.service... Jun 25 18:35:08.397075 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jun 25 18:35:08.411265 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jun 25 18:35:08.422389 locksmithd[1591]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 25 18:35:08.665784 containerd[1577]: time="2024-06-25T18:35:08.664445420Z" level=info msg="starting containerd" revision=cd7148ac666309abf41fd4a49a8a5895b905e7f3 version=v1.7.18 Jun 25 18:35:08.706356 containerd[1577]: time="2024-06-25T18:35:08.705344156Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 25 18:35:08.706356 containerd[1577]: time="2024-06-25T18:35:08.705391559Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:35:08.707146 containerd[1577]: time="2024-06-25T18:35:08.707109393Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.35-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:35:08.707183 containerd[1577]: time="2024-06-25T18:35:08.707145660Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:35:08.708230 containerd[1577]: time="2024-06-25T18:35:08.707446088Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:35:08.708230 containerd[1577]: time="2024-06-25T18:35:08.707471122Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jun 25 18:35:08.708230 containerd[1577]: time="2024-06-25T18:35:08.707552297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 25 18:35:08.708230 containerd[1577]: time="2024-06-25T18:35:08.707626075Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:35:08.708230 containerd[1577]: time="2024-06-25T18:35:08.707641521Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Jun 25 18:35:08.708230 containerd[1577]: time="2024-06-25T18:35:08.707737916Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:35:08.708230 containerd[1577]: time="2024-06-25T18:35:08.707944770Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jun 25 18:35:08.708230 containerd[1577]: time="2024-06-25T18:35:08.707964083Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jun 25 18:35:08.708230 containerd[1577]: time="2024-06-25T18:35:08.707974894Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:35:08.708230 containerd[1577]: time="2024-06-25T18:35:08.708094851Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:35:08.708230 containerd[1577]: time="2024-06-25T18:35:08.708111136Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jun 25 18:35:08.708488 containerd[1577]: time="2024-06-25T18:35:08.708170425Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jun 25 18:35:08.708488 containerd[1577]: time="2024-06-25T18:35:08.708184510Z" level=info msg="metadata content store policy set" policy=shared Jun 25 18:35:08.724794 containerd[1577]: time="2024-06-25T18:35:08.724760805Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 25 18:35:08.724867 containerd[1577]: time="2024-06-25T18:35:08.724800426Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jun 25 18:35:08.724867 containerd[1577]: time="2024-06-25T18:35:08.724818949Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 25 18:35:08.724867 containerd[1577]: time="2024-06-25T18:35:08.724853096Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jun 25 18:35:08.725438 containerd[1577]: time="2024-06-25T18:35:08.724870032Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 25 18:35:08.725438 containerd[1577]: time="2024-06-25T18:35:08.724882686Z" level=info msg="NRI interface is disabled by configuration." Jun 25 18:35:08.725438 containerd[1577]: time="2024-06-25T18:35:08.724896978Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jun 25 18:35:08.725438 containerd[1577]: time="2024-06-25T18:35:08.725021967Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 25 18:35:08.725438 containerd[1577]: time="2024-06-25T18:35:08.725040155Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jun 25 18:35:08.725438 containerd[1577]: time="2024-06-25T18:35:08.725054673Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 25 18:35:08.725438 containerd[1577]: time="2024-06-25T18:35:08.725069242Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Jun 25 18:35:08.725438 containerd[1577]: time="2024-06-25T18:35:08.725083869Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jun 25 18:35:08.725438 containerd[1577]: time="2024-06-25T18:35:08.725103034Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 25 18:35:08.725438 containerd[1577]: time="2024-06-25T18:35:08.725117326Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jun 25 18:35:08.725438 containerd[1577]: time="2024-06-25T18:35:08.725133294Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 25 18:35:08.725438 containerd[1577]: time="2024-06-25T18:35:08.725149076Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jun 25 18:35:08.725438 containerd[1577]: time="2024-06-25T18:35:08.725163230Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jun 25 18:35:08.725438 containerd[1577]: time="2024-06-25T18:35:08.725176259Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 25 18:35:08.725740 containerd[1577]: time="2024-06-25T18:35:08.725188628Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 25 18:35:08.725740 containerd[1577]: time="2024-06-25T18:35:08.725299788Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 25 18:35:08.725740 containerd[1577]: time="2024-06-25T18:35:08.725629481Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jun 25 18:35:08.725740 containerd[1577]: time="2024-06-25T18:35:08.725656349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 25 18:35:08.725740 containerd[1577]: time="2024-06-25T18:35:08.725670246Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jun 25 18:35:08.725740 containerd[1577]: time="2024-06-25T18:35:08.725692035Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 25 18:35:08.725882 containerd[1577]: time="2024-06-25T18:35:08.725775992Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 25 18:35:08.725882 containerd[1577]: time="2024-06-25T18:35:08.725793410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 25 18:35:08.725882 containerd[1577]: time="2024-06-25T18:35:08.725808935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 25 18:35:08.725882 containerd[1577]: time="2024-06-25T18:35:08.725821333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 25 18:35:08.725882 containerd[1577]: time="2024-06-25T18:35:08.725834393Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 25 18:35:08.725882 containerd[1577]: time="2024-06-25T18:35:08.725847678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Jun 25 18:35:08.725882 containerd[1577]: time="2024-06-25T18:35:08.725860264Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 25 18:35:08.725882 containerd[1577]: time="2024-06-25T18:35:08.725873767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 25 18:35:08.726037 containerd[1577]: time="2024-06-25T18:35:08.725888306Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 25 18:35:08.726059 containerd[1577]: time="2024-06-25T18:35:08.726037292Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 25 18:35:08.726083 containerd[1577]: time="2024-06-25T18:35:08.726057068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jun 25 18:35:08.726083 containerd[1577]: time="2024-06-25T18:35:08.726072711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 25 18:35:08.726121 containerd[1577]: time="2024-06-25T18:35:08.726086667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 25 18:35:08.726121 containerd[1577]: time="2024-06-25T18:35:08.726099934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 25 18:35:08.726164 containerd[1577]: time="2024-06-25T18:35:08.726119552Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 25 18:35:08.726164 containerd[1577]: time="2024-06-25T18:35:08.726133480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 25 18:35:08.726164 containerd[1577]: time="2024-06-25T18:35:08.726146834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jun 25 18:35:08.727337 containerd[1577]: time="2024-06-25T18:35:08.726399041Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 25 18:35:08.727337 containerd[1577]: time="2024-06-25T18:35:08.726473125Z" level=info msg="Connect containerd service" Jun 25 18:35:08.727337 containerd[1577]: time="2024-06-25T18:35:08.726500633Z" level=info msg="using legacy CRI server" Jun 25 18:35:08.727337 containerd[1577]: time="2024-06-25T18:35:08.726507735Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 25 18:35:08.727337 containerd[1577]: time="2024-06-25T18:35:08.726581296Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 25 18:35:08.733667 containerd[1577]: time="2024-06-25T18:35:08.733234719Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 
18:35:08.733667 containerd[1577]: time="2024-06-25T18:35:08.733292923Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 25 18:35:08.733667 containerd[1577]: time="2024-06-25T18:35:08.733323460Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jun 25 18:35:08.733667 containerd[1577]: time="2024-06-25T18:35:08.733343186Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 25 18:35:08.733667 containerd[1577]: time="2024-06-25T18:35:08.733363396Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jun 25 18:35:08.733832 containerd[1577]: time="2024-06-25T18:35:08.733673412Z" level=info msg="Start subscribing containerd event" Jun 25 18:35:08.733832 containerd[1577]: time="2024-06-25T18:35:08.733748817Z" level=info msg="Start recovering state" Jun 25 18:35:08.733832 containerd[1577]: time="2024-06-25T18:35:08.733827546Z" level=info msg="Start event monitor" Jun 25 18:35:08.733895 containerd[1577]: time="2024-06-25T18:35:08.733845666Z" level=info msg="Start snapshots syncer" Jun 25 18:35:08.733895 containerd[1577]: time="2024-06-25T18:35:08.733861871Z" level=info msg="Start cni network conf syncer for default" Jun 25 18:35:08.733895 containerd[1577]: time="2024-06-25T18:35:08.733871547Z" level=info msg="Start streaming server" Jun 25 18:35:08.736551 containerd[1577]: time="2024-06-25T18:35:08.734149733Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 25 18:35:08.736551 containerd[1577]: time="2024-06-25T18:35:08.734225474Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 25 18:35:08.736551 containerd[1577]: time="2024-06-25T18:35:08.734282810Z" level=info msg="containerd successfully booted in 0.071733s" Jun 25 18:35:08.734373 systemd[1]: Started containerd.service - containerd container runtime. Jun 25 18:35:08.815917 sshd_keygen[1571]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 25 18:35:08.845376 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 25 18:35:08.857329 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 25 18:35:08.864052 systemd[1]: Started sshd@0-172.24.4.45:22-172.24.4.1:58432.service - OpenSSH per-connection server daemon (172.24.4.1:58432). Jun 25 18:35:08.881075 systemd[1]: issuegen.service: Deactivated successfully. Jun 25 18:35:08.881300 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 25 18:35:08.901118 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 25 18:35:08.931910 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 25 18:35:08.941287 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 25 18:35:08.955125 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 25 18:35:08.960237 systemd[1]: Reached target getty.target - Login Prompts. Jun 25 18:35:09.058389 tar[1573]: linux-amd64/LICENSE Jun 25 18:35:09.058524 tar[1573]: linux-amd64/README.md Jun 25 18:35:09.069462 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 25 18:35:09.849979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
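The containerd CRI plugin above reports "no network config found in /etc/cni/net.d", which is expected before a network add-on (or kubeadm plus a CNI plugin) has installed one. A minimal bridge conflist that would satisfy that loader is sketched below, assuming the standard bridge and host-local plugins exist under /opt/cni/bin; the file name and subnet are illustrative only:

    cat >/etc/cni/net.d/10-example.conflist <<'EOF'
    {
      "cniVersion": "0.4.0",
      "name": "example-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.88.0.0/16",
            "routes": [ { "dst": "0.0.0.0/0" } ]
          }
        }
      ]
    }
    EOF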
Jun 25 18:35:09.857962 (kubelet)[1664]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:35:10.014233 sshd[1638]: Accepted publickey for core from 172.24.4.1 port 58432 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:35:10.016435 sshd[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:35:10.041082 systemd-logind[1559]: New session 1 of user core. Jun 25 18:35:10.042186 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 25 18:35:10.053103 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 25 18:35:10.072267 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 25 18:35:10.084080 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 25 18:35:10.107564 (systemd)[1669]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:35:10.242136 systemd[1669]: Queued start job for default target default.target. Jun 25 18:35:10.242864 systemd[1669]: Created slice app.slice - User Application Slice. Jun 25 18:35:10.243215 systemd[1669]: Reached target paths.target - Paths. Jun 25 18:35:10.243232 systemd[1669]: Reached target timers.target - Timers. Jun 25 18:35:10.253807 systemd[1669]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 25 18:35:10.262915 systemd[1669]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 25 18:35:10.262986 systemd[1669]: Reached target sockets.target - Sockets. Jun 25 18:35:10.263001 systemd[1669]: Reached target basic.target - Basic System. Jun 25 18:35:10.263053 systemd[1669]: Reached target default.target - Main User Target. Jun 25 18:35:10.263078 systemd[1669]: Startup finished in 141ms. Jun 25 18:35:10.263410 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 25 18:35:10.280091 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 25 18:35:10.647294 systemd[1]: Started sshd@1-172.24.4.45:22-172.24.4.1:58446.service - OpenSSH per-connection server daemon (172.24.4.1:58446). Jun 25 18:35:11.081927 kubelet[1664]: E0625 18:35:11.081813 1664 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:35:11.086746 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:35:11.086920 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:35:11.845395 sshd[1686]: Accepted publickey for core from 172.24.4.1 port 58446 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:35:11.848004 sshd[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:35:11.855271 systemd-logind[1559]: New session 2 of user core. Jun 25 18:35:11.866039 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 25 18:35:12.541677 sshd[1686]: pam_unix(sshd:session): session closed for user core Jun 25 18:35:12.552476 systemd[1]: Started sshd@2-172.24.4.45:22-172.24.4.1:58460.service - OpenSSH per-connection server daemon (172.24.4.1:58460). Jun 25 18:35:12.560066 systemd[1]: sshd@1-172.24.4.45:22-172.24.4.1:58446.service: Deactivated successfully. 
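The kubelet exit above (run.go:74) is caused by the missing /var/lib/kubelet/config.yaml; systemd keeps restarting the unit until that file exists, which normally happens only once kubeadm init/join (or a provisioning script) has written it. For illustration only, a minimal KubeletConfiguration of the kind that file contains; the real one carries many more fields and is generated rather than hand-written (cgroupfs matches the driver this node later reports):

    cat >/var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs
    staticPodPath: /etc/kubernetes/manifests
    EOF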
Jun 25 18:35:12.569759 systemd[1]: session-2.scope: Deactivated successfully. Jun 25 18:35:12.571040 systemd-logind[1559]: Session 2 logged out. Waiting for processes to exit. Jun 25 18:35:12.573279 systemd-logind[1559]: Removed session 2. Jun 25 18:35:13.757944 sshd[1696]: Accepted publickey for core from 172.24.4.1 port 58460 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:35:13.760975 sshd[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:35:13.765354 systemd-logind[1559]: New session 3 of user core. Jun 25 18:35:13.776125 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 25 18:35:14.002596 login[1648]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 25 18:35:14.012485 login[1649]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 25 18:35:14.014983 systemd-logind[1559]: New session 4 of user core. Jun 25 18:35:14.025377 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 25 18:35:14.034121 systemd-logind[1559]: New session 5 of user core. Jun 25 18:35:14.042661 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 25 18:35:14.353087 sshd[1696]: pam_unix(sshd:session): session closed for user core Jun 25 18:35:14.359569 systemd[1]: sshd@2-172.24.4.45:22-172.24.4.1:58460.service: Deactivated successfully. Jun 25 18:35:14.366639 systemd-logind[1559]: Session 3 logged out. Waiting for processes to exit. Jun 25 18:35:14.368314 systemd[1]: session-3.scope: Deactivated successfully. Jun 25 18:35:14.371177 systemd-logind[1559]: Removed session 3. Jun 25 18:35:14.966390 coreos-metadata[1528]: Jun 25 18:35:14.966 WARN failed to locate config-drive, using the metadata service API instead Jun 25 18:35:15.013901 coreos-metadata[1528]: Jun 25 18:35:15.013 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jun 25 18:35:15.444152 coreos-metadata[1528]: Jun 25 18:35:15.444 INFO Fetch successful Jun 25 18:35:15.444152 coreos-metadata[1528]: Jun 25 18:35:15.444 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jun 25 18:35:15.460758 coreos-metadata[1528]: Jun 25 18:35:15.460 INFO Fetch successful Jun 25 18:35:15.460874 coreos-metadata[1528]: Jun 25 18:35:15.460 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jun 25 18:35:15.474802 coreos-metadata[1528]: Jun 25 18:35:15.474 INFO Fetch successful Jun 25 18:35:15.474914 coreos-metadata[1528]: Jun 25 18:35:15.474 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jun 25 18:35:15.489438 coreos-metadata[1528]: Jun 25 18:35:15.489 INFO Fetch successful Jun 25 18:35:15.489577 coreos-metadata[1528]: Jun 25 18:35:15.489 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jun 25 18:35:15.506915 coreos-metadata[1528]: Jun 25 18:35:15.506 INFO Fetch successful Jun 25 18:35:15.506915 coreos-metadata[1528]: Jun 25 18:35:15.506 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jun 25 18:35:15.519564 coreos-metadata[1616]: Jun 25 18:35:15.519 WARN failed to locate config-drive, using the metadata service API instead Jun 25 18:35:15.523793 coreos-metadata[1528]: Jun 25 18:35:15.521 INFO Fetch successful Jun 25 18:35:15.562762 coreos-metadata[1616]: Jun 25 18:35:15.562 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jun 25 18:35:15.579810 coreos-metadata[1616]: Jun 25 18:35:15.579 INFO Fetch successful Jun 
25 18:35:15.579810 coreos-metadata[1616]: Jun 25 18:35:15.579 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jun 25 18:35:15.583957 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 25 18:35:15.585791 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 25 18:35:15.594940 coreos-metadata[1616]: Jun 25 18:35:15.594 INFO Fetch successful Jun 25 18:35:15.597588 unknown[1616]: wrote ssh authorized keys file for user: core Jun 25 18:35:15.631910 update-ssh-keys[1739]: Updated "/home/core/.ssh/authorized_keys" Jun 25 18:35:15.632789 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jun 25 18:35:15.646387 systemd[1]: Finished sshkeys.service. Jun 25 18:35:15.648552 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 25 18:35:15.649934 systemd[1]: Startup finished in 19.840s (kernel) + 13.231s (userspace) = 33.071s. Jun 25 18:35:21.250242 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 25 18:35:21.259028 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:35:21.578544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:35:21.589022 (kubelet)[1759]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:35:21.788455 kubelet[1759]: E0625 18:35:21.788328 1759 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:35:21.796847 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:35:21.797004 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:35:24.347581 systemd[1]: Started sshd@3-172.24.4.45:22-172.24.4.1:41014.service - OpenSSH per-connection server daemon (172.24.4.1:41014). Jun 25 18:35:25.768681 sshd[1769]: Accepted publickey for core from 172.24.4.1 port 41014 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:35:25.771218 sshd[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:35:25.782265 systemd-logind[1559]: New session 6 of user core. Jun 25 18:35:25.793315 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 25 18:35:26.540091 sshd[1769]: pam_unix(sshd:session): session closed for user core Jun 25 18:35:26.552884 systemd[1]: Started sshd@4-172.24.4.45:22-172.24.4.1:49792.service - OpenSSH per-connection server daemon (172.24.4.1:49792). Jun 25 18:35:26.557505 systemd[1]: sshd@3-172.24.4.45:22-172.24.4.1:41014.service: Deactivated successfully. Jun 25 18:35:26.565409 systemd[1]: session-6.scope: Deactivated successfully. Jun 25 18:35:26.569183 systemd-logind[1559]: Session 6 logged out. Waiting for processes to exit. Jun 25 18:35:26.574115 systemd-logind[1559]: Removed session 6. Jun 25 18:35:28.144591 sshd[1774]: Accepted publickey for core from 172.24.4.1 port 49792 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:35:28.147290 sshd[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:35:28.160972 systemd-logind[1559]: New session 7 of user core. 
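The coreos-metadata fetches above go against the OpenStack metadata service; the agent walks those URLs and appends the returned key to /home/core/.ssh/authorized_keys. The same endpoints can be queried by hand (URLs taken verbatim from the log):

    curl -s http://169.254.169.254/openstack/2012-08-10/meta_data.json
    curl -s http://169.254.169.254/latest/meta-data/hostname
    curl -s http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key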
Jun 25 18:35:28.172239 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 25 18:35:28.955076 sshd[1774]: pam_unix(sshd:session): session closed for user core Jun 25 18:35:28.969560 systemd[1]: Started sshd@5-172.24.4.45:22-172.24.4.1:49806.service - OpenSSH per-connection server daemon (172.24.4.1:49806). Jun 25 18:35:28.972083 systemd[1]: sshd@4-172.24.4.45:22-172.24.4.1:49792.service: Deactivated successfully. Jun 25 18:35:28.981445 systemd[1]: session-7.scope: Deactivated successfully. Jun 25 18:35:28.985263 systemd-logind[1559]: Session 7 logged out. Waiting for processes to exit. Jun 25 18:35:28.987815 systemd-logind[1559]: Removed session 7. Jun 25 18:35:30.165156 sshd[1782]: Accepted publickey for core from 172.24.4.1 port 49806 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:35:30.167755 sshd[1782]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:35:30.180595 systemd-logind[1559]: New session 8 of user core. Jun 25 18:35:30.190459 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 25 18:35:30.961112 sshd[1782]: pam_unix(sshd:session): session closed for user core Jun 25 18:35:30.975125 systemd[1]: Started sshd@6-172.24.4.45:22-172.24.4.1:49810.service - OpenSSH per-connection server daemon (172.24.4.1:49810). Jun 25 18:35:30.975760 systemd[1]: sshd@5-172.24.4.45:22-172.24.4.1:49806.service: Deactivated successfully. Jun 25 18:35:30.980028 systemd[1]: session-8.scope: Deactivated successfully. Jun 25 18:35:30.981213 systemd-logind[1559]: Session 8 logged out. Waiting for processes to exit. Jun 25 18:35:30.983776 systemd-logind[1559]: Removed session 8. Jun 25 18:35:32.001185 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 25 18:35:32.019412 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:35:32.240164 sshd[1790]: Accepted publickey for core from 172.24.4.1 port 49810 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:35:32.243187 sshd[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:35:32.253374 systemd-logind[1559]: New session 9 of user core. Jun 25 18:35:32.270372 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 25 18:35:32.433153 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:35:32.454405 (kubelet)[1809]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:35:32.641559 kubelet[1809]: E0625 18:35:32.641379 1809 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:35:32.644006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:35:32.644494 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
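The "Scheduled restart job, restart counter is at 2" entry reflects kubelet.service's Restart= policy; the counter keeps climbing in this log until the config file finally exists. A sketch of how that loop can be inspected from a shell (standard systemd properties; output will differ):

    systemctl show kubelet.service -p Restart -p RestartUSec -p NRestarts
    journalctl -u kubelet.service -o short-precise --no-pager | tail -n 20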
Jun 25 18:35:32.752080 sudo[1818]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 25 18:35:32.752679 sudo[1818]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:35:32.773934 sudo[1818]: pam_unix(sudo:session): session closed for user root Jun 25 18:35:33.003069 sshd[1790]: pam_unix(sshd:session): session closed for user core Jun 25 18:35:33.013361 systemd[1]: Started sshd@7-172.24.4.45:22-172.24.4.1:49826.service - OpenSSH per-connection server daemon (172.24.4.1:49826). Jun 25 18:35:33.016470 systemd[1]: sshd@6-172.24.4.45:22-172.24.4.1:49810.service: Deactivated successfully. Jun 25 18:35:33.024693 systemd[1]: session-9.scope: Deactivated successfully. Jun 25 18:35:33.025037 systemd-logind[1559]: Session 9 logged out. Waiting for processes to exit. Jun 25 18:35:33.038185 systemd-logind[1559]: Removed session 9. Jun 25 18:35:34.539780 sshd[1820]: Accepted publickey for core from 172.24.4.1 port 49826 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:35:34.541408 sshd[1820]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:35:34.547784 systemd-logind[1559]: New session 10 of user core. Jun 25 18:35:34.554039 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 25 18:35:35.061635 sudo[1828]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 25 18:35:35.062610 sudo[1828]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:35:35.080856 sudo[1828]: pam_unix(sudo:session): session closed for user root Jun 25 18:35:35.094995 sudo[1827]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jun 25 18:35:35.095924 sudo[1827]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:35:35.135309 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jun 25 18:35:35.156886 auditctl[1831]: No rules Jun 25 18:35:35.158271 systemd[1]: audit-rules.service: Deactivated successfully. Jun 25 18:35:35.159101 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jun 25 18:35:35.175823 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 18:35:35.233573 augenrules[1850]: No rules Jun 25 18:35:35.235112 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 18:35:35.237951 sudo[1827]: pam_unix(sudo:session): session closed for user root Jun 25 18:35:35.496691 sshd[1820]: pam_unix(sshd:session): session closed for user core Jun 25 18:35:35.511419 systemd[1]: Started sshd@8-172.24.4.45:22-172.24.4.1:43476.service - OpenSSH per-connection server daemon (172.24.4.1:43476). Jun 25 18:35:35.530492 systemd[1]: sshd@7-172.24.4.45:22-172.24.4.1:49826.service: Deactivated successfully. Jun 25 18:35:35.547091 systemd[1]: session-10.scope: Deactivated successfully. Jun 25 18:35:35.553290 systemd-logind[1559]: Session 10 logged out. Waiting for processes to exit. Jun 25 18:35:35.557113 systemd-logind[1559]: Removed session 10. Jun 25 18:35:36.580951 sshd[1856]: Accepted publickey for core from 172.24.4.1 port 43476 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:35:36.584116 sshd[1856]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:35:36.595245 systemd-logind[1559]: New session 11 of user core. 
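The sudo and audit-rules entries above amount to: remove the two rule files, then restart audit-rules, whose reload finds nothing left to load (hence "No rules" from both auditctl and augenrules). Roughly equivalent shell, as a sketch of the effect rather than the unit's exact internals:

    rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
    augenrules --load    # merge /etc/audit/rules.d/*.rules and load the result
    auditctl -l          # list loaded rules; prints "No rules" here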
Jun 25 18:35:36.610402 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 25 18:35:37.114022 sudo[1863]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 25 18:35:37.114679 sudo[1863]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:35:37.378416 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 25 18:35:37.382579 (dockerd)[1872]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 25 18:35:37.896022 dockerd[1872]: time="2024-06-25T18:35:37.895925009Z" level=info msg="Starting up" Jun 25 18:35:38.299087 dockerd[1872]: time="2024-06-25T18:35:38.299020842Z" level=info msg="Loading containers: start." Jun 25 18:35:38.478790 kernel: Initializing XFRM netlink socket Jun 25 18:35:38.599773 systemd-networkd[1205]: docker0: Link UP Jun 25 18:35:38.619926 dockerd[1872]: time="2024-06-25T18:35:38.619877336Z" level=info msg="Loading containers: done." Jun 25 18:35:38.778759 dockerd[1872]: time="2024-06-25T18:35:38.778629158Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 25 18:35:38.779161 dockerd[1872]: time="2024-06-25T18:35:38.779088750Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jun 25 18:35:38.779430 dockerd[1872]: time="2024-06-25T18:35:38.779367343Z" level=info msg="Daemon has completed initialization" Jun 25 18:35:38.835252 dockerd[1872]: time="2024-06-25T18:35:38.835178629Z" level=info msg="API listen on /run/docker.sock" Jun 25 18:35:38.836754 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 25 18:35:40.975290 containerd[1577]: time="2024-06-25T18:35:40.974536206Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\"" Jun 25 18:35:41.951168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2849088783.mount: Deactivated successfully. Jun 25 18:35:42.750821 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jun 25 18:35:42.762148 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:35:42.935909 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:35:42.946060 (kubelet)[2066]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:35:43.288039 kubelet[2066]: E0625 18:35:43.287948 2066 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:35:43.293384 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:35:43.295248 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
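With docker.service up, the daemon listens on /run/docker.sock and reports the overlay2 graphdriver and version 24.0.9; those values can be confirmed from the CLI. A sketch using standard docker format fields:

    docker version --format '{{.Server.Version}}'   # 24.0.9 per the log
    docker info --format '{{.Driver}}'              # overlay2
    ip link show docker0                            # the bridge systemd-networkd brought up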
Jun 25 18:35:45.141184 containerd[1577]: time="2024-06-25T18:35:45.141078395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:35:45.155770 containerd[1577]: time="2024-06-25T18:35:45.155620786Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.11: active requests=0, bytes read=34605186" Jun 25 18:35:45.175198 containerd[1577]: time="2024-06-25T18:35:45.175025308Z" level=info msg="ImageCreate event name:\"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:35:45.184792 containerd[1577]: time="2024-06-25T18:35:45.184574566Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:35:45.188866 containerd[1577]: time="2024-06-25T18:35:45.188045338Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.11\" with image id \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\", size \"34601978\" in 4.213399955s" Jun 25 18:35:45.188866 containerd[1577]: time="2024-06-25T18:35:45.188137898Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\"" Jun 25 18:35:45.241523 containerd[1577]: time="2024-06-25T18:35:45.241426087Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\"" Jun 25 18:35:48.671630 containerd[1577]: time="2024-06-25T18:35:48.671261776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:35:48.675335 containerd[1577]: time="2024-06-25T18:35:48.675114248Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.11: active requests=0, bytes read=31719499" Jun 25 18:35:48.677907 containerd[1577]: time="2024-06-25T18:35:48.677772109Z" level=info msg="ImageCreate event name:\"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:35:48.691002 containerd[1577]: time="2024-06-25T18:35:48.690780963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:35:48.694606 containerd[1577]: time="2024-06-25T18:35:48.693485370Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.11\" with image id \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\", size \"33315989\" in 3.451612778s" Jun 25 18:35:48.694606 containerd[1577]: time="2024-06-25T18:35:48.693575496Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\"" Jun 25 
18:35:48.746610 containerd[1577]: time="2024-06-25T18:35:48.746514128Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\"" Jun 25 18:35:51.966956 containerd[1577]: time="2024-06-25T18:35:51.966779705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:35:51.969878 containerd[1577]: time="2024-06-25T18:35:51.969750536Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.11: active requests=0, bytes read=16925513" Jun 25 18:35:51.971812 containerd[1577]: time="2024-06-25T18:35:51.971666329Z" level=info msg="ImageCreate event name:\"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:35:51.982223 containerd[1577]: time="2024-06-25T18:35:51.982064577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:35:51.985211 containerd[1577]: time="2024-06-25T18:35:51.984938809Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.11\" with image id \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\", size \"18522021\" in 3.23834261s" Jun 25 18:35:51.985211 containerd[1577]: time="2024-06-25T18:35:51.985024939Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\"" Jun 25 18:35:52.043374 containerd[1577]: time="2024-06-25T18:35:52.042826851Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\"" Jun 25 18:35:53.501199 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jun 25 18:35:53.509967 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:35:53.594157 update_engine[1565]: I0625 18:35:53.593791 1565 update_attempter.cc:509] Updating boot flags... Jun 25 18:35:53.689393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3870414640.mount: Deactivated successfully. Jun 25 18:35:54.093039 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2115) Jun 25 18:35:54.127901 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:35:54.139582 (kubelet)[2129]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:35:54.200698 kubelet[2129]: E0625 18:35:54.200634 2129 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:35:54.204880 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:35:54.205047 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
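The update_attempter "Updating boot flags..." entry belongs to Flatcar's update_engine, whose check scheduler started earlier with locksmithd running the "reboot" strategy. Assuming the stock Flatcar client tools are present, their state can be queried as sketched here:

    update_engine_client -status    # e.g. CURRENT_OP=UPDATE_STATUS_IDLE between checks
    locksmithctl status             # reboot-coordination strategy and any held locks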
Jun 25 18:35:54.242895 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2117) Jun 25 18:35:54.300750 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2117) Jun 25 18:35:54.763551 containerd[1577]: time="2024-06-25T18:35:54.763490770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:35:54.765241 containerd[1577]: time="2024-06-25T18:35:54.765176381Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.11: active requests=0, bytes read=28118427" Jun 25 18:35:54.766808 containerd[1577]: time="2024-06-25T18:35:54.766771715Z" level=info msg="ImageCreate event name:\"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:35:54.770282 containerd[1577]: time="2024-06-25T18:35:54.770241010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:35:54.771296 containerd[1577]: time="2024-06-25T18:35:54.770823653Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.11\" with image id \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\", repo tag \"registry.k8s.io/kube-proxy:v1.28.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\", size \"28117438\" in 2.727923285s" Jun 25 18:35:54.771296 containerd[1577]: time="2024-06-25T18:35:54.770863968Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\"" Jun 25 18:35:54.796457 containerd[1577]: time="2024-06-25T18:35:54.796349416Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jun 25 18:35:55.648050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2778558562.mount: Deactivated successfully. 
Jun 25 18:35:55.782299 containerd[1577]: time="2024-06-25T18:35:55.782134107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:35:55.784466 containerd[1577]: time="2024-06-25T18:35:55.784381535Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Jun 25 18:35:55.789914 containerd[1577]: time="2024-06-25T18:35:55.789773507Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:35:55.795628 containerd[1577]: time="2024-06-25T18:35:55.795280993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:35:55.797978 containerd[1577]: time="2024-06-25T18:35:55.797663431Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 1.001260216s" Jun 25 18:35:55.797978 containerd[1577]: time="2024-06-25T18:35:55.797784326Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jun 25 18:35:55.856603 containerd[1577]: time="2024-06-25T18:35:55.856186711Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jun 25 18:35:57.164674 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3089247403.mount: Deactivated successfully. 
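Each PullImage/Pulled pair in these entries is containerd's CRI plugin fetching an image into its k8s.io namespace; the same pulls can be reproduced by hand. A sketch using two references taken from the log:

    ctr --namespace k8s.io images pull registry.k8s.io/pause:3.9
    crictl pull registry.k8s.io/kube-proxy:v1.28.11
    crictl images | grep -E 'pause|kube-proxy'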
Jun 25 18:36:01.248699 containerd[1577]: time="2024-06-25T18:36:01.248554996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:36:01.250072 containerd[1577]: time="2024-06-25T18:36:01.250004937Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633" Jun 25 18:36:01.252393 containerd[1577]: time="2024-06-25T18:36:01.252345327Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:36:01.256602 containerd[1577]: time="2024-06-25T18:36:01.256543638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:36:01.258173 containerd[1577]: time="2024-06-25T18:36:01.258117129Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 5.401760072s" Jun 25 18:36:01.258235 containerd[1577]: time="2024-06-25T18:36:01.258175909Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jun 25 18:36:01.286654 containerd[1577]: time="2024-06-25T18:36:01.286372003Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Jun 25 18:36:02.001325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1781931715.mount: Deactivated successfully. Jun 25 18:36:04.250264 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jun 25 18:36:04.259083 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:36:05.213137 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 25 18:36:05.235504 (kubelet)[2231]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:36:05.286360 containerd[1577]: time="2024-06-25T18:36:05.286218494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:36:05.291486 containerd[1577]: time="2024-06-25T18:36:05.291309168Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=16191757" Jun 25 18:36:05.294481 containerd[1577]: time="2024-06-25T18:36:05.294379473Z" level=info msg="ImageCreate event name:\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:36:05.313596 containerd[1577]: time="2024-06-25T18:36:05.313452271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:36:05.317546 containerd[1577]: time="2024-06-25T18:36:05.317440107Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"16190758\" in 4.030994967s" Jun 25 18:36:05.317676 containerd[1577]: time="2024-06-25T18:36:05.317556625Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Jun 25 18:36:05.444363 kubelet[2231]: E0625 18:36:05.444253 2231 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:36:05.447323 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:36:05.448292 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:36:09.682077 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:36:09.700360 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:36:09.741585 systemd[1]: Reloading requested from client PID 2301 ('systemctl') (unit session-11.scope)... Jun 25 18:36:09.741791 systemd[1]: Reloading... Jun 25 18:36:09.853985 zram_generator::config[2335]: No configuration found. Jun 25 18:36:10.019348 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:36:10.098032 systemd[1]: Reloading finished in 355 ms. Jun 25 18:36:10.160313 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 25 18:36:10.160605 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 25 18:36:10.161210 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:36:10.170169 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
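The "Reloading requested from client PID 2301 ('systemctl') (unit session-11.scope)" entry and the stop/start of kubelet.service around it are consistent with a daemon-reload plus restart issued from session 11 (the session that ran /home/core/install.sh as root); the exact commands are not in the log, so the following is only the likely equivalent:

    sudo systemctl daemon-reload
    sudo systemctl restart kubelet.service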
Jun 25 18:36:10.306390 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:36:10.320286 (kubelet)[2417]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 25 18:36:10.894873 kubelet[2417]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:36:10.896327 kubelet[2417]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 18:36:10.896327 kubelet[2417]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:36:10.896327 kubelet[2417]: I0625 18:36:10.895413 2417 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 18:36:11.268783 kubelet[2417]: I0625 18:36:11.268687 2417 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 25 18:36:11.268783 kubelet[2417]: I0625 18:36:11.268739 2417 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 18:36:11.269046 kubelet[2417]: I0625 18:36:11.268994 2417 server.go:895] "Client rotation is on, will bootstrap in background" Jun 25 18:36:11.390375 kubelet[2417]: E0625 18:36:11.389943 2417 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.45:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.45:6443: connect: connection refused Jun 25 18:36:11.390375 kubelet[2417]: I0625 18:36:11.389991 2417 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 18:36:11.436788 kubelet[2417]: I0625 18:36:11.436675 2417 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 18:36:11.451013 kubelet[2417]: I0625 18:36:11.450124 2417 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 18:36:11.451013 kubelet[2417]: I0625 18:36:11.450599 2417 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 18:36:11.451013 kubelet[2417]: I0625 18:36:11.450644 2417 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 18:36:11.451013 kubelet[2417]: I0625 18:36:11.450670 2417 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 18:36:11.461972 kubelet[2417]: I0625 18:36:11.461580 2417 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:36:11.468692 kubelet[2417]: W0625 18:36:11.468523 2417 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.24.4.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012-0-0-f-7092d20389.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.45:6443: connect: connection refused Jun 25 18:36:11.468692 kubelet[2417]: E0625 18:36:11.468650 2417 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012-0-0-f-7092d20389.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.45:6443: connect: connection refused Jun 25 18:36:11.479567 kubelet[2417]: I0625 18:36:11.479468 2417 kubelet.go:393] "Attempting to sync node with API server" Jun 25 18:36:11.479567 kubelet[2417]: I0625 18:36:11.479538 2417 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 18:36:11.480920 kubelet[2417]: I0625 18:36:11.479608 2417 kubelet.go:309] "Adding apiserver pod source" Jun 25 18:36:11.480920 kubelet[2417]: I0625 18:36:11.479642 2417 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 18:36:11.491471 kubelet[2417]: W0625 18:36:11.491359 2417 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.24.4.45:6443/api/v1/services?limit=500&resourceVersion=0": 
dial tcp 172.24.4.45:6443: connect: connection refused Jun 25 18:36:11.491611 kubelet[2417]: E0625 18:36:11.491484 2417 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.45:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.45:6443: connect: connection refused Jun 25 18:36:11.495777 kubelet[2417]: I0625 18:36:11.493878 2417 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Jun 25 18:36:11.526315 kubelet[2417]: W0625 18:36:11.526170 2417 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 25 18:36:11.528633 kubelet[2417]: I0625 18:36:11.528582 2417 server.go:1232] "Started kubelet" Jun 25 18:36:11.577821 kubelet[2417]: I0625 18:36:11.577299 2417 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 18:36:11.579332 kubelet[2417]: I0625 18:36:11.579263 2417 server.go:462] "Adding debug handlers to kubelet server" Jun 25 18:36:11.584526 kubelet[2417]: I0625 18:36:11.584404 2417 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 18:36:11.631871 kubelet[2417]: I0625 18:36:11.631791 2417 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 25 18:36:11.633966 kubelet[2417]: I0625 18:36:11.632321 2417 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 18:36:11.633966 kubelet[2417]: I0625 18:36:11.632702 2417 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 18:36:11.633966 kubelet[2417]: I0625 18:36:11.632912 2417 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 18:36:11.633966 kubelet[2417]: I0625 18:36:11.633038 2417 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 18:36:11.633966 kubelet[2417]: E0625 18:36:11.633584 2417 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-4012-0-0-f-7092d20389.novalocal.17dc5323ede00ddd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-4012-0-0-f-7092d20389.novalocal", UID:"ci-4012-0-0-f-7092d20389.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-4012-0-0-f-7092d20389.novalocal"}, FirstTimestamp:time.Date(2024, time.June, 25, 18, 36, 11, 528531421, time.Local), LastTimestamp:time.Date(2024, time.June, 25, 18, 36, 11, 528531421, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-4012-0-0-f-7092d20389.novalocal"}': 'Post "https://172.24.4.45:6443/api/v1/namespaces/default/events": dial tcp 172.24.4.45:6443: connect: connection refused'(may retry after sleeping) Jun 25 18:36:11.634472 kubelet[2417]: E0625 
18:36:11.633950 2417 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012-0-0-f-7092d20389.novalocal?timeout=10s\": dial tcp 172.24.4.45:6443: connect: connection refused" interval="200ms" Jun 25 18:36:11.637774 kubelet[2417]: W0625 18:36:11.635497 2417 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.24.4.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.45:6443: connect: connection refused Jun 25 18:36:11.637774 kubelet[2417]: E0625 18:36:11.635598 2417 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.45:6443: connect: connection refused Jun 25 18:36:11.646765 kubelet[2417]: E0625 18:36:11.645444 2417 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 25 18:36:11.646765 kubelet[2417]: E0625 18:36:11.645513 2417 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 18:36:11.684817 kubelet[2417]: I0625 18:36:11.684697 2417 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 18:36:11.687494 kubelet[2417]: I0625 18:36:11.687443 2417 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 25 18:36:11.687613 kubelet[2417]: I0625 18:36:11.687507 2417 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 18:36:11.687613 kubelet[2417]: I0625 18:36:11.687543 2417 kubelet.go:2303] "Starting kubelet main sync loop" Jun 25 18:36:11.687765 kubelet[2417]: E0625 18:36:11.687646 2417 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 18:36:11.700811 kubelet[2417]: W0625 18:36:11.700763 2417 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.24.4.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.45:6443: connect: connection refused Jun 25 18:36:11.701111 kubelet[2417]: E0625 18:36:11.701095 2417 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.45:6443: connect: connection refused Jun 25 18:36:11.740686 kubelet[2417]: I0625 18:36:11.740645 2417 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012-0-0-f-7092d20389.novalocal" Jun 25 18:36:11.741085 kubelet[2417]: E0625 18:36:11.741065 2417 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.45:6443/api/v1/nodes\": dial tcp 172.24.4.45:6443: connect: connection refused" node="ci-4012-0-0-f-7092d20389.novalocal" Jun 25 18:36:11.743303 kubelet[2417]: E0625 18:36:11.743134 2417 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, 
ObjectMeta:v1.ObjectMeta{Name:"ci-4012-0-0-f-7092d20389.novalocal.17dc5323ede00ddd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-4012-0-0-f-7092d20389.novalocal", UID:"ci-4012-0-0-f-7092d20389.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-4012-0-0-f-7092d20389.novalocal"}, FirstTimestamp:time.Date(2024, time.June, 25, 18, 36, 11, 528531421, time.Local), LastTimestamp:time.Date(2024, time.June, 25, 18, 36, 11, 528531421, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-4012-0-0-f-7092d20389.novalocal"}': 'Post "https://172.24.4.45:6443/api/v1/namespaces/default/events": dial tcp 172.24.4.45:6443: connect: connection refused'(may retry after sleeping) Jun 25 18:36:11.743453 kubelet[2417]: I0625 18:36:11.743339 2417 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 18:36:11.743453 kubelet[2417]: I0625 18:36:11.743351 2417 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 18:36:11.743453 kubelet[2417]: I0625 18:36:11.743378 2417 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:36:11.750395 kubelet[2417]: I0625 18:36:11.750355 2417 policy_none.go:49] "None policy: Start" Jun 25 18:36:11.751322 kubelet[2417]: I0625 18:36:11.751218 2417 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 25 18:36:11.751375 kubelet[2417]: I0625 18:36:11.751328 2417 state_mem.go:35] "Initializing new in-memory state store" Jun 25 18:36:11.757834 kubelet[2417]: I0625 18:36:11.757804 2417 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 18:36:11.758664 kubelet[2417]: I0625 18:36:11.758037 2417 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 18:36:11.761385 kubelet[2417]: E0625 18:36:11.761359 2417 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4012-0-0-f-7092d20389.novalocal\" not found" Jun 25 18:36:11.789009 kubelet[2417]: I0625 18:36:11.788849 2417 topology_manager.go:215] "Topology Admit Handler" podUID="da5fdabcb89c1c773c31f7824d5f4c9a" podNamespace="kube-system" podName="kube-apiserver-ci-4012-0-0-f-7092d20389.novalocal" Jun 25 18:36:11.793336 kubelet[2417]: I0625 18:36:11.793284 2417 topology_manager.go:215] "Topology Admit Handler" podUID="c2c553804d446708aab7f0115bf622ff" podNamespace="kube-system" podName="kube-controller-manager-ci-4012-0-0-f-7092d20389.novalocal" Jun 25 18:36:11.796995 kubelet[2417]: I0625 18:36:11.796823 2417 topology_manager.go:215] "Topology Admit Handler" podUID="3f8f09154fcb7bd640b4de5f6f925e34" podNamespace="kube-system" podName="kube-scheduler-ci-4012-0-0-f-7092d20389.novalocal" Jun 25 18:36:11.835361 kubelet[2417]: E0625 18:36:11.835313 2417 controller.go:146] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.24.4.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012-0-0-f-7092d20389.novalocal?timeout=10s\": dial tcp 172.24.4.45:6443: connect: connection refused" interval="400ms" Jun 25 18:36:11.934054 kubelet[2417]: I0625 18:36:11.933816 2417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c2c553804d446708aab7f0115bf622ff-kubeconfig\") pod \"kube-controller-manager-ci-4012-0-0-f-7092d20389.novalocal\" (UID: \"c2c553804d446708aab7f0115bf622ff\") " pod="kube-system/kube-controller-manager-ci-4012-0-0-f-7092d20389.novalocal" Jun 25 18:36:11.934054 kubelet[2417]: I0625 18:36:11.933915 2417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c2c553804d446708aab7f0115bf622ff-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4012-0-0-f-7092d20389.novalocal\" (UID: \"c2c553804d446708aab7f0115bf622ff\") " pod="kube-system/kube-controller-manager-ci-4012-0-0-f-7092d20389.novalocal" Jun 25 18:36:11.934054 kubelet[2417]: I0625 18:36:11.934001 2417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f8f09154fcb7bd640b4de5f6f925e34-kubeconfig\") pod \"kube-scheduler-ci-4012-0-0-f-7092d20389.novalocal\" (UID: \"3f8f09154fcb7bd640b4de5f6f925e34\") " pod="kube-system/kube-scheduler-ci-4012-0-0-f-7092d20389.novalocal" Jun 25 18:36:11.934054 kubelet[2417]: I0625 18:36:11.934059 2417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/da5fdabcb89c1c773c31f7824d5f4c9a-ca-certs\") pod \"kube-apiserver-ci-4012-0-0-f-7092d20389.novalocal\" (UID: \"da5fdabcb89c1c773c31f7824d5f4c9a\") " pod="kube-system/kube-apiserver-ci-4012-0-0-f-7092d20389.novalocal" Jun 25 18:36:11.935280 kubelet[2417]: I0625 18:36:11.934117 2417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c2c553804d446708aab7f0115bf622ff-ca-certs\") pod \"kube-controller-manager-ci-4012-0-0-f-7092d20389.novalocal\" (UID: \"c2c553804d446708aab7f0115bf622ff\") " pod="kube-system/kube-controller-manager-ci-4012-0-0-f-7092d20389.novalocal" Jun 25 18:36:11.935280 kubelet[2417]: I0625 18:36:11.934168 2417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c2c553804d446708aab7f0115bf622ff-k8s-certs\") pod \"kube-controller-manager-ci-4012-0-0-f-7092d20389.novalocal\" (UID: \"c2c553804d446708aab7f0115bf622ff\") " pod="kube-system/kube-controller-manager-ci-4012-0-0-f-7092d20389.novalocal" Jun 25 18:36:11.935280 kubelet[2417]: I0625 18:36:11.934222 2417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/da5fdabcb89c1c773c31f7824d5f4c9a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4012-0-0-f-7092d20389.novalocal\" (UID: \"da5fdabcb89c1c773c31f7824d5f4c9a\") " pod="kube-system/kube-apiserver-ci-4012-0-0-f-7092d20389.novalocal" Jun 25 18:36:11.935280 kubelet[2417]: I0625 18:36:11.934276 2417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" 
(UniqueName: \"kubernetes.io/host-path/c2c553804d446708aab7f0115bf622ff-flexvolume-dir\") pod \"kube-controller-manager-ci-4012-0-0-f-7092d20389.novalocal\" (UID: \"c2c553804d446708aab7f0115bf622ff\") " pod="kube-system/kube-controller-manager-ci-4012-0-0-f-7092d20389.novalocal" Jun 25 18:36:11.935534 kubelet[2417]: I0625 18:36:11.934328 2417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/da5fdabcb89c1c773c31f7824d5f4c9a-k8s-certs\") pod \"kube-apiserver-ci-4012-0-0-f-7092d20389.novalocal\" (UID: \"da5fdabcb89c1c773c31f7824d5f4c9a\") " pod="kube-system/kube-apiserver-ci-4012-0-0-f-7092d20389.novalocal" Jun 25 18:36:11.945019 kubelet[2417]: I0625 18:36:11.944981 2417 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012-0-0-f-7092d20389.novalocal" Jun 25 18:36:11.945558 kubelet[2417]: E0625 18:36:11.945525 2417 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.45:6443/api/v1/nodes\": dial tcp 172.24.4.45:6443: connect: connection refused" node="ci-4012-0-0-f-7092d20389.novalocal" Jun 25 18:36:12.105566 containerd[1577]: time="2024-06-25T18:36:12.105056165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4012-0-0-f-7092d20389.novalocal,Uid:da5fdabcb89c1c773c31f7824d5f4c9a,Namespace:kube-system,Attempt:0,}" Jun 25 18:36:12.118188 containerd[1577]: time="2024-06-25T18:36:12.117342698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4012-0-0-f-7092d20389.novalocal,Uid:3f8f09154fcb7bd640b4de5f6f925e34,Namespace:kube-system,Attempt:0,}" Jun 25 18:36:12.118188 containerd[1577]: time="2024-06-25T18:36:12.117376380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4012-0-0-f-7092d20389.novalocal,Uid:c2c553804d446708aab7f0115bf622ff,Namespace:kube-system,Attempt:0,}" Jun 25 18:36:12.237144 kubelet[2417]: E0625 18:36:12.237052 2417 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012-0-0-f-7092d20389.novalocal?timeout=10s\": dial tcp 172.24.4.45:6443: connect: connection refused" interval="800ms" Jun 25 18:36:12.352784 kubelet[2417]: I0625 18:36:12.351615 2417 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012-0-0-f-7092d20389.novalocal" Jun 25 18:36:12.353393 kubelet[2417]: E0625 18:36:12.353168 2417 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.45:6443/api/v1/nodes\": dial tcp 172.24.4.45:6443: connect: connection refused" node="ci-4012-0-0-f-7092d20389.novalocal" Jun 25 18:36:12.437842 systemd-journald[1129]: Under memory pressure, flushing caches. Jun 25 18:36:12.435455 systemd-resolved[1463]: Under memory pressure, flushing caches. Jun 25 18:36:12.435530 systemd-resolved[1463]: Flushed all caches. 
Jun 25 18:36:12.462450 kubelet[2417]: W0625 18:36:12.462259 2417 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.24.4.45:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.45:6443: connect: connection refused Jun 25 18:36:12.462935 kubelet[2417]: E0625 18:36:12.462805 2417 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.45:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.45:6443: connect: connection refused Jun 25 18:36:12.605131 kubelet[2417]: W0625 18:36:12.605021 2417 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.24.4.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.45:6443: connect: connection refused Jun 25 18:36:12.605131 kubelet[2417]: E0625 18:36:12.605145 2417 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.45:6443: connect: connection refused Jun 25 18:36:12.724928 kubelet[2417]: W0625 18:36:12.724633 2417 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.24.4.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012-0-0-f-7092d20389.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.45:6443: connect: connection refused Jun 25 18:36:12.724928 kubelet[2417]: E0625 18:36:12.724852 2417 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012-0-0-f-7092d20389.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.45:6443: connect: connection refused Jun 25 18:36:13.000866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2760715692.mount: Deactivated successfully. 
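[Editor's note, a hedged sketch of the failing requests above:] The repeated reflector warnings are client-go informers trying to List Services, CSIDrivers, and this Node against https://172.24.4.45:6443 while the kube-apiserver static pod is still coming up, hence "connection refused" followed by backoff and retry. The snippet below mirrors the logged Node list request (same field selector and limit); the kubeconfig path is an assumed location, not something stated in the log.

// Hedged sketch: roughly the List call behind the failing
// ".../api/v1/nodes?fieldSelector=metadata.name%3D...&limit=500" request above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// While the apiserver is down this returns a "connection refused" error,
	// which is exactly what the reflector logs and retries.
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{
		FieldSelector: "metadata.name=ci-4012-0-0-f-7092d20389.novalocal",
		Limit:         500,
	})
	if err != nil {
		fmt.Println("list nodes failed:", err)
		return
	}
	fmt.Println("nodes:", len(nodes.Items))
}
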
Jun 25 18:36:13.014887 containerd[1577]: time="2024-06-25T18:36:13.014530462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:36:13.017621 containerd[1577]: time="2024-06-25T18:36:13.016970719Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 18:36:13.020893 containerd[1577]: time="2024-06-25T18:36:13.020803586Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:36:13.023091 containerd[1577]: time="2024-06-25T18:36:13.022908999Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:36:13.033107 containerd[1577]: time="2024-06-25T18:36:13.032861084Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:36:13.034895 containerd[1577]: time="2024-06-25T18:36:13.034571249Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 18:36:13.036677 containerd[1577]: time="2024-06-25T18:36:13.036590459Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jun 25 18:36:13.038993 kubelet[2417]: E0625 18:36:13.038897 2417 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012-0-0-f-7092d20389.novalocal?timeout=10s\": dial tcp 172.24.4.45:6443: connect: connection refused" interval="1.6s" Jun 25 18:36:13.043280 containerd[1577]: time="2024-06-25T18:36:13.043032039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:36:13.049222 containerd[1577]: time="2024-06-25T18:36:13.048910046Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 931.408412ms" Jun 25 18:36:13.055419 containerd[1577]: time="2024-06-25T18:36:13.054873243Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 936.879901ms" Jun 25 18:36:13.056266 containerd[1577]: time="2024-06-25T18:36:13.056177198Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 
946.268311ms" Jun 25 18:36:13.175151 kubelet[2417]: W0625 18:36:13.174531 2417 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.24.4.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.45:6443: connect: connection refused Jun 25 18:36:13.175151 kubelet[2417]: E0625 18:36:13.174637 2417 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.45:6443: connect: connection refused Jun 25 18:36:13.175826 kubelet[2417]: I0625 18:36:13.175798 2417 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012-0-0-f-7092d20389.novalocal" Jun 25 18:36:13.176343 kubelet[2417]: E0625 18:36:13.176320 2417 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.45:6443/api/v1/nodes\": dial tcp 172.24.4.45:6443: connect: connection refused" node="ci-4012-0-0-f-7092d20389.novalocal" Jun 25 18:36:13.306374 containerd[1577]: time="2024-06-25T18:36:13.305942393Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:36:13.307198 containerd[1577]: time="2024-06-25T18:36:13.306059332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:36:13.307198 containerd[1577]: time="2024-06-25T18:36:13.307080970Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:36:13.307198 containerd[1577]: time="2024-06-25T18:36:13.307131654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:36:13.312932 containerd[1577]: time="2024-06-25T18:36:13.311903666Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:36:13.313844 containerd[1577]: time="2024-06-25T18:36:13.313766145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:36:13.313953 containerd[1577]: time="2024-06-25T18:36:13.313833220Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:36:13.313953 containerd[1577]: time="2024-06-25T18:36:13.313871982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:36:13.317699 containerd[1577]: time="2024-06-25T18:36:13.317294413Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:36:13.317699 containerd[1577]: time="2024-06-25T18:36:13.317360858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:36:13.317699 containerd[1577]: time="2024-06-25T18:36:13.317385724Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:36:13.317699 containerd[1577]: time="2024-06-25T18:36:13.317403598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:36:13.416426 kubelet[2417]: E0625 18:36:13.416350 2417 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.45:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.45:6443: connect: connection refused Jun 25 18:36:13.430057 containerd[1577]: time="2024-06-25T18:36:13.429909576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4012-0-0-f-7092d20389.novalocal,Uid:da5fdabcb89c1c773c31f7824d5f4c9a,Namespace:kube-system,Attempt:0,} returns sandbox id \"199973497e5ffcfc92490499e0b8e8198517d13c8c947b58ef4bf7cc61264ad4\"" Jun 25 18:36:13.431822 containerd[1577]: time="2024-06-25T18:36:13.431514383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4012-0-0-f-7092d20389.novalocal,Uid:c2c553804d446708aab7f0115bf622ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"32d0bd6b0ee6c84b3762fd777bf559cf12fc3eaf316d025ba7e7ae4a833017e4\"" Jun 25 18:36:13.436806 containerd[1577]: time="2024-06-25T18:36:13.435618438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4012-0-0-f-7092d20389.novalocal,Uid:3f8f09154fcb7bd640b4de5f6f925e34,Namespace:kube-system,Attempt:0,} returns sandbox id \"99a541b3b1ee029b8725d1d642bccf34b9abaea6d8cef32473d207127e13582e\"" Jun 25 18:36:13.438445 containerd[1577]: time="2024-06-25T18:36:13.438410101Z" level=info msg="CreateContainer within sandbox \"199973497e5ffcfc92490499e0b8e8198517d13c8c947b58ef4bf7cc61264ad4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 25 18:36:13.438782 containerd[1577]: time="2024-06-25T18:36:13.438641784Z" level=info msg="CreateContainer within sandbox \"32d0bd6b0ee6c84b3762fd777bf559cf12fc3eaf316d025ba7e7ae4a833017e4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 25 18:36:13.439797 containerd[1577]: time="2024-06-25T18:36:13.439766063Z" level=info msg="CreateContainer within sandbox \"99a541b3b1ee029b8725d1d642bccf34b9abaea6d8cef32473d207127e13582e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 25 18:36:13.606797 containerd[1577]: time="2024-06-25T18:36:13.606443282Z" level=info msg="CreateContainer within sandbox \"199973497e5ffcfc92490499e0b8e8198517d13c8c947b58ef4bf7cc61264ad4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"26a55dbc99e0e4803d6c5155cf04af9e04a469f563f154f97b70327774098c99\"" Jun 25 18:36:13.608585 containerd[1577]: time="2024-06-25T18:36:13.608512406Z" level=info msg="StartContainer for \"26a55dbc99e0e4803d6c5155cf04af9e04a469f563f154f97b70327774098c99\"" Jun 25 18:36:13.622246 containerd[1577]: time="2024-06-25T18:36:13.622136710Z" level=info msg="CreateContainer within sandbox \"32d0bd6b0ee6c84b3762fd777bf559cf12fc3eaf316d025ba7e7ae4a833017e4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1ee925c048b69d52d3cc2a98efb4aa82e47092575fc2b1bf8fd88250cc66894e\"" Jun 25 18:36:13.632793 containerd[1577]: time="2024-06-25T18:36:13.632232093Z" level=info msg="StartContainer for 
\"1ee925c048b69d52d3cc2a98efb4aa82e47092575fc2b1bf8fd88250cc66894e\"" Jun 25 18:36:13.641781 containerd[1577]: time="2024-06-25T18:36:13.641658097Z" level=info msg="CreateContainer within sandbox \"99a541b3b1ee029b8725d1d642bccf34b9abaea6d8cef32473d207127e13582e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e3ec0be68ed45af94020a1f54e2d74ef4993eab1369291a2482cc0df61533941\"" Jun 25 18:36:13.644901 containerd[1577]: time="2024-06-25T18:36:13.644850078Z" level=info msg="StartContainer for \"e3ec0be68ed45af94020a1f54e2d74ef4993eab1369291a2482cc0df61533941\"" Jun 25 18:36:13.788011 containerd[1577]: time="2024-06-25T18:36:13.787819898Z" level=info msg="StartContainer for \"26a55dbc99e0e4803d6c5155cf04af9e04a469f563f154f97b70327774098c99\" returns successfully" Jun 25 18:36:13.813381 containerd[1577]: time="2024-06-25T18:36:13.813106731Z" level=info msg="StartContainer for \"1ee925c048b69d52d3cc2a98efb4aa82e47092575fc2b1bf8fd88250cc66894e\" returns successfully" Jun 25 18:36:13.814171 containerd[1577]: time="2024-06-25T18:36:13.813631622Z" level=info msg="StartContainer for \"e3ec0be68ed45af94020a1f54e2d74ef4993eab1369291a2482cc0df61533941\" returns successfully" Jun 25 18:36:14.482953 systemd-resolved[1463]: Under memory pressure, flushing caches. Jun 25 18:36:14.482988 systemd-resolved[1463]: Flushed all caches. Jun 25 18:36:14.485733 systemd-journald[1129]: Under memory pressure, flushing caches. Jun 25 18:36:14.779739 kubelet[2417]: I0625 18:36:14.778466 2417 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012-0-0-f-7092d20389.novalocal" Jun 25 18:36:16.494426 kubelet[2417]: I0625 18:36:16.494353 2417 apiserver.go:52] "Watching apiserver" Jun 25 18:36:16.571530 kubelet[2417]: E0625 18:36:16.571478 2417 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4012-0-0-f-7092d20389.novalocal\" not found" node="ci-4012-0-0-f-7092d20389.novalocal" Jun 25 18:36:16.634879 kubelet[2417]: I0625 18:36:16.634797 2417 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 18:36:16.645483 kubelet[2417]: I0625 18:36:16.645412 2417 kubelet_node_status.go:73] "Successfully registered node" node="ci-4012-0-0-f-7092d20389.novalocal" Jun 25 18:36:16.815874 kubelet[2417]: E0625 18:36:16.815836 2417 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4012-0-0-f-7092d20389.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4012-0-0-f-7092d20389.novalocal" Jun 25 18:36:19.223114 kubelet[2417]: W0625 18:36:19.222929 2417 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 18:36:19.501127 systemd[1]: Reloading requested from client PID 2694 ('systemctl') (unit session-11.scope)... Jun 25 18:36:19.501165 systemd[1]: Reloading... Jun 25 18:36:19.593743 zram_generator::config[2728]: No configuration found. Jun 25 18:36:19.787503 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:36:19.878947 systemd[1]: Reloading finished in 376 ms. Jun 25 18:36:19.909391 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Jun 25 18:36:19.920866 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 18:36:19.921300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:36:19.927375 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:36:20.390090 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:36:20.407441 (kubelet)[2805]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 25 18:36:20.547463 kubelet[2805]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:36:20.547463 kubelet[2805]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 18:36:20.547463 kubelet[2805]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:36:20.549738 kubelet[2805]: I0625 18:36:20.547775 2805 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 18:36:20.557956 kubelet[2805]: I0625 18:36:20.557179 2805 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 25 18:36:20.558103 kubelet[2805]: I0625 18:36:20.558090 2805 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 18:36:20.558429 kubelet[2805]: I0625 18:36:20.558415 2805 server.go:895] "Client rotation is on, will bootstrap in background" Jun 25 18:36:20.561173 kubelet[2805]: I0625 18:36:20.561143 2805 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 25 18:36:20.569862 kubelet[2805]: I0625 18:36:20.569833 2805 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 18:36:20.581245 kubelet[2805]: I0625 18:36:20.581222 2805 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 18:36:20.582982 kubelet[2805]: I0625 18:36:20.581844 2805 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 18:36:20.582982 kubelet[2805]: I0625 18:36:20.582038 2805 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 18:36:20.582982 kubelet[2805]: I0625 18:36:20.582057 2805 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 18:36:20.582982 kubelet[2805]: I0625 18:36:20.582069 2805 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 18:36:20.582982 kubelet[2805]: I0625 18:36:20.582103 2805 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:36:20.582982 kubelet[2805]: I0625 18:36:20.582188 2805 kubelet.go:393] "Attempting to sync node with API server" Jun 25 18:36:20.583269 kubelet[2805]: I0625 18:36:20.582203 2805 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 18:36:20.583269 kubelet[2805]: I0625 18:36:20.582225 2805 kubelet.go:309] "Adding apiserver pod source" Jun 25 18:36:20.583269 kubelet[2805]: I0625 18:36:20.582239 2805 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 18:36:20.584221 kubelet[2805]: I0625 18:36:20.584208 2805 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Jun 25 18:36:20.584757 kubelet[2805]: I0625 18:36:20.584745 2805 server.go:1232] "Started kubelet" Jun 25 18:36:20.586417 kubelet[2805]: I0625 18:36:20.586402 2805 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 18:36:20.596280 kubelet[2805]: I0625 18:36:20.596257 2805 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 18:36:20.597825 kubelet[2805]: I0625 18:36:20.597811 2805 server.go:462] "Adding debug handlers to kubelet server" Jun 25 18:36:20.599023 kubelet[2805]: I0625 18:36:20.599008 2805 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 25 18:36:20.599255 kubelet[2805]: I0625 18:36:20.599243 2805 server.go:233] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 18:36:20.601083 kubelet[2805]: I0625 18:36:20.601070 2805 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 18:36:20.603014 kubelet[2805]: I0625 18:36:20.602998 2805 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 18:36:20.603288 kubelet[2805]: I0625 18:36:20.603263 2805 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 18:36:20.606628 sudo[2819]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jun 25 18:36:20.607411 sudo[2819]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jun 25 18:36:20.610880 kubelet[2805]: I0625 18:36:20.610765 2805 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 18:36:20.613205 kubelet[2805]: I0625 18:36:20.612842 2805 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 25 18:36:20.613205 kubelet[2805]: I0625 18:36:20.612860 2805 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 18:36:20.613205 kubelet[2805]: I0625 18:36:20.612879 2805 kubelet.go:2303] "Starting kubelet main sync loop" Jun 25 18:36:20.613205 kubelet[2805]: E0625 18:36:20.612918 2805 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 18:36:20.618951 kubelet[2805]: E0625 18:36:20.617230 2805 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 25 18:36:20.618951 kubelet[2805]: E0625 18:36:20.617285 2805 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 18:36:20.714029 kubelet[2805]: I0625 18:36:20.713942 2805 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012-0-0-f-7092d20389.novalocal" Jun 25 18:36:20.718047 kubelet[2805]: E0625 18:36:20.717988 2805 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 25 18:36:20.730014 kubelet[2805]: I0625 18:36:20.729089 2805 kubelet_node_status.go:108] "Node was previously registered" node="ci-4012-0-0-f-7092d20389.novalocal" Jun 25 18:36:20.730014 kubelet[2805]: I0625 18:36:20.729155 2805 kubelet_node_status.go:73] "Successfully registered node" node="ci-4012-0-0-f-7092d20389.novalocal" Jun 25 18:36:20.777070 kubelet[2805]: I0625 18:36:20.777038 2805 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 18:36:20.777070 kubelet[2805]: I0625 18:36:20.777060 2805 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 18:36:20.777070 kubelet[2805]: I0625 18:36:20.777075 2805 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:36:20.777245 kubelet[2805]: I0625 18:36:20.777219 2805 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 25 18:36:20.777245 kubelet[2805]: I0625 18:36:20.777242 2805 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 25 18:36:20.777297 kubelet[2805]: I0625 18:36:20.777249 2805 policy_none.go:49] "None policy: Start" Jun 25 18:36:20.778895 kubelet[2805]: I0625 18:36:20.777932 2805 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 25 18:36:20.778895 kubelet[2805]: I0625 18:36:20.777956 2805 state_mem.go:35] "Initializing new in-memory state store" Jun 25 18:36:20.778895 kubelet[2805]: I0625 18:36:20.778126 2805 state_mem.go:75] "Updated machine memory state" Jun 25 18:36:20.779782 kubelet[2805]: I0625 18:36:20.779759 2805 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 18:36:20.784748 kubelet[2805]: I0625 18:36:20.783183 2805 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 18:36:20.919360 kubelet[2805]: I0625 18:36:20.919320 2805 topology_manager.go:215] "Topology Admit Handler" podUID="3f8f09154fcb7bd640b4de5f6f925e34" podNamespace="kube-system" podName="kube-scheduler-ci-4012-0-0-f-7092d20389.novalocal" Jun 25 18:36:20.919744 kubelet[2805]: I0625 18:36:20.919428 2805 topology_manager.go:215] "Topology Admit Handler" podUID="da5fdabcb89c1c773c31f7824d5f4c9a" podNamespace="kube-system" podName="kube-apiserver-ci-4012-0-0-f-7092d20389.novalocal" Jun 25 18:36:20.919744 kubelet[2805]: I0625 18:36:20.919470 2805 topology_manager.go:215] "Topology Admit Handler" podUID="c2c553804d446708aab7f0115bf622ff" podNamespace="kube-system" podName="kube-controller-manager-ci-4012-0-0-f-7092d20389.novalocal" Jun 25 18:36:20.931247 kubelet[2805]: W0625 18:36:20.930071 2805 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 18:36:20.932641 kubelet[2805]: E0625 18:36:20.930879 2805 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4012-0-0-f-7092d20389.novalocal\" already exists" pod="kube-system/kube-controller-manager-ci-4012-0-0-f-7092d20389.novalocal" Jun 25 18:36:20.932641 kubelet[2805]: W0625 18:36:20.931183 2805 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result 
in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 18:36:20.932641 kubelet[2805]: W0625 18:36:20.931937 2805 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 18:36:21.004318 kubelet[2805]: I0625 18:36:21.004178 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/da5fdabcb89c1c773c31f7824d5f4c9a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4012-0-0-f-7092d20389.novalocal\" (UID: \"da5fdabcb89c1c773c31f7824d5f4c9a\") " pod="kube-system/kube-apiserver-ci-4012-0-0-f-7092d20389.novalocal" Jun 25 18:36:21.004318 kubelet[2805]: I0625 18:36:21.004220 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c2c553804d446708aab7f0115bf622ff-ca-certs\") pod \"kube-controller-manager-ci-4012-0-0-f-7092d20389.novalocal\" (UID: \"c2c553804d446708aab7f0115bf622ff\") " pod="kube-system/kube-controller-manager-ci-4012-0-0-f-7092d20389.novalocal" Jun 25 18:36:21.004318 kubelet[2805]: I0625 18:36:21.004248 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c2c553804d446708aab7f0115bf622ff-kubeconfig\") pod \"kube-controller-manager-ci-4012-0-0-f-7092d20389.novalocal\" (UID: \"c2c553804d446708aab7f0115bf622ff\") " pod="kube-system/kube-controller-manager-ci-4012-0-0-f-7092d20389.novalocal" Jun 25 18:36:21.004759 kubelet[2805]: I0625 18:36:21.004553 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c2c553804d446708aab7f0115bf622ff-k8s-certs\") pod \"kube-controller-manager-ci-4012-0-0-f-7092d20389.novalocal\" (UID: \"c2c553804d446708aab7f0115bf622ff\") " pod="kube-system/kube-controller-manager-ci-4012-0-0-f-7092d20389.novalocal" Jun 25 18:36:21.004759 kubelet[2805]: I0625 18:36:21.004632 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c2c553804d446708aab7f0115bf622ff-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4012-0-0-f-7092d20389.novalocal\" (UID: \"c2c553804d446708aab7f0115bf622ff\") " pod="kube-system/kube-controller-manager-ci-4012-0-0-f-7092d20389.novalocal" Jun 25 18:36:21.004759 kubelet[2805]: I0625 18:36:21.004660 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f8f09154fcb7bd640b4de5f6f925e34-kubeconfig\") pod \"kube-scheduler-ci-4012-0-0-f-7092d20389.novalocal\" (UID: \"3f8f09154fcb7bd640b4de5f6f925e34\") " pod="kube-system/kube-scheduler-ci-4012-0-0-f-7092d20389.novalocal" Jun 25 18:36:21.005059 kubelet[2805]: I0625 18:36:21.004922 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/da5fdabcb89c1c773c31f7824d5f4c9a-ca-certs\") pod \"kube-apiserver-ci-4012-0-0-f-7092d20389.novalocal\" (UID: \"da5fdabcb89c1c773c31f7824d5f4c9a\") " pod="kube-system/kube-apiserver-ci-4012-0-0-f-7092d20389.novalocal" Jun 25 18:36:21.005059 kubelet[2805]: I0625 18:36:21.004990 2805 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/da5fdabcb89c1c773c31f7824d5f4c9a-k8s-certs\") pod \"kube-apiserver-ci-4012-0-0-f-7092d20389.novalocal\" (UID: \"da5fdabcb89c1c773c31f7824d5f4c9a\") " pod="kube-system/kube-apiserver-ci-4012-0-0-f-7092d20389.novalocal" Jun 25 18:36:21.005284 kubelet[2805]: I0625 18:36:21.005198 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c2c553804d446708aab7f0115bf622ff-flexvolume-dir\") pod \"kube-controller-manager-ci-4012-0-0-f-7092d20389.novalocal\" (UID: \"c2c553804d446708aab7f0115bf622ff\") " pod="kube-system/kube-controller-manager-ci-4012-0-0-f-7092d20389.novalocal" Jun 25 18:36:21.583808 kubelet[2805]: I0625 18:36:21.583776 2805 apiserver.go:52] "Watching apiserver" Jun 25 18:36:21.603528 kubelet[2805]: I0625 18:36:21.603471 2805 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 18:36:21.654054 sudo[2819]: pam_unix(sudo:session): session closed for user root Jun 25 18:36:21.693453 kubelet[2805]: W0625 18:36:21.693375 2805 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 18:36:21.693744 kubelet[2805]: E0625 18:36:21.693590 2805 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4012-0-0-f-7092d20389.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-4012-0-0-f-7092d20389.novalocal" Jun 25 18:36:21.695380 kubelet[2805]: W0625 18:36:21.695129 2805 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 18:36:21.695380 kubelet[2805]: E0625 18:36:21.695237 2805 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4012-0-0-f-7092d20389.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4012-0-0-f-7092d20389.novalocal" Jun 25 18:36:21.735637 kubelet[2805]: I0625 18:36:21.735610 2805 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4012-0-0-f-7092d20389.novalocal" podStartSLOduration=1.731173549 podCreationTimestamp="2024-06-25 18:36:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:36:21.721818804 +0000 UTC m=+1.281767871" watchObservedRunningTime="2024-06-25 18:36:21.731173549 +0000 UTC m=+1.291122545" Jun 25 18:36:21.745315 kubelet[2805]: I0625 18:36:21.744833 2805 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4012-0-0-f-7092d20389.novalocal" podStartSLOduration=2.744793347 podCreationTimestamp="2024-06-25 18:36:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:36:21.735945058 +0000 UTC m=+1.295894064" watchObservedRunningTime="2024-06-25 18:36:21.744793347 +0000 UTC m=+1.304742353" Jun 25 18:36:21.745315 kubelet[2805]: I0625 18:36:21.744908 2805 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4012-0-0-f-7092d20389.novalocal" podStartSLOduration=1.7448886350000001 podCreationTimestamp="2024-06-25 18:36:20 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:36:21.744663234 +0000 UTC m=+1.304612260" watchObservedRunningTime="2024-06-25 18:36:21.744888635 +0000 UTC m=+1.304837631" Jun 25 18:36:24.690561 sudo[1863]: pam_unix(sudo:session): session closed for user root Jun 25 18:36:24.881700 sshd[1856]: pam_unix(sshd:session): session closed for user core Jun 25 18:36:24.889055 systemd[1]: sshd@8-172.24.4.45:22-172.24.4.1:43476.service: Deactivated successfully. Jun 25 18:36:24.898269 systemd[1]: session-11.scope: Deactivated successfully. Jun 25 18:36:24.904888 systemd-logind[1559]: Session 11 logged out. Waiting for processes to exit. Jun 25 18:36:24.910520 systemd-logind[1559]: Removed session 11. Jun 25 18:36:33.654338 kubelet[2805]: I0625 18:36:33.654313 2805 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 25 18:36:33.655962 containerd[1577]: time="2024-06-25T18:36:33.655691566Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 25 18:36:33.658212 kubelet[2805]: I0625 18:36:33.656006 2805 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 25 18:36:34.331771 kubelet[2805]: I0625 18:36:34.331421 2805 topology_manager.go:215] "Topology Admit Handler" podUID="045c6657-1430-4913-a0dd-fd0d234a65fd" podNamespace="kube-system" podName="kube-proxy-jwglt" Jun 25 18:36:34.345009 kubelet[2805]: I0625 18:36:34.342202 2805 topology_manager.go:215] "Topology Admit Handler" podUID="cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7" podNamespace="kube-system" podName="cilium-vccdl" Jun 25 18:36:34.359667 kubelet[2805]: W0625 18:36:34.359638 2805 reflector.go:535] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4012-0-0-f-7092d20389.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4012-0-0-f-7092d20389.novalocal' and this object Jun 25 18:36:34.359878 kubelet[2805]: E0625 18:36:34.359865 2805 reflector.go:147] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4012-0-0-f-7092d20389.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4012-0-0-f-7092d20389.novalocal' and this object Jun 25 18:36:34.360032 kubelet[2805]: W0625 18:36:34.360005 2805 reflector.go:535] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4012-0-0-f-7092d20389.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4012-0-0-f-7092d20389.novalocal' and this object Jun 25 18:36:34.361800 kubelet[2805]: E0625 18:36:34.361772 2805 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4012-0-0-f-7092d20389.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4012-0-0-f-7092d20389.novalocal' and this object Jun 25 18:36:34.397925 kubelet[2805]: I0625 18:36:34.397867 2805 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-cilium-cgroup\") pod \"cilium-vccdl\" (UID: \"cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7\") " pod="kube-system/cilium-vccdl" Jun 25 18:36:34.397925 kubelet[2805]: I0625 18:36:34.397919 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/045c6657-1430-4913-a0dd-fd0d234a65fd-kube-proxy\") pod \"kube-proxy-jwglt\" (UID: \"045c6657-1430-4913-a0dd-fd0d234a65fd\") " pod="kube-system/kube-proxy-jwglt" Jun 25 18:36:34.397925 kubelet[2805]: I0625 18:36:34.397947 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n9vs\" (UniqueName: \"kubernetes.io/projected/045c6657-1430-4913-a0dd-fd0d234a65fd-kube-api-access-5n9vs\") pod \"kube-proxy-jwglt\" (UID: \"045c6657-1430-4913-a0dd-fd0d234a65fd\") " pod="kube-system/kube-proxy-jwglt" Jun 25 18:36:34.397925 kubelet[2805]: I0625 18:36:34.397972 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/045c6657-1430-4913-a0dd-fd0d234a65fd-xtables-lock\") pod \"kube-proxy-jwglt\" (UID: \"045c6657-1430-4913-a0dd-fd0d234a65fd\") " pod="kube-system/kube-proxy-jwglt" Jun 25 18:36:34.398951 kubelet[2805]: I0625 18:36:34.398008 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-host-proc-sys-net\") pod \"cilium-vccdl\" (UID: \"cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7\") " pod="kube-system/cilium-vccdl" Jun 25 18:36:34.398951 kubelet[2805]: I0625 18:36:34.398034 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-hubble-tls\") pod \"cilium-vccdl\" (UID: \"cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7\") " pod="kube-system/cilium-vccdl" Jun 25 18:36:34.398951 kubelet[2805]: I0625 18:36:34.398059 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88svp\" (UniqueName: \"kubernetes.io/projected/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-kube-api-access-88svp\") pod \"cilium-vccdl\" (UID: \"cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7\") " pod="kube-system/cilium-vccdl" Jun 25 18:36:34.398951 kubelet[2805]: I0625 18:36:34.398088 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/045c6657-1430-4913-a0dd-fd0d234a65fd-lib-modules\") pod \"kube-proxy-jwglt\" (UID: \"045c6657-1430-4913-a0dd-fd0d234a65fd\") " pod="kube-system/kube-proxy-jwglt" Jun 25 18:36:34.398951 kubelet[2805]: I0625 18:36:34.398119 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-cni-path\") pod \"cilium-vccdl\" (UID: \"cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7\") " pod="kube-system/cilium-vccdl" Jun 25 18:36:34.398951 kubelet[2805]: I0625 18:36:34.398143 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-etc-cni-netd\") pod \"cilium-vccdl\" (UID: \"cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7\") " pod="kube-system/cilium-vccdl" Jun 25 18:36:34.399110 kubelet[2805]: I0625 18:36:34.398168 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-xtables-lock\") pod \"cilium-vccdl\" (UID: \"cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7\") " pod="kube-system/cilium-vccdl" Jun 25 18:36:34.399110 kubelet[2805]: I0625 18:36:34.398192 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-lib-modules\") pod \"cilium-vccdl\" (UID: \"cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7\") " pod="kube-system/cilium-vccdl" Jun 25 18:36:34.399110 kubelet[2805]: I0625 18:36:34.398216 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-clustermesh-secrets\") pod \"cilium-vccdl\" (UID: \"cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7\") " pod="kube-system/cilium-vccdl" Jun 25 18:36:34.399110 kubelet[2805]: I0625 18:36:34.398241 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-cilium-run\") pod \"cilium-vccdl\" (UID: \"cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7\") " pod="kube-system/cilium-vccdl" Jun 25 18:36:34.399110 kubelet[2805]: I0625 18:36:34.398267 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-cilium-config-path\") pod \"cilium-vccdl\" (UID: \"cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7\") " pod="kube-system/cilium-vccdl" Jun 25 18:36:34.399110 kubelet[2805]: I0625 18:36:34.398295 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-host-proc-sys-kernel\") pod \"cilium-vccdl\" (UID: \"cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7\") " pod="kube-system/cilium-vccdl" Jun 25 18:36:34.399268 kubelet[2805]: I0625 18:36:34.398323 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-bpf-maps\") pod \"cilium-vccdl\" (UID: \"cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7\") " pod="kube-system/cilium-vccdl" Jun 25 18:36:34.399268 kubelet[2805]: I0625 18:36:34.398347 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-hostproc\") pod \"cilium-vccdl\" (UID: \"cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7\") " pod="kube-system/cilium-vccdl" Jun 25 18:36:34.577542 kubelet[2805]: I0625 18:36:34.574375 2805 topology_manager.go:215] "Topology Admit Handler" podUID="82ae9dfd-4054-4945-99d0-b9ee6c0b881f" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-sghkr" Jun 25 18:36:34.600532 kubelet[2805]: I0625 18:36:34.600436 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-jtrww\" (UniqueName: \"kubernetes.io/projected/82ae9dfd-4054-4945-99d0-b9ee6c0b881f-kube-api-access-jtrww\") pod \"cilium-operator-6bc8ccdb58-sghkr\" (UID: \"82ae9dfd-4054-4945-99d0-b9ee6c0b881f\") " pod="kube-system/cilium-operator-6bc8ccdb58-sghkr" Jun 25 18:36:34.600788 kubelet[2805]: I0625 18:36:34.600774 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/82ae9dfd-4054-4945-99d0-b9ee6c0b881f-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-sghkr\" (UID: \"82ae9dfd-4054-4945-99d0-b9ee6c0b881f\") " pod="kube-system/cilium-operator-6bc8ccdb58-sghkr" Jun 25 18:36:34.653472 containerd[1577]: time="2024-06-25T18:36:34.653420104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jwglt,Uid:045c6657-1430-4913-a0dd-fd0d234a65fd,Namespace:kube-system,Attempt:0,}" Jun 25 18:36:34.700604 containerd[1577]: time="2024-06-25T18:36:34.700134735Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:36:34.700604 containerd[1577]: time="2024-06-25T18:36:34.700203454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:36:34.700604 containerd[1577]: time="2024-06-25T18:36:34.700228921Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:36:34.700604 containerd[1577]: time="2024-06-25T18:36:34.700325512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:36:34.759249 containerd[1577]: time="2024-06-25T18:36:34.759093648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jwglt,Uid:045c6657-1430-4913-a0dd-fd0d234a65fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b7fd0e1e436d5e7c59c319d4a61de8f74aa4b5671800866d1598e3d5392548d\"" Jun 25 18:36:34.763160 containerd[1577]: time="2024-06-25T18:36:34.763121592Z" level=info msg="CreateContainer within sandbox \"4b7fd0e1e436d5e7c59c319d4a61de8f74aa4b5671800866d1598e3d5392548d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 25 18:36:34.844756 containerd[1577]: time="2024-06-25T18:36:34.844687602Z" level=info msg="CreateContainer within sandbox \"4b7fd0e1e436d5e7c59c319d4a61de8f74aa4b5671800866d1598e3d5392548d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"87f9a3d67e6a3abc435f725b1f12f838957f7eade2490ac44a5357b0b89410ce\"" Jun 25 18:36:34.845261 containerd[1577]: time="2024-06-25T18:36:34.845234015Z" level=info msg="StartContainer for \"87f9a3d67e6a3abc435f725b1f12f838957f7eade2490ac44a5357b0b89410ce\"" Jun 25 18:36:34.939865 containerd[1577]: time="2024-06-25T18:36:34.939255092Z" level=info msg="StartContainer for \"87f9a3d67e6a3abc435f725b1f12f838957f7eade2490ac44a5357b0b89410ce\" returns successfully" Jun 25 18:36:35.502708 kubelet[2805]: E0625 18:36:35.502633 2805 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jun 25 18:36:35.513378 kubelet[2805]: E0625 18:36:35.513327 2805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-cilium-config-path podName:cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7 nodeName:}" failed. 
No retries permitted until 2024-06-25 18:36:36.002814097 +0000 UTC m=+15.562763144 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-cilium-config-path") pod "cilium-vccdl" (UID: "cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7") : failed to sync configmap cache: timed out waiting for the condition Jun 25 18:36:35.750155 kubelet[2805]: I0625 18:36:35.749082 2805 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-jwglt" podStartSLOduration=1.7490365049999999 podCreationTimestamp="2024-06-25 18:36:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:36:35.748036855 +0000 UTC m=+15.307985881" watchObservedRunningTime="2024-06-25 18:36:35.749036505 +0000 UTC m=+15.308985501" Jun 25 18:36:35.790806 containerd[1577]: time="2024-06-25T18:36:35.789524801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-sghkr,Uid:82ae9dfd-4054-4945-99d0-b9ee6c0b881f,Namespace:kube-system,Attempt:0,}" Jun 25 18:36:35.847008 containerd[1577]: time="2024-06-25T18:36:35.846882233Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:36:35.847008 containerd[1577]: time="2024-06-25T18:36:35.846952504Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:36:35.847008 containerd[1577]: time="2024-06-25T18:36:35.846986177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:36:35.847008 containerd[1577]: time="2024-06-25T18:36:35.847001977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:36:35.939987 containerd[1577]: time="2024-06-25T18:36:35.939947769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-sghkr,Uid:82ae9dfd-4054-4945-99d0-b9ee6c0b881f,Namespace:kube-system,Attempt:0,} returns sandbox id \"35739967944fcc4ee72ada3765ca7edb46c6c909c904257f64417fc5de4c81a0\"" Jun 25 18:36:35.943800 containerd[1577]: time="2024-06-25T18:36:35.942409243Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jun 25 18:36:36.155964 containerd[1577]: time="2024-06-25T18:36:36.154171911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vccdl,Uid:cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7,Namespace:kube-system,Attempt:0,}" Jun 25 18:36:36.226839 containerd[1577]: time="2024-06-25T18:36:36.225355018Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:36:36.226839 containerd[1577]: time="2024-06-25T18:36:36.226349008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:36:36.226839 containerd[1577]: time="2024-06-25T18:36:36.226399162Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:36:36.226839 containerd[1577]: time="2024-06-25T18:36:36.226428787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:36:36.308103 containerd[1577]: time="2024-06-25T18:36:36.308049570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vccdl,Uid:cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e4a55d601ec28d694e821ae3fdb87ef47d0aa5086f5ee67e9340aa3750230b8\"" Jun 25 18:36:38.956558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3559623873.mount: Deactivated successfully. Jun 25 18:36:40.001764 containerd[1577]: time="2024-06-25T18:36:39.999658754Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:36:40.001764 containerd[1577]: time="2024-06-25T18:36:40.000831058Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907217" Jun 25 18:36:40.003491 containerd[1577]: time="2024-06-25T18:36:40.003427225Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:36:40.010659 containerd[1577]: time="2024-06-25T18:36:40.010193192Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.067726822s" Jun 25 18:36:40.010659 containerd[1577]: time="2024-06-25T18:36:40.010270567Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jun 25 18:36:40.012648 containerd[1577]: time="2024-06-25T18:36:40.012334679Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jun 25 18:36:40.023616 containerd[1577]: time="2024-06-25T18:36:40.023191139Z" level=info msg="CreateContainer within sandbox \"35739967944fcc4ee72ada3765ca7edb46c6c909c904257f64417fc5de4c81a0\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jun 25 18:36:40.055670 containerd[1577]: time="2024-06-25T18:36:40.055581326Z" level=info msg="CreateContainer within sandbox \"35739967944fcc4ee72ada3765ca7edb46c6c909c904257f64417fc5de4c81a0\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e168a36204fef6def57d3577a07860ec6414b5bc6fdb62cadba60f04c6d8a1f9\"" Jun 25 18:36:40.056657 containerd[1577]: time="2024-06-25T18:36:40.056570417Z" level=info msg="StartContainer for \"e168a36204fef6def57d3577a07860ec6414b5bc6fdb62cadba60f04c6d8a1f9\"" Jun 25 18:36:40.130267 containerd[1577]: time="2024-06-25T18:36:40.130226015Z" level=info msg="StartContainer for \"e168a36204fef6def57d3577a07860ec6414b5bc6fdb62cadba60f04c6d8a1f9\" returns successfully" Jun 25 18:36:40.468767 
systemd-resolved[1463]: Under memory pressure, flushing caches. Jun 25 18:36:40.475594 systemd-journald[1129]: Under memory pressure, flushing caches. Jun 25 18:36:40.468804 systemd-resolved[1463]: Flushed all caches. Jun 25 18:36:46.455604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2877351718.mount: Deactivated successfully. Jun 25 18:36:50.515801 systemd-resolved[1463]: Under memory pressure, flushing caches. Jun 25 18:36:50.517010 systemd-journald[1129]: Under memory pressure, flushing caches. Jun 25 18:36:50.515849 systemd-resolved[1463]: Flushed all caches. Jun 25 18:36:51.122338 containerd[1577]: time="2024-06-25T18:36:51.085239597Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735259" Jun 25 18:36:51.122338 containerd[1577]: time="2024-06-25T18:36:51.081533087Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:36:51.124506 containerd[1577]: time="2024-06-25T18:36:51.124444393Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:36:51.126655 containerd[1577]: time="2024-06-25T18:36:51.126539210Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.114142826s" Jun 25 18:36:51.126655 containerd[1577]: time="2024-06-25T18:36:51.126577182Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jun 25 18:36:51.129587 containerd[1577]: time="2024-06-25T18:36:51.129425566Z" level=info msg="CreateContainer within sandbox \"1e4a55d601ec28d694e821ae3fdb87ef47d0aa5086f5ee67e9340aa3750230b8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 25 18:36:51.180538 containerd[1577]: time="2024-06-25T18:36:51.179189198Z" level=info msg="CreateContainer within sandbox \"1e4a55d601ec28d694e821ae3fdb87ef47d0aa5086f5ee67e9340aa3750230b8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"18301d42c591b7ce311121093dc50288c79c10fe85015b2962a3d6f6b6db311c\"" Jun 25 18:36:51.182971 containerd[1577]: time="2024-06-25T18:36:51.181911904Z" level=info msg="StartContainer for \"18301d42c591b7ce311121093dc50288c79c10fe85015b2962a3d6f6b6db311c\"" Jun 25 18:36:51.985055 containerd[1577]: time="2024-06-25T18:36:51.984982604Z" level=info msg="StartContainer for \"18301d42c591b7ce311121093dc50288c79c10fe85015b2962a3d6f6b6db311c\" returns successfully" Jun 25 18:36:52.031050 kubelet[2805]: I0625 18:36:52.030968 2805 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-sghkr" podStartSLOduration=13.960896812 podCreationTimestamp="2024-06-25 18:36:34 +0000 UTC" firstStartedPulling="2024-06-25 18:36:35.941702562 +0000 UTC m=+15.501651568" lastFinishedPulling="2024-06-25 18:36:40.011633427 +0000 UTC 
m=+19.571582523" observedRunningTime="2024-06-25 18:36:40.958949016 +0000 UTC m=+20.518898032" watchObservedRunningTime="2024-06-25 18:36:52.030827767 +0000 UTC m=+31.590776773" Jun 25 18:36:52.108582 containerd[1577]: time="2024-06-25T18:36:52.092361272Z" level=info msg="shim disconnected" id=18301d42c591b7ce311121093dc50288c79c10fe85015b2962a3d6f6b6db311c namespace=k8s.io Jun 25 18:36:52.108582 containerd[1577]: time="2024-06-25T18:36:52.108480067Z" level=warning msg="cleaning up after shim disconnected" id=18301d42c591b7ce311121093dc50288c79c10fe85015b2962a3d6f6b6db311c namespace=k8s.io Jun 25 18:36:52.108582 containerd[1577]: time="2024-06-25T18:36:52.108495908Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:36:52.171094 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-18301d42c591b7ce311121093dc50288c79c10fe85015b2962a3d6f6b6db311c-rootfs.mount: Deactivated successfully. Jun 25 18:36:52.565260 systemd-journald[1129]: Under memory pressure, flushing caches. Jun 25 18:36:52.563324 systemd-resolved[1463]: Under memory pressure, flushing caches. Jun 25 18:36:52.563357 systemd-resolved[1463]: Flushed all caches. Jun 25 18:36:53.002321 containerd[1577]: time="2024-06-25T18:36:53.000068042Z" level=info msg="CreateContainer within sandbox \"1e4a55d601ec28d694e821ae3fdb87ef47d0aa5086f5ee67e9340aa3750230b8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 25 18:36:53.239995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount271631920.mount: Deactivated successfully. Jun 25 18:36:53.255882 containerd[1577]: time="2024-06-25T18:36:53.255260080Z" level=info msg="CreateContainer within sandbox \"1e4a55d601ec28d694e821ae3fdb87ef47d0aa5086f5ee67e9340aa3750230b8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0ad0b261246ad9a3d26263de3b8cddb2ce21f16b0d979dc1e0f4a8d397321280\"" Jun 25 18:36:53.259558 containerd[1577]: time="2024-06-25T18:36:53.257830124Z" level=info msg="StartContainer for \"0ad0b261246ad9a3d26263de3b8cddb2ce21f16b0d979dc1e0f4a8d397321280\"" Jun 25 18:36:53.331339 systemd[1]: run-containerd-runc-k8s.io-0ad0b261246ad9a3d26263de3b8cddb2ce21f16b0d979dc1e0f4a8d397321280-runc.HsmDcw.mount: Deactivated successfully. Jun 25 18:36:53.426320 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 18:36:53.427054 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:36:53.427162 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:36:53.436315 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:36:53.553167 containerd[1577]: time="2024-06-25T18:36:53.552633981Z" level=info msg="StartContainer for \"0ad0b261246ad9a3d26263de3b8cddb2ce21f16b0d979dc1e0f4a8d397321280\" returns successfully" Jun 25 18:36:53.599684 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jun 25 18:36:53.615035 containerd[1577]: time="2024-06-25T18:36:53.614642648Z" level=info msg="shim disconnected" id=0ad0b261246ad9a3d26263de3b8cddb2ce21f16b0d979dc1e0f4a8d397321280 namespace=k8s.io Jun 25 18:36:53.615035 containerd[1577]: time="2024-06-25T18:36:53.614773226Z" level=warning msg="cleaning up after shim disconnected" id=0ad0b261246ad9a3d26263de3b8cddb2ce21f16b0d979dc1e0f4a8d397321280 namespace=k8s.io Jun 25 18:36:53.615035 containerd[1577]: time="2024-06-25T18:36:53.614793925Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:36:54.020767 containerd[1577]: time="2024-06-25T18:36:54.015738335Z" level=info msg="CreateContainer within sandbox \"1e4a55d601ec28d694e821ae3fdb87ef47d0aa5086f5ee67e9340aa3750230b8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 25 18:36:54.109434 containerd[1577]: time="2024-06-25T18:36:54.109323402Z" level=info msg="CreateContainer within sandbox \"1e4a55d601ec28d694e821ae3fdb87ef47d0aa5086f5ee67e9340aa3750230b8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"542756e9ec6c7bf28fa71883bc1ee5890b4165b84284c149f1eac1afa4eac7d7\"" Jun 25 18:36:54.110891 containerd[1577]: time="2024-06-25T18:36:54.110870019Z" level=info msg="StartContainer for \"542756e9ec6c7bf28fa71883bc1ee5890b4165b84284c149f1eac1afa4eac7d7\"" Jun 25 18:36:54.191383 containerd[1577]: time="2024-06-25T18:36:54.191284756Z" level=info msg="StartContainer for \"542756e9ec6c7bf28fa71883bc1ee5890b4165b84284c149f1eac1afa4eac7d7\" returns successfully" Jun 25 18:36:54.230801 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ad0b261246ad9a3d26263de3b8cddb2ce21f16b0d979dc1e0f4a8d397321280-rootfs.mount: Deactivated successfully. Jun 25 18:36:54.249210 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-542756e9ec6c7bf28fa71883bc1ee5890b4165b84284c149f1eac1afa4eac7d7-rootfs.mount: Deactivated successfully. 
Jun 25 18:36:54.261269 containerd[1577]: time="2024-06-25T18:36:54.261204637Z" level=info msg="shim disconnected" id=542756e9ec6c7bf28fa71883bc1ee5890b4165b84284c149f1eac1afa4eac7d7 namespace=k8s.io Jun 25 18:36:54.262534 containerd[1577]: time="2024-06-25T18:36:54.261271384Z" level=warning msg="cleaning up after shim disconnected" id=542756e9ec6c7bf28fa71883bc1ee5890b4165b84284c149f1eac1afa4eac7d7 namespace=k8s.io Jun 25 18:36:54.262534 containerd[1577]: time="2024-06-25T18:36:54.261285390Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:36:54.275364 containerd[1577]: time="2024-06-25T18:36:54.275218572Z" level=warning msg="cleanup warnings time=\"2024-06-25T18:36:54Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jun 25 18:36:55.021046 containerd[1577]: time="2024-06-25T18:36:55.020957056Z" level=info msg="CreateContainer within sandbox \"1e4a55d601ec28d694e821ae3fdb87ef47d0aa5086f5ee67e9340aa3750230b8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 25 18:36:55.079784 containerd[1577]: time="2024-06-25T18:36:55.078451225Z" level=info msg="CreateContainer within sandbox \"1e4a55d601ec28d694e821ae3fdb87ef47d0aa5086f5ee67e9340aa3750230b8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5aa5a4c7666c0a301cf815cb12474390c3097c1842a8e4315d6b510638c072dd\"" Jun 25 18:36:55.090903 containerd[1577]: time="2024-06-25T18:36:55.089235834Z" level=info msg="StartContainer for \"5aa5a4c7666c0a301cf815cb12474390c3097c1842a8e4315d6b510638c072dd\"" Jun 25 18:36:55.155347 containerd[1577]: time="2024-06-25T18:36:55.155310344Z" level=info msg="StartContainer for \"5aa5a4c7666c0a301cf815cb12474390c3097c1842a8e4315d6b510638c072dd\" returns successfully" Jun 25 18:36:55.182393 containerd[1577]: time="2024-06-25T18:36:55.182324340Z" level=info msg="shim disconnected" id=5aa5a4c7666c0a301cf815cb12474390c3097c1842a8e4315d6b510638c072dd namespace=k8s.io Jun 25 18:36:55.182393 containerd[1577]: time="2024-06-25T18:36:55.182390325Z" level=warning msg="cleaning up after shim disconnected" id=5aa5a4c7666c0a301cf815cb12474390c3097c1842a8e4315d6b510638c072dd namespace=k8s.io Jun 25 18:36:55.182648 containerd[1577]: time="2024-06-25T18:36:55.182405113Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:36:55.232098 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5aa5a4c7666c0a301cf815cb12474390c3097c1842a8e4315d6b510638c072dd-rootfs.mount: Deactivated successfully. 
Jun 25 18:36:56.035107 containerd[1577]: time="2024-06-25T18:36:56.034999888Z" level=info msg="CreateContainer within sandbox \"1e4a55d601ec28d694e821ae3fdb87ef47d0aa5086f5ee67e9340aa3750230b8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 25 18:36:56.088580 containerd[1577]: time="2024-06-25T18:36:56.088452966Z" level=info msg="CreateContainer within sandbox \"1e4a55d601ec28d694e821ae3fdb87ef47d0aa5086f5ee67e9340aa3750230b8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"633d75a9bda804e264676d6e010575203bdd885b05b82281c80b9d4b4fc87a08\"" Jun 25 18:36:56.090629 containerd[1577]: time="2024-06-25T18:36:56.089958703Z" level=info msg="StartContainer for \"633d75a9bda804e264676d6e010575203bdd885b05b82281c80b9d4b4fc87a08\"" Jun 25 18:36:56.178053 containerd[1577]: time="2024-06-25T18:36:56.178021275Z" level=info msg="StartContainer for \"633d75a9bda804e264676d6e010575203bdd885b05b82281c80b9d4b4fc87a08\" returns successfully" Jun 25 18:36:56.376553 kubelet[2805]: I0625 18:36:56.376354 2805 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jun 25 18:36:56.412820 kubelet[2805]: I0625 18:36:56.410750 2805 topology_manager.go:215] "Topology Admit Handler" podUID="4d15c5ae-a294-44c7-a30c-f7c0df7417a3" podNamespace="kube-system" podName="coredns-5dd5756b68-lxlnf" Jun 25 18:36:56.416261 kubelet[2805]: I0625 18:36:56.416195 2805 topology_manager.go:215] "Topology Admit Handler" podUID="5fdd6ccd-a763-4390-835a-cb573b7e7e7e" podNamespace="kube-system" podName="coredns-5dd5756b68-tvs25" Jun 25 18:36:56.516910 kubelet[2805]: I0625 18:36:56.516635 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d15c5ae-a294-44c7-a30c-f7c0df7417a3-config-volume\") pod \"coredns-5dd5756b68-lxlnf\" (UID: \"4d15c5ae-a294-44c7-a30c-f7c0df7417a3\") " pod="kube-system/coredns-5dd5756b68-lxlnf" Jun 25 18:36:56.516910 kubelet[2805]: I0625 18:36:56.516684 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5fdd6ccd-a763-4390-835a-cb573b7e7e7e-config-volume\") pod \"coredns-5dd5756b68-tvs25\" (UID: \"5fdd6ccd-a763-4390-835a-cb573b7e7e7e\") " pod="kube-system/coredns-5dd5756b68-tvs25" Jun 25 18:36:56.516910 kubelet[2805]: I0625 18:36:56.516721 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbzjf\" (UniqueName: \"kubernetes.io/projected/5fdd6ccd-a763-4390-835a-cb573b7e7e7e-kube-api-access-qbzjf\") pod \"coredns-5dd5756b68-tvs25\" (UID: \"5fdd6ccd-a763-4390-835a-cb573b7e7e7e\") " pod="kube-system/coredns-5dd5756b68-tvs25" Jun 25 18:36:56.516910 kubelet[2805]: I0625 18:36:56.516757 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsprr\" (UniqueName: \"kubernetes.io/projected/4d15c5ae-a294-44c7-a30c-f7c0df7417a3-kube-api-access-jsprr\") pod \"coredns-5dd5756b68-lxlnf\" (UID: \"4d15c5ae-a294-44c7-a30c-f7c0df7417a3\") " pod="kube-system/coredns-5dd5756b68-lxlnf" Jun 25 18:36:56.738809 containerd[1577]: time="2024-06-25T18:36:56.738322922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-tvs25,Uid:5fdd6ccd-a763-4390-835a-cb573b7e7e7e,Namespace:kube-system,Attempt:0,}" Jun 25 18:36:56.743770 containerd[1577]: time="2024-06-25T18:36:56.743738642Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-5dd5756b68-lxlnf,Uid:4d15c5ae-a294-44c7-a30c-f7c0df7417a3,Namespace:kube-system,Attempt:0,}" Jun 25 18:36:57.067650 kubelet[2805]: I0625 18:36:57.067295 2805 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-vccdl" podStartSLOduration=8.24975652 podCreationTimestamp="2024-06-25 18:36:34 +0000 UTC" firstStartedPulling="2024-06-25 18:36:36.309363849 +0000 UTC m=+15.869312845" lastFinishedPulling="2024-06-25 18:36:51.126813649 +0000 UTC m=+30.686762656" observedRunningTime="2024-06-25 18:36:57.066919038 +0000 UTC m=+36.626868114" watchObservedRunningTime="2024-06-25 18:36:57.067206331 +0000 UTC m=+36.627155387" Jun 25 18:36:58.370671 systemd-networkd[1205]: cilium_host: Link UP Jun 25 18:36:58.373899 systemd-networkd[1205]: cilium_net: Link UP Jun 25 18:36:58.373922 systemd-networkd[1205]: cilium_net: Gained carrier Jun 25 18:36:58.375966 systemd-networkd[1205]: cilium_host: Gained carrier Jun 25 18:36:58.493802 systemd-networkd[1205]: cilium_vxlan: Link UP Jun 25 18:36:58.493810 systemd-networkd[1205]: cilium_vxlan: Gained carrier Jun 25 18:36:58.962950 systemd-networkd[1205]: cilium_net: Gained IPv6LL Jun 25 18:36:59.166898 kernel: NET: Registered PF_ALG protocol family Jun 25 18:36:59.218937 systemd-networkd[1205]: cilium_host: Gained IPv6LL Jun 25 18:37:00.033684 systemd-networkd[1205]: lxc_health: Link UP Jun 25 18:37:00.036424 systemd-networkd[1205]: lxc_health: Gained carrier Jun 25 18:37:00.354366 systemd-networkd[1205]: lxc9b985b415257: Link UP Jun 25 18:37:00.359770 kernel: eth0: renamed from tmpbbbc5 Jun 25 18:37:00.365734 systemd-networkd[1205]: lxc9b985b415257: Gained carrier Jun 25 18:37:00.373274 systemd-networkd[1205]: cilium_vxlan: Gained IPv6LL Jun 25 18:37:00.390068 systemd-networkd[1205]: lxcad7eb543aaef: Link UP Jun 25 18:37:00.398853 kernel: eth0: renamed from tmp1f665 Jun 25 18:37:00.414822 systemd-networkd[1205]: lxcad7eb543aaef: Gained carrier Jun 25 18:37:01.267831 systemd-networkd[1205]: lxc_health: Gained IPv6LL Jun 25 18:37:01.970905 systemd-networkd[1205]: lxc9b985b415257: Gained IPv6LL Jun 25 18:37:02.099837 systemd-networkd[1205]: lxcad7eb543aaef: Gained IPv6LL Jun 25 18:37:05.083844 containerd[1577]: time="2024-06-25T18:37:05.083256512Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:37:05.083844 containerd[1577]: time="2024-06-25T18:37:05.083348867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:37:05.083844 containerd[1577]: time="2024-06-25T18:37:05.083378323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:37:05.083844 containerd[1577]: time="2024-06-25T18:37:05.083396877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:37:05.144192 containerd[1577]: time="2024-06-25T18:37:05.142127744Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:37:05.144192 containerd[1577]: time="2024-06-25T18:37:05.143641691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:37:05.144192 containerd[1577]: time="2024-06-25T18:37:05.143739876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:37:05.144192 containerd[1577]: time="2024-06-25T18:37:05.143824596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:37:05.248685 containerd[1577]: time="2024-06-25T18:37:05.248610803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-lxlnf,Uid:4d15c5ae-a294-44c7-a30c-f7c0df7417a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"bbbc55e0dec2cba7fb30913cc4c3f2e0ff7fb88dc3029c373840133c2fce7b4c\"" Jun 25 18:37:05.257830 containerd[1577]: time="2024-06-25T18:37:05.257036681Z" level=info msg="CreateContainer within sandbox \"bbbc55e0dec2cba7fb30913cc4c3f2e0ff7fb88dc3029c373840133c2fce7b4c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 18:37:05.263699 containerd[1577]: time="2024-06-25T18:37:05.262019543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-tvs25,Uid:5fdd6ccd-a763-4390-835a-cb573b7e7e7e,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f665a8dc37f0d372483d5298eadd21195a9299d825303b11667b2af994fd33d\"" Jun 25 18:37:05.266305 containerd[1577]: time="2024-06-25T18:37:05.266251839Z" level=info msg="CreateContainer within sandbox \"1f665a8dc37f0d372483d5298eadd21195a9299d825303b11667b2af994fd33d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 18:37:05.293824 containerd[1577]: time="2024-06-25T18:37:05.293671005Z" level=info msg="CreateContainer within sandbox \"bbbc55e0dec2cba7fb30913cc4c3f2e0ff7fb88dc3029c373840133c2fce7b4c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8851542a59cbe8f210f31ab350ca13a7bf48104bfc47410c7be8c1b07367ef7b\"" Jun 25 18:37:05.294885 containerd[1577]: time="2024-06-25T18:37:05.294347081Z" level=info msg="StartContainer for \"8851542a59cbe8f210f31ab350ca13a7bf48104bfc47410c7be8c1b07367ef7b\"" Jun 25 18:37:05.303460 containerd[1577]: time="2024-06-25T18:37:05.303407917Z" level=info msg="CreateContainer within sandbox \"1f665a8dc37f0d372483d5298eadd21195a9299d825303b11667b2af994fd33d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e10448f1c38dca2da1f607e6194721afaf9c25ceaeec2663ef2b7d8a619d9c3e\"" Jun 25 18:37:05.305808 containerd[1577]: time="2024-06-25T18:37:05.305461412Z" level=info msg="StartContainer for \"e10448f1c38dca2da1f607e6194721afaf9c25ceaeec2663ef2b7d8a619d9c3e\"" Jun 25 18:37:05.363783 containerd[1577]: time="2024-06-25T18:37:05.363030246Z" level=info msg="StartContainer for \"8851542a59cbe8f210f31ab350ca13a7bf48104bfc47410c7be8c1b07367ef7b\" returns successfully" Jun 25 18:37:05.389062 containerd[1577]: time="2024-06-25T18:37:05.389006199Z" level=info msg="StartContainer for \"e10448f1c38dca2da1f607e6194721afaf9c25ceaeec2663ef2b7d8a619d9c3e\" returns successfully" Jun 25 18:37:06.125257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1631824602.mount: Deactivated successfully. 
Jun 25 18:37:06.145784 kubelet[2805]: I0625 18:37:06.145334 2805 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-lxlnf" podStartSLOduration=32.145290294 podCreationTimestamp="2024-06-25 18:36:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:37:06.139000458 +0000 UTC m=+45.698949484" watchObservedRunningTime="2024-06-25 18:37:06.145290294 +0000 UTC m=+45.705239310" Jun 25 18:37:06.765904 kubelet[2805]: I0625 18:37:06.765408 2805 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-tvs25" podStartSLOduration=32.765330763 podCreationTimestamp="2024-06-25 18:36:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:37:06.161148671 +0000 UTC m=+45.721097677" watchObservedRunningTime="2024-06-25 18:37:06.765330763 +0000 UTC m=+46.325279809" Jun 25 18:37:32.655257 systemd[1]: Started sshd@9-172.24.4.45:22-172.24.4.1:41540.service - OpenSSH per-connection server daemon (172.24.4.1:41540). Jun 25 18:37:33.877201 sshd[4165]: Accepted publickey for core from 172.24.4.1 port 41540 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:37:33.880670 sshd[4165]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:37:33.894158 systemd-logind[1559]: New session 12 of user core. Jun 25 18:37:33.903384 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 25 18:37:35.899327 sshd[4165]: pam_unix(sshd:session): session closed for user core Jun 25 18:37:35.906196 systemd[1]: sshd@9-172.24.4.45:22-172.24.4.1:41540.service: Deactivated successfully. Jun 25 18:37:35.916236 systemd[1]: session-12.scope: Deactivated successfully. Jun 25 18:37:35.921298 systemd-logind[1559]: Session 12 logged out. Waiting for processes to exit. Jun 25 18:37:35.924197 systemd-logind[1559]: Removed session 12. Jun 25 18:37:40.915322 systemd[1]: Started sshd@10-172.24.4.45:22-172.24.4.1:57134.service - OpenSSH per-connection server daemon (172.24.4.1:57134). Jun 25 18:37:42.270489 sshd[4182]: Accepted publickey for core from 172.24.4.1 port 57134 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:37:42.273125 sshd[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:37:42.283421 systemd-logind[1559]: New session 13 of user core. Jun 25 18:37:42.293255 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 25 18:37:43.134491 sshd[4182]: pam_unix(sshd:session): session closed for user core Jun 25 18:37:43.144249 systemd-logind[1559]: Session 13 logged out. Waiting for processes to exit. Jun 25 18:37:43.146244 systemd[1]: sshd@10-172.24.4.45:22-172.24.4.1:57134.service: Deactivated successfully. Jun 25 18:37:43.160112 systemd[1]: session-13.scope: Deactivated successfully. Jun 25 18:37:43.165273 systemd-logind[1559]: Removed session 13. Jun 25 18:37:48.136552 systemd[1]: Started sshd@11-172.24.4.45:22-172.24.4.1:55050.service - OpenSSH per-connection server daemon (172.24.4.1:55050). 
Jun 25 18:37:49.518248 sshd[4197]: Accepted publickey for core from 172.24.4.1 port 55050 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:37:49.521229 sshd[4197]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:37:49.530650 systemd-logind[1559]: New session 14 of user core. Jun 25 18:37:49.537175 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 25 18:37:50.202030 sshd[4197]: pam_unix(sshd:session): session closed for user core Jun 25 18:37:50.217415 systemd[1]: Started sshd@12-172.24.4.45:22-172.24.4.1:55056.service - OpenSSH per-connection server daemon (172.24.4.1:55056). Jun 25 18:37:50.219923 systemd[1]: sshd@11-172.24.4.45:22-172.24.4.1:55050.service: Deactivated successfully. Jun 25 18:37:50.231409 systemd[1]: session-14.scope: Deactivated successfully. Jun 25 18:37:50.232127 systemd-logind[1559]: Session 14 logged out. Waiting for processes to exit. Jun 25 18:37:50.240278 systemd-logind[1559]: Removed session 14. Jun 25 18:37:51.356605 sshd[4209]: Accepted publickey for core from 172.24.4.1 port 55056 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:37:51.360193 sshd[4209]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:37:51.373371 systemd-logind[1559]: New session 15 of user core. Jun 25 18:37:51.382792 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 25 18:37:53.853310 systemd[1]: Started sshd@13-172.24.4.45:22-172.24.4.1:55058.service - OpenSSH per-connection server daemon (172.24.4.1:55058). Jun 25 18:37:53.861608 sshd[4209]: pam_unix(sshd:session): session closed for user core Jun 25 18:37:53.986259 systemd[1]: sshd@12-172.24.4.45:22-172.24.4.1:55056.service: Deactivated successfully. Jun 25 18:37:53.994834 systemd-logind[1559]: Session 15 logged out. Waiting for processes to exit. Jun 25 18:37:53.995424 systemd[1]: session-15.scope: Deactivated successfully. Jun 25 18:37:53.998629 systemd-logind[1559]: Removed session 15. Jun 25 18:37:54.451229 systemd-resolved[1463]: Under memory pressure, flushing caches. Jun 25 18:37:54.455435 systemd-journald[1129]: Under memory pressure, flushing caches. Jun 25 18:37:54.451293 systemd-resolved[1463]: Flushed all caches. Jun 25 18:37:55.364866 sshd[4222]: Accepted publickey for core from 172.24.4.1 port 55058 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:37:55.368274 sshd[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:37:55.380278 systemd-logind[1559]: New session 16 of user core. Jun 25 18:37:55.387349 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 25 18:37:56.128969 sshd[4222]: pam_unix(sshd:session): session closed for user core Jun 25 18:37:56.135453 systemd[1]: sshd@13-172.24.4.45:22-172.24.4.1:55058.service: Deactivated successfully. Jun 25 18:37:56.144394 systemd[1]: session-16.scope: Deactivated successfully. Jun 25 18:37:56.149556 systemd-logind[1559]: Session 16 logged out. Waiting for processes to exit. Jun 25 18:37:56.151930 systemd-logind[1559]: Removed session 16. Jun 25 18:37:56.502625 systemd-journald[1129]: Under memory pressure, flushing caches. Jun 25 18:37:56.500255 systemd-resolved[1463]: Under memory pressure, flushing caches. Jun 25 18:37:56.500272 systemd-resolved[1463]: Flushed all caches. Jun 25 18:38:01.142223 systemd[1]: Started sshd@14-172.24.4.45:22-172.24.4.1:45612.service - OpenSSH per-connection server daemon (172.24.4.1:45612). 
Jun 25 18:38:02.265310 sshd[4238]: Accepted publickey for core from 172.24.4.1 port 45612 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:38:02.266768 sshd[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:38:02.274626 systemd-logind[1559]: New session 17 of user core. Jun 25 18:38:02.281308 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 25 18:38:03.220184 sshd[4238]: pam_unix(sshd:session): session closed for user core Jun 25 18:38:03.229032 systemd[1]: sshd@14-172.24.4.45:22-172.24.4.1:45612.service: Deactivated successfully. Jun 25 18:38:03.237489 systemd[1]: session-17.scope: Deactivated successfully. Jun 25 18:38:03.240797 systemd-logind[1559]: Session 17 logged out. Waiting for processes to exit. Jun 25 18:38:03.243646 systemd-logind[1559]: Removed session 17. Jun 25 18:38:08.235461 systemd[1]: Started sshd@15-172.24.4.45:22-172.24.4.1:48094.service - OpenSSH per-connection server daemon (172.24.4.1:48094). Jun 25 18:38:09.459204 sshd[4254]: Accepted publickey for core from 172.24.4.1 port 48094 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:38:09.462636 sshd[4254]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:38:09.475847 systemd-logind[1559]: New session 18 of user core. Jun 25 18:38:09.484473 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 25 18:38:10.181139 sshd[4254]: pam_unix(sshd:session): session closed for user core Jun 25 18:38:10.196123 systemd[1]: Started sshd@16-172.24.4.45:22-172.24.4.1:48108.service - OpenSSH per-connection server daemon (172.24.4.1:48108). Jun 25 18:38:10.199302 systemd[1]: sshd@15-172.24.4.45:22-172.24.4.1:48094.service: Deactivated successfully. Jun 25 18:38:10.212457 systemd[1]: session-18.scope: Deactivated successfully. Jun 25 18:38:10.221531 systemd-logind[1559]: Session 18 logged out. Waiting for processes to exit. Jun 25 18:38:10.224678 systemd-logind[1559]: Removed session 18. Jun 25 18:38:11.606593 sshd[4266]: Accepted publickey for core from 172.24.4.1 port 48108 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:38:11.610493 sshd[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:38:11.619297 systemd-logind[1559]: New session 19 of user core. Jun 25 18:38:11.623759 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 25 18:38:12.976116 sshd[4266]: pam_unix(sshd:session): session closed for user core Jun 25 18:38:12.985442 systemd[1]: Started sshd@17-172.24.4.45:22-172.24.4.1:48114.service - OpenSSH per-connection server daemon (172.24.4.1:48114). Jun 25 18:38:12.998047 systemd[1]: sshd@16-172.24.4.45:22-172.24.4.1:48108.service: Deactivated successfully. Jun 25 18:38:13.008486 systemd[1]: session-19.scope: Deactivated successfully. Jun 25 18:38:13.012151 systemd-logind[1559]: Session 19 logged out. Waiting for processes to exit. Jun 25 18:38:13.016027 systemd-logind[1559]: Removed session 19. Jun 25 18:38:14.262675 sshd[4278]: Accepted publickey for core from 172.24.4.1 port 48114 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:38:14.265518 sshd[4278]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:38:14.276179 systemd-logind[1559]: New session 20 of user core. Jun 25 18:38:14.284762 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jun 25 18:38:16.166921 sshd[4278]: pam_unix(sshd:session): session closed for user core Jun 25 18:38:16.176990 systemd[1]: Started sshd@18-172.24.4.45:22-172.24.4.1:59026.service - OpenSSH per-connection server daemon (172.24.4.1:59026). Jun 25 18:38:16.177458 systemd[1]: sshd@17-172.24.4.45:22-172.24.4.1:48114.service: Deactivated successfully. Jun 25 18:38:16.185374 systemd[1]: session-20.scope: Deactivated successfully. Jun 25 18:38:16.185530 systemd-logind[1559]: Session 20 logged out. Waiting for processes to exit. Jun 25 18:38:16.190843 systemd-logind[1559]: Removed session 20. Jun 25 18:38:17.484028 sshd[4297]: Accepted publickey for core from 172.24.4.1 port 59026 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:38:17.486050 sshd[4297]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:38:17.496982 systemd-logind[1559]: New session 21 of user core. Jun 25 18:38:17.505205 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 25 18:38:18.953177 sshd[4297]: pam_unix(sshd:session): session closed for user core Jun 25 18:38:18.968342 systemd[1]: Started sshd@19-172.24.4.45:22-172.24.4.1:59036.service - OpenSSH per-connection server daemon (172.24.4.1:59036). Jun 25 18:38:18.969404 systemd[1]: sshd@18-172.24.4.45:22-172.24.4.1:59026.service: Deactivated successfully. Jun 25 18:38:18.981553 systemd[1]: session-21.scope: Deactivated successfully. Jun 25 18:38:18.986072 systemd-logind[1559]: Session 21 logged out. Waiting for processes to exit. Jun 25 18:38:18.989639 systemd-logind[1559]: Removed session 21. Jun 25 18:38:20.024951 sshd[4309]: Accepted publickey for core from 172.24.4.1 port 59036 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:38:20.028841 sshd[4309]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:38:20.041343 systemd-logind[1559]: New session 22 of user core. Jun 25 18:38:20.051426 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 25 18:38:20.745934 sshd[4309]: pam_unix(sshd:session): session closed for user core Jun 25 18:38:20.756704 systemd[1]: sshd@19-172.24.4.45:22-172.24.4.1:59036.service: Deactivated successfully. Jun 25 18:38:20.757253 systemd-logind[1559]: Session 22 logged out. Waiting for processes to exit. Jun 25 18:38:20.765283 systemd[1]: session-22.scope: Deactivated successfully. Jun 25 18:38:20.770312 systemd-logind[1559]: Removed session 22. Jun 25 18:38:25.758359 systemd[1]: Started sshd@20-172.24.4.45:22-172.24.4.1:56690.service - OpenSSH per-connection server daemon (172.24.4.1:56690). Jun 25 18:38:26.952084 sshd[4331]: Accepted publickey for core from 172.24.4.1 port 56690 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:38:26.954199 sshd[4331]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:38:26.965548 systemd-logind[1559]: New session 23 of user core. Jun 25 18:38:26.975435 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 25 18:38:27.583651 sshd[4331]: pam_unix(sshd:session): session closed for user core Jun 25 18:38:27.594409 systemd[1]: sshd@20-172.24.4.45:22-172.24.4.1:56690.service: Deactivated successfully. Jun 25 18:38:27.607445 systemd[1]: session-23.scope: Deactivated successfully. Jun 25 18:38:27.610485 systemd-logind[1559]: Session 23 logged out. Waiting for processes to exit. Jun 25 18:38:27.614410 systemd-logind[1559]: Removed session 23. 
Jun 25 18:38:32.595478 systemd[1]: Started sshd@21-172.24.4.45:22-172.24.4.1:56702.service - OpenSSH per-connection server daemon (172.24.4.1:56702). Jun 25 18:38:34.112482 sshd[4344]: Accepted publickey for core from 172.24.4.1 port 56702 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:38:34.115301 sshd[4344]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:38:34.126588 systemd-logind[1559]: New session 24 of user core. Jun 25 18:38:34.131315 systemd[1]: Started session-24.scope - Session 24 of User core. Jun 25 18:38:34.826606 sshd[4344]: pam_unix(sshd:session): session closed for user core Jun 25 18:38:34.833241 systemd[1]: sshd@21-172.24.4.45:22-172.24.4.1:56702.service: Deactivated successfully. Jun 25 18:38:34.847282 systemd[1]: session-24.scope: Deactivated successfully. Jun 25 18:38:34.850684 systemd-logind[1559]: Session 24 logged out. Waiting for processes to exit. Jun 25 18:38:34.854153 systemd-logind[1559]: Removed session 24. Jun 25 18:38:39.839566 systemd[1]: Started sshd@22-172.24.4.45:22-172.24.4.1:51302.service - OpenSSH per-connection server daemon (172.24.4.1:51302). Jun 25 18:38:41.906468 sshd[4360]: Accepted publickey for core from 172.24.4.1 port 51302 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:38:41.909509 sshd[4360]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:38:41.920173 systemd-logind[1559]: New session 25 of user core. Jun 25 18:38:41.926855 systemd[1]: Started session-25.scope - Session 25 of User core. Jun 25 18:38:42.764174 sshd[4360]: pam_unix(sshd:session): session closed for user core Jun 25 18:38:42.779477 systemd[1]: Started sshd@23-172.24.4.45:22-172.24.4.1:51306.service - OpenSSH per-connection server daemon (172.24.4.1:51306). Jun 25 18:38:42.780621 systemd[1]: sshd@22-172.24.4.45:22-172.24.4.1:51302.service: Deactivated successfully. Jun 25 18:38:42.793974 systemd[1]: session-25.scope: Deactivated successfully. Jun 25 18:38:42.800639 systemd-logind[1559]: Session 25 logged out. Waiting for processes to exit. Jun 25 18:38:42.805222 systemd-logind[1559]: Removed session 25. Jun 25 18:38:44.435311 sshd[4371]: Accepted publickey for core from 172.24.4.1 port 51306 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:38:44.438140 sshd[4371]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:38:44.451168 systemd-logind[1559]: New session 26 of user core. Jun 25 18:38:44.461436 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jun 25 18:38:46.970286 containerd[1577]: time="2024-06-25T18:38:46.970216671Z" level=info msg="StopContainer for \"e168a36204fef6def57d3577a07860ec6414b5bc6fdb62cadba60f04c6d8a1f9\" with timeout 30 (s)" Jun 25 18:38:46.981033 containerd[1577]: time="2024-06-25T18:38:46.980996552Z" level=info msg="Stop container \"e168a36204fef6def57d3577a07860ec6414b5bc6fdb62cadba60f04c6d8a1f9\" with signal terminated" Jun 25 18:38:47.008504 containerd[1577]: time="2024-06-25T18:38:47.008378413Z" level=info msg="StopContainer for \"633d75a9bda804e264676d6e010575203bdd885b05b82281c80b9d4b4fc87a08\" with timeout 2 (s)" Jun 25 18:38:47.010295 containerd[1577]: time="2024-06-25T18:38:47.010047242Z" level=info msg="Stop container \"633d75a9bda804e264676d6e010575203bdd885b05b82281c80b9d4b4fc87a08\" with signal terminated" Jun 25 18:38:47.012672 containerd[1577]: time="2024-06-25T18:38:47.012629393Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 18:38:47.020669 systemd-networkd[1205]: lxc_health: Link DOWN Jun 25 18:38:47.020677 systemd-networkd[1205]: lxc_health: Lost carrier Jun 25 18:38:47.039761 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e168a36204fef6def57d3577a07860ec6414b5bc6fdb62cadba60f04c6d8a1f9-rootfs.mount: Deactivated successfully. Jun 25 18:38:47.052397 containerd[1577]: time="2024-06-25T18:38:47.052191711Z" level=info msg="shim disconnected" id=e168a36204fef6def57d3577a07860ec6414b5bc6fdb62cadba60f04c6d8a1f9 namespace=k8s.io Jun 25 18:38:47.052397 containerd[1577]: time="2024-06-25T18:38:47.052349115Z" level=warning msg="cleaning up after shim disconnected" id=e168a36204fef6def57d3577a07860ec6414b5bc6fdb62cadba60f04c6d8a1f9 namespace=k8s.io Jun 25 18:38:47.052397 containerd[1577]: time="2024-06-25T18:38:47.052360817Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:38:47.068694 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-633d75a9bda804e264676d6e010575203bdd885b05b82281c80b9d4b4fc87a08-rootfs.mount: Deactivated successfully. 
Jun 25 18:38:47.084923 containerd[1577]: time="2024-06-25T18:38:47.084745878Z" level=info msg="shim disconnected" id=633d75a9bda804e264676d6e010575203bdd885b05b82281c80b9d4b4fc87a08 namespace=k8s.io Jun 25 18:38:47.084923 containerd[1577]: time="2024-06-25T18:38:47.084806612Z" level=warning msg="cleaning up after shim disconnected" id=633d75a9bda804e264676d6e010575203bdd885b05b82281c80b9d4b4fc87a08 namespace=k8s.io Jun 25 18:38:47.084923 containerd[1577]: time="2024-06-25T18:38:47.084817272Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:38:47.092617 containerd[1577]: time="2024-06-25T18:38:47.092333415Z" level=info msg="StopContainer for \"e168a36204fef6def57d3577a07860ec6414b5bc6fdb62cadba60f04c6d8a1f9\" returns successfully" Jun 25 18:38:47.093377 containerd[1577]: time="2024-06-25T18:38:47.093140167Z" level=info msg="StopPodSandbox for \"35739967944fcc4ee72ada3765ca7edb46c6c909c904257f64417fc5de4c81a0\"" Jun 25 18:38:47.096213 containerd[1577]: time="2024-06-25T18:38:47.093179631Z" level=info msg="Container to stop \"e168a36204fef6def57d3577a07860ec6414b5bc6fdb62cadba60f04c6d8a1f9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:38:47.099276 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-35739967944fcc4ee72ada3765ca7edb46c6c909c904257f64417fc5de4c81a0-shm.mount: Deactivated successfully. Jun 25 18:38:47.108278 containerd[1577]: time="2024-06-25T18:38:47.108177642Z" level=info msg="StopContainer for \"633d75a9bda804e264676d6e010575203bdd885b05b82281c80b9d4b4fc87a08\" returns successfully" Jun 25 18:38:47.110745 containerd[1577]: time="2024-06-25T18:38:47.108907601Z" level=info msg="StopPodSandbox for \"1e4a55d601ec28d694e821ae3fdb87ef47d0aa5086f5ee67e9340aa3750230b8\"" Jun 25 18:38:47.110745 containerd[1577]: time="2024-06-25T18:38:47.108943588Z" level=info msg="Container to stop \"18301d42c591b7ce311121093dc50288c79c10fe85015b2962a3d6f6b6db311c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:38:47.110745 containerd[1577]: time="2024-06-25T18:38:47.109002598Z" level=info msg="Container to stop \"633d75a9bda804e264676d6e010575203bdd885b05b82281c80b9d4b4fc87a08\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:38:47.110745 containerd[1577]: time="2024-06-25T18:38:47.109031522Z" level=info msg="Container to stop \"0ad0b261246ad9a3d26263de3b8cddb2ce21f16b0d979dc1e0f4a8d397321280\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:38:47.110745 containerd[1577]: time="2024-06-25T18:38:47.109046350Z" level=info msg="Container to stop \"542756e9ec6c7bf28fa71883bc1ee5890b4165b84284c149f1eac1afa4eac7d7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:38:47.110745 containerd[1577]: time="2024-06-25T18:38:47.109058072Z" level=info msg="Container to stop \"5aa5a4c7666c0a301cf815cb12474390c3097c1842a8e4315d6b510638c072dd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:38:47.113622 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1e4a55d601ec28d694e821ae3fdb87ef47d0aa5086f5ee67e9340aa3750230b8-shm.mount: Deactivated successfully. 
Jun 25 18:38:47.150785 containerd[1577]: time="2024-06-25T18:38:47.150530421Z" level=info msg="shim disconnected" id=35739967944fcc4ee72ada3765ca7edb46c6c909c904257f64417fc5de4c81a0 namespace=k8s.io Jun 25 18:38:47.150785 containerd[1577]: time="2024-06-25T18:38:47.150584803Z" level=warning msg="cleaning up after shim disconnected" id=35739967944fcc4ee72ada3765ca7edb46c6c909c904257f64417fc5de4c81a0 namespace=k8s.io Jun 25 18:38:47.150785 containerd[1577]: time="2024-06-25T18:38:47.150595654Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:38:47.151060 containerd[1577]: time="2024-06-25T18:38:47.151029246Z" level=info msg="shim disconnected" id=1e4a55d601ec28d694e821ae3fdb87ef47d0aa5086f5ee67e9340aa3750230b8 namespace=k8s.io Jun 25 18:38:47.151220 containerd[1577]: time="2024-06-25T18:38:47.151135966Z" level=warning msg="cleaning up after shim disconnected" id=1e4a55d601ec28d694e821ae3fdb87ef47d0aa5086f5ee67e9340aa3750230b8 namespace=k8s.io Jun 25 18:38:47.151220 containerd[1577]: time="2024-06-25T18:38:47.151151806Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:38:47.171824 containerd[1577]: time="2024-06-25T18:38:47.171752621Z" level=info msg="TearDown network for sandbox \"35739967944fcc4ee72ada3765ca7edb46c6c909c904257f64417fc5de4c81a0\" successfully" Jun 25 18:38:47.171824 containerd[1577]: time="2024-06-25T18:38:47.171796664Z" level=info msg="StopPodSandbox for \"35739967944fcc4ee72ada3765ca7edb46c6c909c904257f64417fc5de4c81a0\" returns successfully" Jun 25 18:38:47.175450 containerd[1577]: time="2024-06-25T18:38:47.175418585Z" level=info msg="TearDown network for sandbox \"1e4a55d601ec28d694e821ae3fdb87ef47d0aa5086f5ee67e9340aa3750230b8\" successfully" Jun 25 18:38:47.175941 containerd[1577]: time="2024-06-25T18:38:47.175540814Z" level=info msg="StopPodSandbox for \"1e4a55d601ec28d694e821ae3fdb87ef47d0aa5086f5ee67e9340aa3750230b8\" returns successfully" Jun 25 18:38:47.283264 kubelet[2805]: I0625 18:38:47.283189 2805 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-cilium-cgroup\") pod \"cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7\" (UID: \"cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7\") " Jun 25 18:38:47.284202 kubelet[2805]: I0625 18:38:47.283910 2805 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-cni-path\") pod \"cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7\" (UID: \"cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7\") " Jun 25 18:38:47.284202 kubelet[2805]: I0625 18:38:47.283947 2805 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-lib-modules\") pod \"cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7\" (UID: \"cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7\") " Jun 25 18:38:47.284202 kubelet[2805]: I0625 18:38:47.283976 2805 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-clustermesh-secrets\") pod \"cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7\" (UID: \"cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7\") " Jun 25 18:38:47.284202 kubelet[2805]: I0625 18:38:47.284016 2805 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-host-proc-sys-net\") pod \"cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7\" (UID: \"cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7\") " Jun 25 18:38:47.284202 kubelet[2805]: I0625 18:38:47.284042 2805 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88svp\" (UniqueName: \"kubernetes.io/projected/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-kube-api-access-88svp\") pod \"cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7\" (UID: \"cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7\") " Jun 25 18:38:47.284202 kubelet[2805]: I0625 18:38:47.284072 2805 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-host-proc-sys-kernel\") pod \"cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7\" (UID: \"cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7\") " Jun 25 18:38:47.284597 kubelet[2805]: I0625 18:38:47.284094 2805 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-hostproc\") pod \"cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7\" (UID: \"cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7\") " Jun 25 18:38:47.284597 kubelet[2805]: I0625 18:38:47.284118 2805 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtrww\" (UniqueName: \"kubernetes.io/projected/82ae9dfd-4054-4945-99d0-b9ee6c0b881f-kube-api-access-jtrww\") pod \"82ae9dfd-4054-4945-99d0-b9ee6c0b881f\" (UID: \"82ae9dfd-4054-4945-99d0-b9ee6c0b881f\") " Jun 25 18:38:47.284597 kubelet[2805]: I0625 18:38:47.284142 2805 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/82ae9dfd-4054-4945-99d0-b9ee6c0b881f-cilium-config-path\") pod \"82ae9dfd-4054-4945-99d0-b9ee6c0b881f\" (UID: \"82ae9dfd-4054-4945-99d0-b9ee6c0b881f\") " Jun 25 18:38:47.284597 kubelet[2805]: I0625 18:38:47.284166 2805 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-etc-cni-netd\") pod \"cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7\" (UID: \"cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7\") " Jun 25 18:38:47.284597 kubelet[2805]: I0625 18:38:47.284185 2805 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-xtables-lock\") pod \"cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7\" (UID: \"cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7\") " Jun 25 18:38:47.284597 kubelet[2805]: I0625 18:38:47.284205 2805 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-cilium-run\") pod \"cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7\" (UID: \"cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7\") " Jun 25 18:38:47.285011 kubelet[2805]: I0625 18:38:47.284228 2805 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-cilium-config-path\") pod \"cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7\" (UID: \"cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7\") " Jun 25 18:38:47.285011 kubelet[2805]: I0625 18:38:47.284248 2805 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-bpf-maps\") pod \"cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7\" (UID: \"cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7\") " Jun 25 18:38:47.285011 kubelet[2805]: I0625 18:38:47.284284 2805 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-hubble-tls\") pod \"cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7\" (UID: \"cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7\") " Jun 25 18:38:47.296318 kubelet[2805]: I0625 18:38:47.293833 2805 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-cni-path" (OuterVolumeSpecName: "cni-path") pod "cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7" (UID: "cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:38:47.296472 kubelet[2805]: I0625 18:38:47.296329 2805 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7" (UID: "cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:38:47.298741 kubelet[2805]: I0625 18:38:47.296152 2805 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7" (UID: "cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:38:47.298741 kubelet[2805]: I0625 18:38:47.296783 2805 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7" (UID: "cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:38:47.298741 kubelet[2805]: I0625 18:38:47.296803 2805 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7" (UID: "cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:38:47.300369 kubelet[2805]: I0625 18:38:47.300329 2805 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-kube-api-access-88svp" (OuterVolumeSpecName: "kube-api-access-88svp") pod "cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7" (UID: "cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7"). InnerVolumeSpecName "kube-api-access-88svp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 25 18:38:47.300477 kubelet[2805]: I0625 18:38:47.300380 2805 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7" (UID: "cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:38:47.300477 kubelet[2805]: I0625 18:38:47.300401 2805 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-hostproc" (OuterVolumeSpecName: "hostproc") pod "cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7" (UID: "cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:38:47.307221 kubelet[2805]: I0625 18:38:47.303603 2805 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82ae9dfd-4054-4945-99d0-b9ee6c0b881f-kube-api-access-jtrww" (OuterVolumeSpecName: "kube-api-access-jtrww") pod "82ae9dfd-4054-4945-99d0-b9ee6c0b881f" (UID: "82ae9dfd-4054-4945-99d0-b9ee6c0b881f"). InnerVolumeSpecName "kube-api-access-jtrww". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 25 18:38:47.307221 kubelet[2805]: I0625 18:38:47.305945 2805 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82ae9dfd-4054-4945-99d0-b9ee6c0b881f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "82ae9dfd-4054-4945-99d0-b9ee6c0b881f" (UID: "82ae9dfd-4054-4945-99d0-b9ee6c0b881f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 25 18:38:47.308189 kubelet[2805]: I0625 18:38:47.308146 2805 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7" (UID: "cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 25 18:38:47.308189 kubelet[2805]: I0625 18:38:47.308182 2805 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7" (UID: "cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:38:47.308424 kubelet[2805]: I0625 18:38:47.308203 2805 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7" (UID: "cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:38:47.309195 kubelet[2805]: I0625 18:38:47.309076 2805 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7" (UID: "cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 25 18:38:47.309679 kubelet[2805]: I0625 18:38:47.309113 2805 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7" (UID: "cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:38:47.311141 kubelet[2805]: I0625 18:38:47.311104 2805 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7" (UID: "cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 25 18:38:47.384929 kubelet[2805]: I0625 18:38:47.384788 2805 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-host-proc-sys-kernel\") on node \"ci-4012-0-0-f-7092d20389.novalocal\" DevicePath \"\"" Jun 25 18:38:47.384929 kubelet[2805]: I0625 18:38:47.384852 2805 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-hostproc\") on node \"ci-4012-0-0-f-7092d20389.novalocal\" DevicePath \"\"" Jun 25 18:38:47.384929 kubelet[2805]: I0625 18:38:47.384887 2805 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-jtrww\" (UniqueName: \"kubernetes.io/projected/82ae9dfd-4054-4945-99d0-b9ee6c0b881f-kube-api-access-jtrww\") on node \"ci-4012-0-0-f-7092d20389.novalocal\" DevicePath \"\"" Jun 25 18:38:47.384929 kubelet[2805]: I0625 18:38:47.384924 2805 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/82ae9dfd-4054-4945-99d0-b9ee6c0b881f-cilium-config-path\") on node \"ci-4012-0-0-f-7092d20389.novalocal\" DevicePath \"\"" Jun 25 18:38:47.384929 kubelet[2805]: I0625 18:38:47.384956 2805 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-etc-cni-netd\") on node \"ci-4012-0-0-f-7092d20389.novalocal\" DevicePath \"\"" Jun 25 18:38:47.386157 kubelet[2805]: I0625 18:38:47.384986 2805 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-xtables-lock\") on node \"ci-4012-0-0-f-7092d20389.novalocal\" DevicePath \"\"" Jun 25 18:38:47.386157 kubelet[2805]: I0625 18:38:47.385015 2805 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-cilium-run\") on node \"ci-4012-0-0-f-7092d20389.novalocal\" DevicePath \"\"" Jun 25 18:38:47.386157 kubelet[2805]: I0625 18:38:47.385046 2805 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-cilium-config-path\") on node \"ci-4012-0-0-f-7092d20389.novalocal\" DevicePath \"\"" Jun 25 18:38:47.386157 kubelet[2805]: I0625 18:38:47.385074 2805 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-bpf-maps\") on node \"ci-4012-0-0-f-7092d20389.novalocal\" DevicePath \"\"" Jun 25 18:38:47.386157 kubelet[2805]: I0625 18:38:47.385101 2805 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-hubble-tls\") on node \"ci-4012-0-0-f-7092d20389.novalocal\" DevicePath \"\"" Jun 25 18:38:47.386157 kubelet[2805]: I0625 18:38:47.385148 2805 reconciler_common.go:300] 
"Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-lib-modules\") on node \"ci-4012-0-0-f-7092d20389.novalocal\" DevicePath \"\"" Jun 25 18:38:47.386157 kubelet[2805]: I0625 18:38:47.385178 2805 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-clustermesh-secrets\") on node \"ci-4012-0-0-f-7092d20389.novalocal\" DevicePath \"\"" Jun 25 18:38:47.386564 kubelet[2805]: I0625 18:38:47.385205 2805 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-cilium-cgroup\") on node \"ci-4012-0-0-f-7092d20389.novalocal\" DevicePath \"\"" Jun 25 18:38:47.386564 kubelet[2805]: I0625 18:38:47.385233 2805 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-cni-path\") on node \"ci-4012-0-0-f-7092d20389.novalocal\" DevicePath \"\"" Jun 25 18:38:47.386564 kubelet[2805]: I0625 18:38:47.385265 2805 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-host-proc-sys-net\") on node \"ci-4012-0-0-f-7092d20389.novalocal\" DevicePath \"\"" Jun 25 18:38:47.386564 kubelet[2805]: I0625 18:38:47.385296 2805 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-88svp\" (UniqueName: \"kubernetes.io/projected/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7-kube-api-access-88svp\") on node \"ci-4012-0-0-f-7092d20389.novalocal\" DevicePath \"\"" Jun 25 18:38:47.429486 kubelet[2805]: I0625 18:38:47.428551 2805 scope.go:117] "RemoveContainer" containerID="633d75a9bda804e264676d6e010575203bdd885b05b82281c80b9d4b4fc87a08" Jun 25 18:38:47.468137 containerd[1577]: time="2024-06-25T18:38:47.467897680Z" level=info msg="RemoveContainer for \"633d75a9bda804e264676d6e010575203bdd885b05b82281c80b9d4b4fc87a08\"" Jun 25 18:38:47.489433 containerd[1577]: time="2024-06-25T18:38:47.489189021Z" level=info msg="RemoveContainer for \"633d75a9bda804e264676d6e010575203bdd885b05b82281c80b9d4b4fc87a08\" returns successfully" Jun 25 18:38:47.504333 kubelet[2805]: I0625 18:38:47.504133 2805 scope.go:117] "RemoveContainer" containerID="5aa5a4c7666c0a301cf815cb12474390c3097c1842a8e4315d6b510638c072dd" Jun 25 18:38:47.507559 containerd[1577]: time="2024-06-25T18:38:47.507444890Z" level=info msg="RemoveContainer for \"5aa5a4c7666c0a301cf815cb12474390c3097c1842a8e4315d6b510638c072dd\"" Jun 25 18:38:47.526838 containerd[1577]: time="2024-06-25T18:38:47.526731350Z" level=info msg="RemoveContainer for \"5aa5a4c7666c0a301cf815cb12474390c3097c1842a8e4315d6b510638c072dd\" returns successfully" Jun 25 18:38:47.528038 kubelet[2805]: I0625 18:38:47.527975 2805 scope.go:117] "RemoveContainer" containerID="542756e9ec6c7bf28fa71883bc1ee5890b4165b84284c149f1eac1afa4eac7d7" Jun 25 18:38:47.532980 containerd[1577]: time="2024-06-25T18:38:47.532802574Z" level=info msg="RemoveContainer for \"542756e9ec6c7bf28fa71883bc1ee5890b4165b84284c149f1eac1afa4eac7d7\"" Jun 25 18:38:47.538890 containerd[1577]: time="2024-06-25T18:38:47.538060642Z" level=info msg="RemoveContainer for \"542756e9ec6c7bf28fa71883bc1ee5890b4165b84284c149f1eac1afa4eac7d7\" returns successfully" Jun 25 18:38:47.539133 kubelet[2805]: I0625 18:38:47.539019 2805 scope.go:117] "RemoveContainer" 
containerID="0ad0b261246ad9a3d26263de3b8cddb2ce21f16b0d979dc1e0f4a8d397321280" Jun 25 18:38:47.543751 containerd[1577]: time="2024-06-25T18:38:47.543640925Z" level=info msg="RemoveContainer for \"0ad0b261246ad9a3d26263de3b8cddb2ce21f16b0d979dc1e0f4a8d397321280\"" Jun 25 18:38:47.549892 containerd[1577]: time="2024-06-25T18:38:47.549853733Z" level=info msg="RemoveContainer for \"0ad0b261246ad9a3d26263de3b8cddb2ce21f16b0d979dc1e0f4a8d397321280\" returns successfully" Jun 25 18:38:47.550310 kubelet[2805]: I0625 18:38:47.550128 2805 scope.go:117] "RemoveContainer" containerID="18301d42c591b7ce311121093dc50288c79c10fe85015b2962a3d6f6b6db311c" Jun 25 18:38:47.551858 containerd[1577]: time="2024-06-25T18:38:47.551691479Z" level=info msg="RemoveContainer for \"18301d42c591b7ce311121093dc50288c79c10fe85015b2962a3d6f6b6db311c\"" Jun 25 18:38:47.555704 containerd[1577]: time="2024-06-25T18:38:47.555607892Z" level=info msg="RemoveContainer for \"18301d42c591b7ce311121093dc50288c79c10fe85015b2962a3d6f6b6db311c\" returns successfully" Jun 25 18:38:47.555946 kubelet[2805]: I0625 18:38:47.555889 2805 scope.go:117] "RemoveContainer" containerID="633d75a9bda804e264676d6e010575203bdd885b05b82281c80b9d4b4fc87a08" Jun 25 18:38:47.556258 containerd[1577]: time="2024-06-25T18:38:47.556190976Z" level=error msg="ContainerStatus for \"633d75a9bda804e264676d6e010575203bdd885b05b82281c80b9d4b4fc87a08\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"633d75a9bda804e264676d6e010575203bdd885b05b82281c80b9d4b4fc87a08\": not found" Jun 25 18:38:47.556526 kubelet[2805]: E0625 18:38:47.556402 2805 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"633d75a9bda804e264676d6e010575203bdd885b05b82281c80b9d4b4fc87a08\": not found" containerID="633d75a9bda804e264676d6e010575203bdd885b05b82281c80b9d4b4fc87a08" Jun 25 18:38:47.567672 kubelet[2805]: I0625 18:38:47.567643 2805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"633d75a9bda804e264676d6e010575203bdd885b05b82281c80b9d4b4fc87a08"} err="failed to get container status \"633d75a9bda804e264676d6e010575203bdd885b05b82281c80b9d4b4fc87a08\": rpc error: code = NotFound desc = an error occurred when try to find container \"633d75a9bda804e264676d6e010575203bdd885b05b82281c80b9d4b4fc87a08\": not found" Jun 25 18:38:47.567947 kubelet[2805]: I0625 18:38:47.567827 2805 scope.go:117] "RemoveContainer" containerID="5aa5a4c7666c0a301cf815cb12474390c3097c1842a8e4315d6b510638c072dd" Jun 25 18:38:47.568183 containerd[1577]: time="2024-06-25T18:38:47.568134119Z" level=error msg="ContainerStatus for \"5aa5a4c7666c0a301cf815cb12474390c3097c1842a8e4315d6b510638c072dd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5aa5a4c7666c0a301cf815cb12474390c3097c1842a8e4315d6b510638c072dd\": not found" Jun 25 18:38:47.568433 kubelet[2805]: E0625 18:38:47.568324 2805 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5aa5a4c7666c0a301cf815cb12474390c3097c1842a8e4315d6b510638c072dd\": not found" containerID="5aa5a4c7666c0a301cf815cb12474390c3097c1842a8e4315d6b510638c072dd" Jun 25 18:38:47.568433 kubelet[2805]: I0625 18:38:47.568358 2805 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"5aa5a4c7666c0a301cf815cb12474390c3097c1842a8e4315d6b510638c072dd"} err="failed to get container status \"5aa5a4c7666c0a301cf815cb12474390c3097c1842a8e4315d6b510638c072dd\": rpc error: code = NotFound desc = an error occurred when try to find container \"5aa5a4c7666c0a301cf815cb12474390c3097c1842a8e4315d6b510638c072dd\": not found" Jun 25 18:38:47.568433 kubelet[2805]: I0625 18:38:47.568375 2805 scope.go:117] "RemoveContainer" containerID="542756e9ec6c7bf28fa71883bc1ee5890b4165b84284c149f1eac1afa4eac7d7" Jun 25 18:38:47.569048 containerd[1577]: time="2024-06-25T18:38:47.568747759Z" level=error msg="ContainerStatus for \"542756e9ec6c7bf28fa71883bc1ee5890b4165b84284c149f1eac1afa4eac7d7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"542756e9ec6c7bf28fa71883bc1ee5890b4165b84284c149f1eac1afa4eac7d7\": not found" Jun 25 18:38:47.569120 kubelet[2805]: E0625 18:38:47.568906 2805 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"542756e9ec6c7bf28fa71883bc1ee5890b4165b84284c149f1eac1afa4eac7d7\": not found" containerID="542756e9ec6c7bf28fa71883bc1ee5890b4165b84284c149f1eac1afa4eac7d7" Jun 25 18:38:47.569120 kubelet[2805]: I0625 18:38:47.568976 2805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"542756e9ec6c7bf28fa71883bc1ee5890b4165b84284c149f1eac1afa4eac7d7"} err="failed to get container status \"542756e9ec6c7bf28fa71883bc1ee5890b4165b84284c149f1eac1afa4eac7d7\": rpc error: code = NotFound desc = an error occurred when try to find container \"542756e9ec6c7bf28fa71883bc1ee5890b4165b84284c149f1eac1afa4eac7d7\": not found" Jun 25 18:38:47.569120 kubelet[2805]: I0625 18:38:47.568991 2805 scope.go:117] "RemoveContainer" containerID="0ad0b261246ad9a3d26263de3b8cddb2ce21f16b0d979dc1e0f4a8d397321280" Jun 25 18:38:47.569213 containerd[1577]: time="2024-06-25T18:38:47.569147880Z" level=error msg="ContainerStatus for \"0ad0b261246ad9a3d26263de3b8cddb2ce21f16b0d979dc1e0f4a8d397321280\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0ad0b261246ad9a3d26263de3b8cddb2ce21f16b0d979dc1e0f4a8d397321280\": not found" Jun 25 18:38:47.569318 kubelet[2805]: E0625 18:38:47.569306 2805 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0ad0b261246ad9a3d26263de3b8cddb2ce21f16b0d979dc1e0f4a8d397321280\": not found" containerID="0ad0b261246ad9a3d26263de3b8cddb2ce21f16b0d979dc1e0f4a8d397321280" Jun 25 18:38:47.569533 kubelet[2805]: I0625 18:38:47.569443 2805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0ad0b261246ad9a3d26263de3b8cddb2ce21f16b0d979dc1e0f4a8d397321280"} err="failed to get container status \"0ad0b261246ad9a3d26263de3b8cddb2ce21f16b0d979dc1e0f4a8d397321280\": rpc error: code = NotFound desc = an error occurred when try to find container \"0ad0b261246ad9a3d26263de3b8cddb2ce21f16b0d979dc1e0f4a8d397321280\": not found" Jun 25 18:38:47.569533 kubelet[2805]: I0625 18:38:47.569457 2805 scope.go:117] "RemoveContainer" containerID="18301d42c591b7ce311121093dc50288c79c10fe85015b2962a3d6f6b6db311c" Jun 25 18:38:47.569883 containerd[1577]: time="2024-06-25T18:38:47.569671942Z" level=error msg="ContainerStatus for 
\"18301d42c591b7ce311121093dc50288c79c10fe85015b2962a3d6f6b6db311c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"18301d42c591b7ce311121093dc50288c79c10fe85015b2962a3d6f6b6db311c\": not found" Jun 25 18:38:47.569946 kubelet[2805]: E0625 18:38:47.569798 2805 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"18301d42c591b7ce311121093dc50288c79c10fe85015b2962a3d6f6b6db311c\": not found" containerID="18301d42c591b7ce311121093dc50288c79c10fe85015b2962a3d6f6b6db311c" Jun 25 18:38:47.569946 kubelet[2805]: I0625 18:38:47.569821 2805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"18301d42c591b7ce311121093dc50288c79c10fe85015b2962a3d6f6b6db311c"} err="failed to get container status \"18301d42c591b7ce311121093dc50288c79c10fe85015b2962a3d6f6b6db311c\": rpc error: code = NotFound desc = an error occurred when try to find container \"18301d42c591b7ce311121093dc50288c79c10fe85015b2962a3d6f6b6db311c\": not found" Jun 25 18:38:47.569946 kubelet[2805]: I0625 18:38:47.569831 2805 scope.go:117] "RemoveContainer" containerID="e168a36204fef6def57d3577a07860ec6414b5bc6fdb62cadba60f04c6d8a1f9" Jun 25 18:38:47.571396 containerd[1577]: time="2024-06-25T18:38:47.571088759Z" level=info msg="RemoveContainer for \"e168a36204fef6def57d3577a07860ec6414b5bc6fdb62cadba60f04c6d8a1f9\"" Jun 25 18:38:47.574845 containerd[1577]: time="2024-06-25T18:38:47.574779819Z" level=info msg="RemoveContainer for \"e168a36204fef6def57d3577a07860ec6414b5bc6fdb62cadba60f04c6d8a1f9\" returns successfully" Jun 25 18:38:47.575066 kubelet[2805]: I0625 18:38:47.575046 2805 scope.go:117] "RemoveContainer" containerID="e168a36204fef6def57d3577a07860ec6414b5bc6fdb62cadba60f04c6d8a1f9" Jun 25 18:38:47.575320 containerd[1577]: time="2024-06-25T18:38:47.575264508Z" level=error msg="ContainerStatus for \"e168a36204fef6def57d3577a07860ec6414b5bc6fdb62cadba60f04c6d8a1f9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e168a36204fef6def57d3577a07860ec6414b5bc6fdb62cadba60f04c6d8a1f9\": not found" Jun 25 18:38:47.575544 kubelet[2805]: E0625 18:38:47.575463 2805 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e168a36204fef6def57d3577a07860ec6414b5bc6fdb62cadba60f04c6d8a1f9\": not found" containerID="e168a36204fef6def57d3577a07860ec6414b5bc6fdb62cadba60f04c6d8a1f9" Jun 25 18:38:47.575544 kubelet[2805]: I0625 18:38:47.575502 2805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e168a36204fef6def57d3577a07860ec6414b5bc6fdb62cadba60f04c6d8a1f9"} err="failed to get container status \"e168a36204fef6def57d3577a07860ec6414b5bc6fdb62cadba60f04c6d8a1f9\": rpc error: code = NotFound desc = an error occurred when try to find container \"e168a36204fef6def57d3577a07860ec6414b5bc6fdb62cadba60f04c6d8a1f9\": not found" Jun 25 18:38:47.966701 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e4a55d601ec28d694e821ae3fdb87ef47d0aa5086f5ee67e9340aa3750230b8-rootfs.mount: Deactivated successfully. Jun 25 18:38:47.967123 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35739967944fcc4ee72ada3765ca7edb46c6c909c904257f64417fc5de4c81a0-rootfs.mount: Deactivated successfully. 
Jun 25 18:38:47.967411 systemd[1]: var-lib-kubelet-pods-cdf0f49a\x2d0d13\x2d4f02\x2dbc8d\x2d3f3e65dc03b7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jun 25 18:38:47.967798 systemd[1]: var-lib-kubelet-pods-82ae9dfd\x2d4054\x2d4945\x2d99d0\x2db9ee6c0b881f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djtrww.mount: Deactivated successfully. Jun 25 18:38:47.968123 systemd[1]: var-lib-kubelet-pods-cdf0f49a\x2d0d13\x2d4f02\x2dbc8d\x2d3f3e65dc03b7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d88svp.mount: Deactivated successfully. Jun 25 18:38:47.968435 systemd[1]: var-lib-kubelet-pods-cdf0f49a\x2d0d13\x2d4f02\x2dbc8d\x2d3f3e65dc03b7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jun 25 18:38:48.629081 kubelet[2805]: I0625 18:38:48.628955 2805 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="82ae9dfd-4054-4945-99d0-b9ee6c0b881f" path="/var/lib/kubelet/pods/82ae9dfd-4054-4945-99d0-b9ee6c0b881f/volumes" Jun 25 18:38:48.630180 kubelet[2805]: I0625 18:38:48.630077 2805 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7" path="/var/lib/kubelet/pods/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7/volumes" Jun 25 18:38:48.957156 sshd[4371]: pam_unix(sshd:session): session closed for user core Jun 25 18:38:48.967622 systemd[1]: Started sshd@24-172.24.4.45:22-172.24.4.1:51226.service - OpenSSH per-connection server daemon (172.24.4.1:51226). Jun 25 18:38:48.971383 systemd[1]: sshd@23-172.24.4.45:22-172.24.4.1:51306.service: Deactivated successfully. Jun 25 18:38:48.982188 systemd[1]: session-26.scope: Deactivated successfully. Jun 25 18:38:48.990302 systemd-logind[1559]: Session 26 logged out. Waiting for processes to exit. Jun 25 18:38:48.994172 systemd-logind[1559]: Removed session 26. Jun 25 18:38:50.080822 sshd[4539]: Accepted publickey for core from 172.24.4.1 port 51226 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:38:50.084138 sshd[4539]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:38:50.097025 systemd-logind[1559]: New session 27 of user core. Jun 25 18:38:50.102622 systemd[1]: Started session-27.scope - Session 27 of User core. 
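The mount unit names deactivated above (var-lib-kubelet-pods-cdf0f49a\x2d0d13\x2d...-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount and similar) are systemd's escaped form of the kubelet volume paths: "/" separators become "-", while bytes such as "-" and "~" are written as \x2d and \x7e. The following Go sketch is a simplified approximation of that rule, not systemd's exact algorithm (see systemd-escape --path for the real tool; leading-dot and other edge cases are omitted):

package main

import (
	"fmt"
	"strings"
)

// escapePath roughly mimics how systemd turns a mount point path into a unit
// name: trim slashes, map "/" to "-", keep [A-Za-z0-9:_.], and write every
// other byte as \xNN.
func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == '_', c == '.', c == ':':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String() + ".mount"
}

func main() {
	// Reproduces the shape of the first unit name deactivated above.
	fmt.Println(escapePath("/var/lib/kubelet/pods/cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7/volumes/kubernetes.io~secret/clustermesh-secrets"))
}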
Jun 25 18:38:50.837944 kubelet[2805]: E0625 18:38:50.837850 2805 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jun 25 18:38:52.242104 kubelet[2805]: I0625 18:38:52.241926 2805 topology_manager.go:215] "Topology Admit Handler" podUID="08e4f398-8772-4622-bde6-0e1cb563addd" podNamespace="kube-system" podName="cilium-gbqhg" Jun 25 18:38:52.251776 kubelet[2805]: E0625 18:38:52.251622 2805 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7" containerName="mount-cgroup" Jun 25 18:38:52.251776 kubelet[2805]: E0625 18:38:52.251661 2805 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7" containerName="mount-bpf-fs" Jun 25 18:38:52.251776 kubelet[2805]: E0625 18:38:52.251671 2805 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7" containerName="cilium-agent" Jun 25 18:38:52.251776 kubelet[2805]: E0625 18:38:52.251680 2805 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="82ae9dfd-4054-4945-99d0-b9ee6c0b881f" containerName="cilium-operator" Jun 25 18:38:52.251776 kubelet[2805]: E0625 18:38:52.251689 2805 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7" containerName="apply-sysctl-overwrites" Jun 25 18:38:52.251776 kubelet[2805]: E0625 18:38:52.251696 2805 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7" containerName="clean-cilium-state" Jun 25 18:38:52.256734 kubelet[2805]: I0625 18:38:52.256510 2805 memory_manager.go:346] "RemoveStaleState removing state" podUID="82ae9dfd-4054-4945-99d0-b9ee6c0b881f" containerName="cilium-operator" Jun 25 18:38:52.256734 kubelet[2805]: I0625 18:38:52.256534 2805 memory_manager.go:346] "RemoveStaleState removing state" podUID="cdf0f49a-0d13-4f02-bc8d-3f3e65dc03b7" containerName="cilium-agent" Jun 25 18:38:52.379651 sshd[4539]: pam_unix(sshd:session): session closed for user core Jun 25 18:38:52.392062 systemd[1]: Started sshd@25-172.24.4.45:22-172.24.4.1:51242.service - OpenSSH per-connection server daemon (172.24.4.1:51242). Jun 25 18:38:52.397164 systemd[1]: sshd@24-172.24.4.45:22-172.24.4.1:51226.service: Deactivated successfully. Jun 25 18:38:52.416412 systemd[1]: session-27.scope: Deactivated successfully. Jun 25 18:38:52.422154 systemd-logind[1559]: Session 27 logged out. Waiting for processes to exit. Jun 25 18:38:52.426835 systemd-logind[1559]: Removed session 27. 
Jun 25 18:38:52.427586 kubelet[2805]: I0625 18:38:52.426648 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/08e4f398-8772-4622-bde6-0e1cb563addd-hubble-tls\") pod \"cilium-gbqhg\" (UID: \"08e4f398-8772-4622-bde6-0e1cb563addd\") " pod="kube-system/cilium-gbqhg" Jun 25 18:38:52.427586 kubelet[2805]: I0625 18:38:52.426831 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/08e4f398-8772-4622-bde6-0e1cb563addd-cilium-cgroup\") pod \"cilium-gbqhg\" (UID: \"08e4f398-8772-4622-bde6-0e1cb563addd\") " pod="kube-system/cilium-gbqhg" Jun 25 18:38:52.427586 kubelet[2805]: I0625 18:38:52.426949 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/08e4f398-8772-4622-bde6-0e1cb563addd-clustermesh-secrets\") pod \"cilium-gbqhg\" (UID: \"08e4f398-8772-4622-bde6-0e1cb563addd\") " pod="kube-system/cilium-gbqhg" Jun 25 18:38:52.427586 kubelet[2805]: I0625 18:38:52.427036 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/08e4f398-8772-4622-bde6-0e1cb563addd-bpf-maps\") pod \"cilium-gbqhg\" (UID: \"08e4f398-8772-4622-bde6-0e1cb563addd\") " pod="kube-system/cilium-gbqhg" Jun 25 18:38:52.427586 kubelet[2805]: I0625 18:38:52.427174 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2snx\" (UniqueName: \"kubernetes.io/projected/08e4f398-8772-4622-bde6-0e1cb563addd-kube-api-access-t2snx\") pod \"cilium-gbqhg\" (UID: \"08e4f398-8772-4622-bde6-0e1cb563addd\") " pod="kube-system/cilium-gbqhg" Jun 25 18:38:52.427586 kubelet[2805]: I0625 18:38:52.427301 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/08e4f398-8772-4622-bde6-0e1cb563addd-cilium-config-path\") pod \"cilium-gbqhg\" (UID: \"08e4f398-8772-4622-bde6-0e1cb563addd\") " pod="kube-system/cilium-gbqhg" Jun 25 18:38:52.428210 kubelet[2805]: I0625 18:38:52.427386 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/08e4f398-8772-4622-bde6-0e1cb563addd-cilium-run\") pod \"cilium-gbqhg\" (UID: \"08e4f398-8772-4622-bde6-0e1cb563addd\") " pod="kube-system/cilium-gbqhg" Jun 25 18:38:52.428210 kubelet[2805]: I0625 18:38:52.427471 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/08e4f398-8772-4622-bde6-0e1cb563addd-host-proc-sys-net\") pod \"cilium-gbqhg\" (UID: \"08e4f398-8772-4622-bde6-0e1cb563addd\") " pod="kube-system/cilium-gbqhg" Jun 25 18:38:52.428210 kubelet[2805]: I0625 18:38:52.427551 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/08e4f398-8772-4622-bde6-0e1cb563addd-host-proc-sys-kernel\") pod \"cilium-gbqhg\" (UID: \"08e4f398-8772-4622-bde6-0e1cb563addd\") " pod="kube-system/cilium-gbqhg" Jun 25 18:38:52.428210 kubelet[2805]: I0625 18:38:52.427634 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"hostproc\" (UniqueName: \"kubernetes.io/host-path/08e4f398-8772-4622-bde6-0e1cb563addd-hostproc\") pod \"cilium-gbqhg\" (UID: \"08e4f398-8772-4622-bde6-0e1cb563addd\") " pod="kube-system/cilium-gbqhg" Jun 25 18:38:52.428210 kubelet[2805]: I0625 18:38:52.427792 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/08e4f398-8772-4622-bde6-0e1cb563addd-cni-path\") pod \"cilium-gbqhg\" (UID: \"08e4f398-8772-4622-bde6-0e1cb563addd\") " pod="kube-system/cilium-gbqhg" Jun 25 18:38:52.428210 kubelet[2805]: I0625 18:38:52.427886 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/08e4f398-8772-4622-bde6-0e1cb563addd-xtables-lock\") pod \"cilium-gbqhg\" (UID: \"08e4f398-8772-4622-bde6-0e1cb563addd\") " pod="kube-system/cilium-gbqhg" Jun 25 18:38:52.428606 kubelet[2805]: I0625 18:38:52.427978 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/08e4f398-8772-4622-bde6-0e1cb563addd-cilium-ipsec-secrets\") pod \"cilium-gbqhg\" (UID: \"08e4f398-8772-4622-bde6-0e1cb563addd\") " pod="kube-system/cilium-gbqhg" Jun 25 18:38:52.428606 kubelet[2805]: I0625 18:38:52.428066 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/08e4f398-8772-4622-bde6-0e1cb563addd-etc-cni-netd\") pod \"cilium-gbqhg\" (UID: \"08e4f398-8772-4622-bde6-0e1cb563addd\") " pod="kube-system/cilium-gbqhg" Jun 25 18:38:52.428606 kubelet[2805]: I0625 18:38:52.428151 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/08e4f398-8772-4622-bde6-0e1cb563addd-lib-modules\") pod \"cilium-gbqhg\" (UID: \"08e4f398-8772-4622-bde6-0e1cb563addd\") " pod="kube-system/cilium-gbqhg" Jun 25 18:38:52.865040 containerd[1577]: time="2024-06-25T18:38:52.864919185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gbqhg,Uid:08e4f398-8772-4622-bde6-0e1cb563addd,Namespace:kube-system,Attempt:0,}" Jun 25 18:38:52.917421 containerd[1577]: time="2024-06-25T18:38:52.917211472Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:38:52.917924 containerd[1577]: time="2024-06-25T18:38:52.917426465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:38:52.917924 containerd[1577]: time="2024-06-25T18:38:52.917545759Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:38:52.917924 containerd[1577]: time="2024-06-25T18:38:52.917592646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:38:52.989012 containerd[1577]: time="2024-06-25T18:38:52.988908941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gbqhg,Uid:08e4f398-8772-4622-bde6-0e1cb563addd,Namespace:kube-system,Attempt:0,} returns sandbox id \"c13ee363377762ae745f72361c7530020bbaf5ccc5cd427f4d184bbf4fad84ea\"" Jun 25 18:38:52.998309 containerd[1577]: time="2024-06-25T18:38:52.998272787Z" level=info msg="CreateContainer within sandbox \"c13ee363377762ae745f72361c7530020bbaf5ccc5cd427f4d184bbf4fad84ea\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 25 18:38:53.012342 containerd[1577]: time="2024-06-25T18:38:53.011903727Z" level=info msg="CreateContainer within sandbox \"c13ee363377762ae745f72361c7530020bbaf5ccc5cd427f4d184bbf4fad84ea\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c67dc4bb47c5f8fa5cde2af6e1eff3ab607bcf3e7136c31971255f8d5049963c\"" Jun 25 18:38:53.012573 containerd[1577]: time="2024-06-25T18:38:53.012535356Z" level=info msg="StartContainer for \"c67dc4bb47c5f8fa5cde2af6e1eff3ab607bcf3e7136c31971255f8d5049963c\"" Jun 25 18:38:53.062345 containerd[1577]: time="2024-06-25T18:38:53.062287553Z" level=info msg="StartContainer for \"c67dc4bb47c5f8fa5cde2af6e1eff3ab607bcf3e7136c31971255f8d5049963c\" returns successfully" Jun 25 18:38:53.112754 containerd[1577]: time="2024-06-25T18:38:53.112685055Z" level=info msg="shim disconnected" id=c67dc4bb47c5f8fa5cde2af6e1eff3ab607bcf3e7136c31971255f8d5049963c namespace=k8s.io Jun 25 18:38:53.112754 containerd[1577]: time="2024-06-25T18:38:53.112746301Z" level=warning msg="cleaning up after shim disconnected" id=c67dc4bb47c5f8fa5cde2af6e1eff3ab607bcf3e7136c31971255f8d5049963c namespace=k8s.io Jun 25 18:38:53.112754 containerd[1577]: time="2024-06-25T18:38:53.112757051Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:38:53.512406 containerd[1577]: time="2024-06-25T18:38:53.511694197Z" level=info msg="CreateContainer within sandbox \"c13ee363377762ae745f72361c7530020bbaf5ccc5cd427f4d184bbf4fad84ea\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 25 18:38:53.542216 containerd[1577]: time="2024-06-25T18:38:53.541527849Z" level=info msg="CreateContainer within sandbox \"c13ee363377762ae745f72361c7530020bbaf5ccc5cd427f4d184bbf4fad84ea\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a80218cdefa1aad993bb2461af727d8b06597d83f8e35ddb29ac55a69da82b78\"" Jun 25 18:38:53.547227 containerd[1577]: time="2024-06-25T18:38:53.547139321Z" level=info msg="StartContainer for \"a80218cdefa1aad993bb2461af727d8b06597d83f8e35ddb29ac55a69da82b78\"" Jun 25 18:38:53.603693 systemd[1]: run-containerd-runc-k8s.io-a80218cdefa1aad993bb2461af727d8b06597d83f8e35ddb29ac55a69da82b78-runc.1tKYZd.mount: Deactivated successfully. 
Jun 25 18:38:53.626471 kubelet[2805]: I0625 18:38:53.626444 2805 setters.go:552] "Node became not ready" node="ci-4012-0-0-f-7092d20389.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-06-25T18:38:53Z","lastTransitionTime":"2024-06-25T18:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jun 25 18:38:53.640940 containerd[1577]: time="2024-06-25T18:38:53.640838077Z" level=info msg="StartContainer for \"a80218cdefa1aad993bb2461af727d8b06597d83f8e35ddb29ac55a69da82b78\" returns successfully" Jun 25 18:38:53.670437 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a80218cdefa1aad993bb2461af727d8b06597d83f8e35ddb29ac55a69da82b78-rootfs.mount: Deactivated successfully. Jun 25 18:38:53.679119 containerd[1577]: time="2024-06-25T18:38:53.679010299Z" level=info msg="shim disconnected" id=a80218cdefa1aad993bb2461af727d8b06597d83f8e35ddb29ac55a69da82b78 namespace=k8s.io Jun 25 18:38:53.679301 containerd[1577]: time="2024-06-25T18:38:53.679227468Z" level=warning msg="cleaning up after shim disconnected" id=a80218cdefa1aad993bb2461af727d8b06597d83f8e35ddb29ac55a69da82b78 namespace=k8s.io Jun 25 18:38:53.679301 containerd[1577]: time="2024-06-25T18:38:53.679244160Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:38:53.685705 sshd[4553]: Accepted publickey for core from 172.24.4.1 port 51242 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:38:53.686902 sshd[4553]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:38:53.694077 systemd-logind[1559]: New session 28 of user core. Jun 25 18:38:53.699093 systemd[1]: Started session-28.scope - Session 28 of User core. Jun 25 18:38:54.304129 sshd[4553]: pam_unix(sshd:session): session closed for user core Jun 25 18:38:54.316358 systemd[1]: Started sshd@26-172.24.4.45:22-172.24.4.1:51258.service - OpenSSH per-connection server daemon (172.24.4.1:51258). Jun 25 18:38:54.317398 systemd[1]: sshd@25-172.24.4.45:22-172.24.4.1:51242.service: Deactivated successfully. Jun 25 18:38:54.329140 systemd-logind[1559]: Session 28 logged out. Waiting for processes to exit. Jun 25 18:38:54.332082 systemd[1]: session-28.scope: Deactivated successfully. Jun 25 18:38:54.336128 systemd-logind[1559]: Removed session 28. 
Jun 25 18:38:54.526667 containerd[1577]: time="2024-06-25T18:38:54.526585755Z" level=info msg="CreateContainer within sandbox \"c13ee363377762ae745f72361c7530020bbaf5ccc5cd427f4d184bbf4fad84ea\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 25 18:38:54.572376 containerd[1577]: time="2024-06-25T18:38:54.572062200Z" level=info msg="CreateContainer within sandbox \"c13ee363377762ae745f72361c7530020bbaf5ccc5cd427f4d184bbf4fad84ea\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8dc2094b0331745730239d931ba135d4860dbdf0071883ce2b545adca3c820ab\"" Jun 25 18:38:54.575411 containerd[1577]: time="2024-06-25T18:38:54.575012338Z" level=info msg="StartContainer for \"8dc2094b0331745730239d931ba135d4860dbdf0071883ce2b545adca3c820ab\"" Jun 25 18:38:54.671067 containerd[1577]: time="2024-06-25T18:38:54.670966100Z" level=info msg="StartContainer for \"8dc2094b0331745730239d931ba135d4860dbdf0071883ce2b545adca3c820ab\" returns successfully" Jun 25 18:38:54.693293 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8dc2094b0331745730239d931ba135d4860dbdf0071883ce2b545adca3c820ab-rootfs.mount: Deactivated successfully. Jun 25 18:38:54.703444 containerd[1577]: time="2024-06-25T18:38:54.703268519Z" level=info msg="shim disconnected" id=8dc2094b0331745730239d931ba135d4860dbdf0071883ce2b545adca3c820ab namespace=k8s.io Jun 25 18:38:54.703444 containerd[1577]: time="2024-06-25T18:38:54.703319776Z" level=warning msg="cleaning up after shim disconnected" id=8dc2094b0331745730239d931ba135d4860dbdf0071883ce2b545adca3c820ab namespace=k8s.io Jun 25 18:38:54.703444 containerd[1577]: time="2024-06-25T18:38:54.703330947Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:38:55.532733 containerd[1577]: time="2024-06-25T18:38:55.531095201Z" level=info msg="CreateContainer within sandbox \"c13ee363377762ae745f72361c7530020bbaf5ccc5cd427f4d184bbf4fad84ea\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 25 18:38:55.560009 containerd[1577]: time="2024-06-25T18:38:55.559823936Z" level=info msg="CreateContainer within sandbox \"c13ee363377762ae745f72361c7530020bbaf5ccc5cd427f4d184bbf4fad84ea\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"11c13ab95f751152b58b135d6770015b055d1e1d67bb2368f2fa852f2680de10\"" Jun 25 18:38:55.561691 containerd[1577]: time="2024-06-25T18:38:55.560521801Z" level=info msg="StartContainer for \"11c13ab95f751152b58b135d6770015b055d1e1d67bb2368f2fa852f2680de10\"" Jun 25 18:38:55.628169 containerd[1577]: time="2024-06-25T18:38:55.628055067Z" level=info msg="StartContainer for \"11c13ab95f751152b58b135d6770015b055d1e1d67bb2368f2fa852f2680de10\" returns successfully" Jun 25 18:38:55.647251 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-11c13ab95f751152b58b135d6770015b055d1e1d67bb2368f2fa852f2680de10-rootfs.mount: Deactivated successfully. 
Jun 25 18:38:55.655730 containerd[1577]: time="2024-06-25T18:38:55.655526252Z" level=info msg="shim disconnected" id=11c13ab95f751152b58b135d6770015b055d1e1d67bb2368f2fa852f2680de10 namespace=k8s.io Jun 25 18:38:55.655730 containerd[1577]: time="2024-06-25T18:38:55.655683177Z" level=warning msg="cleaning up after shim disconnected" id=11c13ab95f751152b58b135d6770015b055d1e1d67bb2368f2fa852f2680de10 namespace=k8s.io Jun 25 18:38:55.655730 containerd[1577]: time="2024-06-25T18:38:55.655697404Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:38:55.842877 kubelet[2805]: E0625 18:38:55.842674 2805 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jun 25 18:38:55.890586 sshd[4736]: Accepted publickey for core from 172.24.4.1 port 51258 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:38:55.894055 sshd[4736]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:38:55.904598 systemd-logind[1559]: New session 29 of user core. Jun 25 18:38:55.913390 systemd[1]: Started session-29.scope - Session 29 of User core. Jun 25 18:38:56.552130 containerd[1577]: time="2024-06-25T18:38:56.551422624Z" level=info msg="CreateContainer within sandbox \"c13ee363377762ae745f72361c7530020bbaf5ccc5cd427f4d184bbf4fad84ea\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 25 18:38:56.591308 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount83973058.mount: Deactivated successfully. Jun 25 18:38:56.616226 containerd[1577]: time="2024-06-25T18:38:56.616151238Z" level=info msg="CreateContainer within sandbox \"c13ee363377762ae745f72361c7530020bbaf5ccc5cd427f4d184bbf4fad84ea\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ed00a6de342a88085fa1dccd024b8d06371559126d82ca977f85338d1260357e\"" Jun 25 18:38:56.617446 containerd[1577]: time="2024-06-25T18:38:56.617423095Z" level=info msg="StartContainer for \"ed00a6de342a88085fa1dccd024b8d06371559126d82ca977f85338d1260357e\"" Jun 25 18:38:56.655365 systemd[1]: run-containerd-runc-k8s.io-ed00a6de342a88085fa1dccd024b8d06371559126d82ca977f85338d1260357e-runc.x0C0K3.mount: Deactivated successfully. 
Jun 25 18:38:56.691485 containerd[1577]: time="2024-06-25T18:38:56.690917530Z" level=info msg="StartContainer for \"ed00a6de342a88085fa1dccd024b8d06371559126d82ca977f85338d1260357e\" returns successfully" Jun 25 18:38:57.511780 kernel: cryptd: max_cpu_qlen set to 1000 Jun 25 18:38:57.587209 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic)))) Jun 25 18:39:01.027032 systemd-networkd[1205]: lxc_health: Link UP Jun 25 18:39:01.072897 systemd-networkd[1205]: lxc_health: Gained carrier Jun 25 18:39:02.292218 systemd-networkd[1205]: lxc_health: Gained IPv6LL Jun 25 18:39:02.900446 kubelet[2805]: I0625 18:39:02.900396 2805 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-gbqhg" podStartSLOduration=10.900353237000001 podCreationTimestamp="2024-06-25 18:38:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:38:57.579512184 +0000 UTC m=+157.139461190" watchObservedRunningTime="2024-06-25 18:39:02.900353237 +0000 UTC m=+162.460302243" Jun 25 18:39:03.482934 kubelet[2805]: E0625 18:39:03.482835 2805 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:51952->127.0.0.1:41465: write tcp 127.0.0.1:51952->127.0.0.1:41465: write: broken pipe Jun 25 18:39:08.239299 sshd[4736]: pam_unix(sshd:session): session closed for user core Jun 25 18:39:08.247981 systemd[1]: sshd@26-172.24.4.45:22-172.24.4.1:51258.service: Deactivated successfully. Jun 25 18:39:08.257853 systemd-logind[1559]: Session 29 logged out. Waiting for processes to exit. Jun 25 18:39:08.259643 systemd[1]: session-29.scope: Deactivated successfully. Jun 25 18:39:08.263601 systemd-logind[1559]: Removed session 29.
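The podStartSLOduration figure in the pod_startup_latency_tracker entry above follows directly from the two timestamps logged alongside it: cilium-gbqhg was created at 18:38:52 and observed running at 18:39:02.900353237, and no image-pull time is subtracted because both pull timestamps are the zero value. A small Go check of that arithmetic (assumption: both timestamps are UTC exactly as logged):

package main

import (
	"fmt"
	"time"
)

func main() {
	created, _ := time.Parse(time.RFC3339Nano, "2024-06-25T18:38:52Z")
	running, _ := time.Parse(time.RFC3339Nano, "2024-06-25T18:39:02.900353237Z")
	// Prints 10.900353237, in line with podStartSLOduration in the entry above.
	fmt.Printf("%.9f seconds\n", running.Sub(created).Seconds())
}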