Jun 25 18:51:39.055166 kernel: Linux version 6.6.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Tue Jun 25 17:21:28 -00 2024 Jun 25 18:51:39.055191 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8 Jun 25 18:51:39.055204 kernel: BIOS-provided physical RAM map: Jun 25 18:51:39.055212 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jun 25 18:51:39.055220 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jun 25 18:51:39.055228 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jun 25 18:51:39.055238 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable Jun 25 18:51:39.055246 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved Jun 25 18:51:39.055254 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jun 25 18:51:39.055264 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jun 25 18:51:39.055273 kernel: NX (Execute Disable) protection: active Jun 25 18:51:39.055281 kernel: APIC: Static calls initialized Jun 25 18:51:39.055289 kernel: SMBIOS 2.8 present. Jun 25 18:51:39.055297 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014 Jun 25 18:51:39.055307 kernel: Hypervisor detected: KVM Jun 25 18:51:39.055318 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jun 25 18:51:39.055327 kernel: kvm-clock: using sched offset of 4476788594 cycles Jun 25 18:51:39.055336 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jun 25 18:51:39.055345 kernel: tsc: Detected 1996.249 MHz processor Jun 25 18:51:39.055354 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 25 18:51:39.055363 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 25 18:51:39.055372 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 Jun 25 18:51:39.055381 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jun 25 18:51:39.055390 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 25 18:51:39.055401 kernel: ACPI: Early table checksum verification disabled Jun 25 18:51:39.055410 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS ) Jun 25 18:51:39.055419 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 18:51:39.055429 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 18:51:39.055438 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 18:51:39.055447 kernel: ACPI: FACS 0x000000007FFE0000 000040 Jun 25 18:51:39.055455 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 18:51:39.055464 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 18:51:39.055473 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f] Jun 25 18:51:39.055484 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b] Jun 25 18:51:39.055493 kernel: ACPI: Reserving FACS 
table memory at [mem 0x7ffe0000-0x7ffe003f] Jun 25 18:51:39.055502 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17a0-0x7ffe181f] Jun 25 18:51:39.055511 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847] Jun 25 18:51:39.055520 kernel: No NUMA configuration found Jun 25 18:51:39.055529 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff] Jun 25 18:51:39.055538 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff] Jun 25 18:51:39.055550 kernel: Zone ranges: Jun 25 18:51:39.055561 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 25 18:51:39.055571 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff] Jun 25 18:51:39.055580 kernel: Normal empty Jun 25 18:51:39.055589 kernel: Movable zone start for each node Jun 25 18:51:39.055599 kernel: Early memory node ranges Jun 25 18:51:39.055608 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jun 25 18:51:39.055617 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff] Jun 25 18:51:39.055628 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff] Jun 25 18:51:39.055638 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 25 18:51:39.055647 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jun 25 18:51:39.055657 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges Jun 25 18:51:39.055666 kernel: ACPI: PM-Timer IO Port: 0x608 Jun 25 18:51:39.055675 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jun 25 18:51:39.055684 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jun 25 18:51:39.055694 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jun 25 18:51:39.055703 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jun 25 18:51:39.055714 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jun 25 18:51:39.055724 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jun 25 18:51:39.055733 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jun 25 18:51:39.055743 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 25 18:51:39.055752 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jun 25 18:51:39.055762 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jun 25 18:51:39.055773 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Jun 25 18:51:39.055781 kernel: Booting paravirtualized kernel on KVM Jun 25 18:51:39.055790 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 25 18:51:39.055801 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jun 25 18:51:39.055810 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576 Jun 25 18:51:39.055819 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152 Jun 25 18:51:39.055828 kernel: pcpu-alloc: [0] 0 1 Jun 25 18:51:39.055836 kernel: kvm-guest: PV spinlocks disabled, no host support Jun 25 18:51:39.055847 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8 Jun 25 18:51:39.055856 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will 
be passed to user space. Jun 25 18:51:39.055865 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 25 18:51:39.055875 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jun 25 18:51:39.055884 kernel: Fallback order for Node 0: 0 Jun 25 18:51:39.055893 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515805 Jun 25 18:51:39.055902 kernel: Policy zone: DMA32 Jun 25 18:51:39.055910 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 25 18:51:39.055919 kernel: Memory: 1965064K/2096620K available (12288K kernel code, 2302K rwdata, 22636K rodata, 49384K init, 1964K bss, 131296K reserved, 0K cma-reserved) Jun 25 18:51:39.057476 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 25 18:51:39.057490 kernel: ftrace: allocating 37650 entries in 148 pages Jun 25 18:51:39.057502 kernel: ftrace: allocated 148 pages with 3 groups Jun 25 18:51:39.057511 kernel: Dynamic Preempt: voluntary Jun 25 18:51:39.057519 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 25 18:51:39.057529 kernel: rcu: RCU event tracing is enabled. Jun 25 18:51:39.057538 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 25 18:51:39.057547 kernel: Trampoline variant of Tasks RCU enabled. Jun 25 18:51:39.057556 kernel: Rude variant of Tasks RCU enabled. Jun 25 18:51:39.057564 kernel: Tracing variant of Tasks RCU enabled. Jun 25 18:51:39.057573 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 25 18:51:39.057582 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 25 18:51:39.057593 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jun 25 18:51:39.057602 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 25 18:51:39.057611 kernel: Console: colour VGA+ 80x25 Jun 25 18:51:39.057619 kernel: printk: console [tty0] enabled Jun 25 18:51:39.057628 kernel: printk: console [ttyS0] enabled Jun 25 18:51:39.057637 kernel: ACPI: Core revision 20230628 Jun 25 18:51:39.057645 kernel: APIC: Switch to symmetric I/O mode setup Jun 25 18:51:39.057654 kernel: x2apic enabled Jun 25 18:51:39.057663 kernel: APIC: Switched APIC routing to: physical x2apic Jun 25 18:51:39.057674 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jun 25 18:51:39.057683 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jun 25 18:51:39.057692 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249) Jun 25 18:51:39.057701 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jun 25 18:51:39.057709 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jun 25 18:51:39.057718 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 25 18:51:39.057727 kernel: Spectre V2 : Mitigation: Retpolines Jun 25 18:51:39.057736 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jun 25 18:51:39.057745 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jun 25 18:51:39.057755 kernel: Speculative Store Bypass: Vulnerable Jun 25 18:51:39.057764 kernel: x86/fpu: x87 FPU will use FXSAVE Jun 25 18:51:39.057772 kernel: Freeing SMP alternatives memory: 32K Jun 25 18:51:39.057781 kernel: pid_max: default: 32768 minimum: 301 Jun 25 18:51:39.057790 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Jun 25 18:51:39.057799 kernel: SELinux: Initializing. 
Jun 25 18:51:39.057807 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 25 18:51:39.057817 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 25 18:51:39.057835 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) Jun 25 18:51:39.057845 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jun 25 18:51:39.057854 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jun 25 18:51:39.057865 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jun 25 18:51:39.057874 kernel: Performance Events: AMD PMU driver. Jun 25 18:51:39.057883 kernel: ... version: 0 Jun 25 18:51:39.057893 kernel: ... bit width: 48 Jun 25 18:51:39.057902 kernel: ... generic registers: 4 Jun 25 18:51:39.057913 kernel: ... value mask: 0000ffffffffffff Jun 25 18:51:39.057922 kernel: ... max period: 00007fffffffffff Jun 25 18:51:39.057951 kernel: ... fixed-purpose events: 0 Jun 25 18:51:39.057961 kernel: ... event mask: 000000000000000f Jun 25 18:51:39.057971 kernel: signal: max sigframe size: 1440 Jun 25 18:51:39.057980 kernel: rcu: Hierarchical SRCU implementation. Jun 25 18:51:39.057989 kernel: rcu: Max phase no-delay instances is 400. Jun 25 18:51:39.057999 kernel: smp: Bringing up secondary CPUs ... Jun 25 18:51:39.058008 kernel: smpboot: x86: Booting SMP configuration: Jun 25 18:51:39.058018 kernel: .... node #0, CPUs: #1 Jun 25 18:51:39.058029 kernel: smp: Brought up 1 node, 2 CPUs Jun 25 18:51:39.058038 kernel: smpboot: Max logical packages: 2 Jun 25 18:51:39.058048 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) Jun 25 18:51:39.058057 kernel: devtmpfs: initialized Jun 25 18:51:39.058066 kernel: x86/mm: Memory block size: 128MB Jun 25 18:51:39.058076 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 25 18:51:39.058085 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 25 18:51:39.058094 kernel: pinctrl core: initialized pinctrl subsystem Jun 25 18:51:39.058104 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 25 18:51:39.058115 kernel: audit: initializing netlink subsys (disabled) Jun 25 18:51:39.058124 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 25 18:51:39.058133 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 25 18:51:39.058143 kernel: audit: type=2000 audit(1719341497.262:1): state=initialized audit_enabled=0 res=1 Jun 25 18:51:39.058152 kernel: cpuidle: using governor menu Jun 25 18:51:39.058161 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 25 18:51:39.058170 kernel: dca service started, version 1.12.1 Jun 25 18:51:39.058179 kernel: PCI: Using configuration type 1 for base access Jun 25 18:51:39.058189 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jun 25 18:51:39.058200 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 25 18:51:39.058209 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 25 18:51:39.058219 kernel: ACPI: Added _OSI(Module Device) Jun 25 18:51:39.058228 kernel: ACPI: Added _OSI(Processor Device) Jun 25 18:51:39.058237 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jun 25 18:51:39.058247 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 25 18:51:39.058256 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 25 18:51:39.058265 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jun 25 18:51:39.058274 kernel: ACPI: Interpreter enabled Jun 25 18:51:39.058285 kernel: ACPI: PM: (supports S0 S3 S5) Jun 25 18:51:39.058295 kernel: ACPI: Using IOAPIC for interrupt routing Jun 25 18:51:39.058304 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 25 18:51:39.058314 kernel: PCI: Using E820 reservations for host bridge windows Jun 25 18:51:39.058332 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jun 25 18:51:39.058342 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jun 25 18:51:39.058479 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jun 25 18:51:39.058586 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jun 25 18:51:39.058700 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jun 25 18:51:39.058716 kernel: acpiphp: Slot [3] registered Jun 25 18:51:39.058726 kernel: acpiphp: Slot [4] registered Jun 25 18:51:39.058737 kernel: acpiphp: Slot [5] registered Jun 25 18:51:39.058746 kernel: acpiphp: Slot [6] registered Jun 25 18:51:39.058756 kernel: acpiphp: Slot [7] registered Jun 25 18:51:39.058766 kernel: acpiphp: Slot [8] registered Jun 25 18:51:39.058776 kernel: acpiphp: Slot [9] registered Jun 25 18:51:39.058789 kernel: acpiphp: Slot [10] registered Jun 25 18:51:39.058799 kernel: acpiphp: Slot [11] registered Jun 25 18:51:39.058809 kernel: acpiphp: Slot [12] registered Jun 25 18:51:39.058819 kernel: acpiphp: Slot [13] registered Jun 25 18:51:39.058828 kernel: acpiphp: Slot [14] registered Jun 25 18:51:39.058838 kernel: acpiphp: Slot [15] registered Jun 25 18:51:39.058848 kernel: acpiphp: Slot [16] registered Jun 25 18:51:39.058858 kernel: acpiphp: Slot [17] registered Jun 25 18:51:39.058867 kernel: acpiphp: Slot [18] registered Jun 25 18:51:39.058877 kernel: acpiphp: Slot [19] registered Jun 25 18:51:39.058889 kernel: acpiphp: Slot [20] registered Jun 25 18:51:39.058899 kernel: acpiphp: Slot [21] registered Jun 25 18:51:39.058909 kernel: acpiphp: Slot [22] registered Jun 25 18:51:39.058918 kernel: acpiphp: Slot [23] registered Jun 25 18:51:39.058966 kernel: acpiphp: Slot [24] registered Jun 25 18:51:39.058976 kernel: acpiphp: Slot [25] registered Jun 25 18:51:39.058986 kernel: acpiphp: Slot [26] registered Jun 25 18:51:39.058996 kernel: acpiphp: Slot [27] registered Jun 25 18:51:39.059006 kernel: acpiphp: Slot [28] registered Jun 25 18:51:39.059019 kernel: acpiphp: Slot [29] registered Jun 25 18:51:39.059028 kernel: acpiphp: Slot [30] registered Jun 25 18:51:39.059038 kernel: acpiphp: Slot [31] registered Jun 25 18:51:39.059048 kernel: PCI host bridge to bus 0000:00 Jun 25 18:51:39.059152 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jun 25 18:51:39.059241 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff 
window] Jun 25 18:51:39.059329 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jun 25 18:51:39.059422 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jun 25 18:51:39.059514 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Jun 25 18:51:39.059599 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jun 25 18:51:39.059721 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jun 25 18:51:39.059829 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jun 25 18:51:39.061020 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jun 25 18:51:39.061217 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] Jun 25 18:51:39.061336 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jun 25 18:51:39.061438 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jun 25 18:51:39.061540 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jun 25 18:51:39.061644 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jun 25 18:51:39.061758 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jun 25 18:51:39.061861 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jun 25 18:51:39.062000 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jun 25 18:51:39.062124 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Jun 25 18:51:39.062226 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Jun 25 18:51:39.062379 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Jun 25 18:51:39.062494 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] Jun 25 18:51:39.062602 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] Jun 25 18:51:39.062709 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jun 25 18:51:39.062831 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jun 25 18:51:39.062999 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] Jun 25 18:51:39.063111 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] Jun 25 18:51:39.063221 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Jun 25 18:51:39.063327 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] Jun 25 18:51:39.063448 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Jun 25 18:51:39.063561 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Jun 25 18:51:39.063670 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] Jun 25 18:51:39.063784 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Jun 25 18:51:39.063901 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 Jun 25 18:51:39.064051 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] Jun 25 18:51:39.064162 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Jun 25 18:51:39.064280 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 Jun 25 18:51:39.064389 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] Jun 25 18:51:39.064502 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Jun 25 18:51:39.064519 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jun 25 18:51:39.064532 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jun 25 18:51:39.064544 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jun 25 18:51:39.064556 kernel: ACPI: PCI: Interrupt link LNKD 
configured for IRQ 11 Jun 25 18:51:39.064568 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jun 25 18:51:39.064579 kernel: iommu: Default domain type: Translated Jun 25 18:51:39.064591 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 25 18:51:39.064603 kernel: PCI: Using ACPI for IRQ routing Jun 25 18:51:39.064618 kernel: PCI: pci_cache_line_size set to 64 bytes Jun 25 18:51:39.064629 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jun 25 18:51:39.064641 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] Jun 25 18:51:39.064748 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jun 25 18:51:39.064854 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jun 25 18:51:39.065014 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jun 25 18:51:39.065033 kernel: vgaarb: loaded Jun 25 18:51:39.065045 kernel: clocksource: Switched to clocksource kvm-clock Jun 25 18:51:39.065087 kernel: VFS: Disk quotas dquot_6.6.0 Jun 25 18:51:39.065104 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 25 18:51:39.065116 kernel: pnp: PnP ACPI init Jun 25 18:51:39.065231 kernel: pnp 00:03: [dma 2] Jun 25 18:51:39.065254 kernel: pnp: PnP ACPI: found 5 devices Jun 25 18:51:39.065266 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jun 25 18:51:39.065278 kernel: NET: Registered PF_INET protocol family Jun 25 18:51:39.065290 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 25 18:51:39.065302 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jun 25 18:51:39.065318 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 25 18:51:39.065330 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 25 18:51:39.065342 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jun 25 18:51:39.065353 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jun 25 18:51:39.065365 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 25 18:51:39.065376 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 25 18:51:39.065388 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 25 18:51:39.065399 kernel: NET: Registered PF_XDP protocol family Jun 25 18:51:39.065506 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jun 25 18:51:39.065609 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jun 25 18:51:39.065723 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jun 25 18:51:39.065834 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jun 25 18:51:39.067860 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Jun 25 18:51:39.068024 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jun 25 18:51:39.068128 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jun 25 18:51:39.068143 kernel: PCI: CLS 0 bytes, default 64 Jun 25 18:51:39.068154 kernel: Initialise system trusted keyrings Jun 25 18:51:39.068170 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jun 25 18:51:39.068181 kernel: Key type asymmetric registered Jun 25 18:51:39.068191 kernel: Asymmetric key parser 'x509' registered Jun 25 18:51:39.068201 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jun 25 18:51:39.068211 kernel: io scheduler mq-deadline registered 
Jun 25 18:51:39.068222 kernel: io scheduler kyber registered Jun 25 18:51:39.068232 kernel: io scheduler bfq registered Jun 25 18:51:39.068243 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 25 18:51:39.068255 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jun 25 18:51:39.068267 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jun 25 18:51:39.068278 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jun 25 18:51:39.068288 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jun 25 18:51:39.068298 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 25 18:51:39.068309 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 25 18:51:39.068319 kernel: random: crng init done Jun 25 18:51:39.068329 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jun 25 18:51:39.068340 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jun 25 18:51:39.068350 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jun 25 18:51:39.068465 kernel: rtc_cmos 00:04: RTC can wake from S4 Jun 25 18:51:39.068483 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jun 25 18:51:39.068574 kernel: rtc_cmos 00:04: registered as rtc0 Jun 25 18:51:39.068663 kernel: rtc_cmos 00:04: setting system clock to 2024-06-25T18:51:38 UTC (1719341498) Jun 25 18:51:39.068749 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Jun 25 18:51:39.068764 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jun 25 18:51:39.068775 kernel: NET: Registered PF_INET6 protocol family Jun 25 18:51:39.068785 kernel: Segment Routing with IPv6 Jun 25 18:51:39.068800 kernel: In-situ OAM (IOAM) with IPv6 Jun 25 18:51:39.068810 kernel: NET: Registered PF_PACKET protocol family Jun 25 18:51:39.068820 kernel: Key type dns_resolver registered Jun 25 18:51:39.068830 kernel: IPI shorthand broadcast: enabled Jun 25 18:51:39.068841 kernel: sched_clock: Marking stable (1004009178, 131244016)->(1138162013, -2908819) Jun 25 18:51:39.068851 kernel: registered taskstats version 1 Jun 25 18:51:39.068861 kernel: Loading compiled-in X.509 certificates Jun 25 18:51:39.068872 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.35-flatcar: 60204e9db5f484c670a1c92aec37e9a0c4d3ae90' Jun 25 18:51:39.068882 kernel: Key type .fscrypt registered Jun 25 18:51:39.068894 kernel: Key type fscrypt-provisioning registered Jun 25 18:51:39.068904 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jun 25 18:51:39.068914 kernel: ima: Allocated hash algorithm: sha1 Jun 25 18:51:39.068924 kernel: ima: No architecture policies found Jun 25 18:51:39.069488 kernel: clk: Disabling unused clocks Jun 25 18:51:39.069498 kernel: Freeing unused kernel image (initmem) memory: 49384K Jun 25 18:51:39.069508 kernel: Write protecting the kernel read-only data: 36864k Jun 25 18:51:39.069517 kernel: Freeing unused kernel image (rodata/data gap) memory: 1940K Jun 25 18:51:39.069530 kernel: Run /init as init process Jun 25 18:51:39.069540 kernel: with arguments: Jun 25 18:51:39.069549 kernel: /init Jun 25 18:51:39.069558 kernel: with environment: Jun 25 18:51:39.069567 kernel: HOME=/ Jun 25 18:51:39.069576 kernel: TERM=linux Jun 25 18:51:39.069586 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 25 18:51:39.069600 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jun 25 18:51:39.069615 systemd[1]: Detected virtualization kvm. Jun 25 18:51:39.069625 systemd[1]: Detected architecture x86-64. Jun 25 18:51:39.069635 systemd[1]: Running in initrd. Jun 25 18:51:39.069645 systemd[1]: No hostname configured, using default hostname. Jun 25 18:51:39.069655 systemd[1]: Hostname set to . Jun 25 18:51:39.069665 systemd[1]: Initializing machine ID from VM UUID. Jun 25 18:51:39.069675 systemd[1]: Queued start job for default target initrd.target. Jun 25 18:51:39.069685 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:51:39.069698 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:51:39.069709 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 25 18:51:39.069720 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 25 18:51:39.069730 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 25 18:51:39.069740 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 25 18:51:39.069753 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 25 18:51:39.069764 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 25 18:51:39.069776 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:51:39.069786 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:51:39.069796 systemd[1]: Reached target paths.target - Path Units. Jun 25 18:51:39.069807 systemd[1]: Reached target slices.target - Slice Units. Jun 25 18:51:39.069826 systemd[1]: Reached target swap.target - Swaps. Jun 25 18:51:39.069838 systemd[1]: Reached target timers.target - Timer Units. Jun 25 18:51:39.069850 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 18:51:39.069861 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 18:51:39.069871 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 25 18:51:39.069882 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Jun 25 18:51:39.069893 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:51:39.069903 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 18:51:39.069914 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:51:39.069924 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 18:51:39.069948 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 25 18:51:39.069962 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 18:51:39.069972 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 25 18:51:39.069982 systemd[1]: Starting systemd-fsck-usr.service... Jun 25 18:51:39.069993 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 18:51:39.070003 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 18:51:39.070014 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:51:39.070047 systemd-journald[184]: Collecting audit messages is disabled. Jun 25 18:51:39.070076 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 25 18:51:39.070087 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:51:39.070097 systemd-journald[184]: Journal started Jun 25 18:51:39.070123 systemd-journald[184]: Runtime Journal (/run/log/journal/e012d2cc19ed4ee2b1c05abdee437ec3) is 4.9M, max 39.3M, 34.4M free. Jun 25 18:51:39.072983 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 18:51:39.074493 systemd[1]: Finished systemd-fsck-usr.service. Jun 25 18:51:39.074980 systemd-modules-load[185]: Inserted module 'overlay' Jun 25 18:51:39.083248 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 25 18:51:39.112971 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 25 18:51:39.116427 systemd-modules-load[185]: Inserted module 'br_netfilter' Jun 25 18:51:39.129795 kernel: Bridge firewalling registered Jun 25 18:51:39.134243 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 18:51:39.138522 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 18:51:39.141037 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:51:39.142557 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 18:51:39.157174 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:51:39.158733 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:51:39.164214 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 18:51:39.166155 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:51:39.175699 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:51:39.180775 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 25 18:51:39.188999 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:51:39.193212 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jun 25 18:51:39.197886 dracut-cmdline[215]: dracut-dracut-053 Jun 25 18:51:39.201578 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8 Jun 25 18:51:39.203741 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 18:51:39.237768 systemd-resolved[225]: Positive Trust Anchors: Jun 25 18:51:39.242092 systemd-resolved[225]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 18:51:39.242136 systemd-resolved[225]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jun 25 18:51:39.245398 systemd-resolved[225]: Defaulting to hostname 'linux'. Jun 25 18:51:39.246662 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 18:51:39.247388 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:51:39.287027 kernel: SCSI subsystem initialized Jun 25 18:51:39.299023 kernel: Loading iSCSI transport class v2.0-870. Jun 25 18:51:39.313991 kernel: iscsi: registered transport (tcp) Jun 25 18:51:39.343197 kernel: iscsi: registered transport (qla4xxx) Jun 25 18:51:39.343316 kernel: QLogic iSCSI HBA Driver Jun 25 18:51:39.406779 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 25 18:51:39.413316 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 25 18:51:39.472324 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 25 18:51:39.472649 kernel: device-mapper: uevent: version 1.0.3 Jun 25 18:51:39.473456 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jun 25 18:51:39.525050 kernel: raid6: sse2x4 gen() 12493 MB/s Jun 25 18:51:39.542051 kernel: raid6: sse2x2 gen() 14304 MB/s Jun 25 18:51:39.559294 kernel: raid6: sse2x1 gen() 9499 MB/s Jun 25 18:51:39.559361 kernel: raid6: using algorithm sse2x2 gen() 14304 MB/s Jun 25 18:51:39.577205 kernel: raid6: .... xor() 8890 MB/s, rmw enabled Jun 25 18:51:39.577272 kernel: raid6: using ssse3x2 recovery algorithm Jun 25 18:51:39.606064 kernel: xor: measuring software checksum speed Jun 25 18:51:39.606166 kernel: prefetch64-sse : 18634 MB/sec Jun 25 18:51:39.609080 kernel: generic_sse : 15771 MB/sec Jun 25 18:51:39.609140 kernel: xor: using function: prefetch64-sse (18634 MB/sec) Jun 25 18:51:39.813246 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 25 18:51:39.831506 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 25 18:51:39.838097 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jun 25 18:51:39.889894 systemd-udevd[403]: Using default interface naming scheme 'v255'. Jun 25 18:51:39.901104 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:51:39.914239 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 25 18:51:39.946114 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation Jun 25 18:51:40.015109 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 18:51:40.024147 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 18:51:40.102721 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:51:40.114030 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 25 18:51:40.160506 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 25 18:51:40.165528 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 18:51:40.168160 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:51:40.170024 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 18:51:40.177302 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 25 18:51:40.191975 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 25 18:51:40.210012 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues Jun 25 18:51:40.233078 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB) Jun 25 18:51:40.233214 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 25 18:51:40.233232 kernel: GPT:17805311 != 41943039 Jun 25 18:51:40.233246 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 25 18:51:40.233258 kernel: GPT:17805311 != 41943039 Jun 25 18:51:40.233269 kernel: GPT: Use GNU Parted to correct GPT errors. Jun 25 18:51:40.233281 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 18:51:40.217116 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 18:51:40.217319 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:51:40.221948 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:51:40.222497 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:51:40.222640 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:51:40.223843 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:51:40.235895 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:51:40.263010 kernel: libata version 3.00 loaded. Jun 25 18:51:40.272201 kernel: BTRFS: device fsid 329ce27e-ea89-47b5-8f8b-f762c8412eb0 devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (454) Jun 25 18:51:40.275973 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (461) Jun 25 18:51:40.283973 kernel: ata_piix 0000:00:01.1: version 2.13 Jun 25 18:51:40.297747 kernel: scsi host0: ata_piix Jun 25 18:51:40.297895 kernel: scsi host1: ata_piix Jun 25 18:51:40.298037 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Jun 25 18:51:40.298053 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Jun 25 18:51:40.301616 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Jun 25 18:51:40.332307 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:51:40.343789 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jun 25 18:51:40.349446 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 25 18:51:40.354011 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jun 25 18:51:40.354654 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jun 25 18:51:40.363307 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 25 18:51:40.367456 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:51:40.396353 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:51:40.800075 disk-uuid[496]: Primary Header is updated. Jun 25 18:51:40.800075 disk-uuid[496]: Secondary Entries is updated. Jun 25 18:51:40.800075 disk-uuid[496]: Secondary Header is updated. Jun 25 18:51:40.975038 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 18:51:41.010011 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 18:51:42.062354 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 18:51:42.064178 disk-uuid[506]: The operation has completed successfully. Jun 25 18:51:42.131965 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 25 18:51:42.132261 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 25 18:51:42.160079 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 25 18:51:42.179395 sh[520]: Success Jun 25 18:51:42.203981 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Jun 25 18:51:42.290876 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 25 18:51:42.293109 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 25 18:51:42.295991 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 25 18:51:42.323973 kernel: BTRFS info (device dm-0): first mount of filesystem 329ce27e-ea89-47b5-8f8b-f762c8412eb0 Jun 25 18:51:42.324037 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:51:42.329244 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 25 18:51:42.331633 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 25 18:51:42.331661 kernel: BTRFS info (device dm-0): using free space tree Jun 25 18:51:42.343657 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 25 18:51:42.344692 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 25 18:51:42.349128 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 25 18:51:42.351807 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jun 25 18:51:42.360960 kernel: BTRFS info (device vda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:51:42.361022 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:51:42.362039 kernel: BTRFS info (device vda6): using free space tree Jun 25 18:51:42.365948 kernel: BTRFS info (device vda6): auto enabling async discard Jun 25 18:51:42.377580 systemd[1]: mnt-oem.mount: Deactivated successfully. Jun 25 18:51:42.378635 kernel: BTRFS info (device vda6): last unmount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:51:42.394154 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 25 18:51:42.402348 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 25 18:51:42.474628 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 18:51:42.483551 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 18:51:42.520712 systemd-networkd[703]: lo: Link UP Jun 25 18:51:42.520721 systemd-networkd[703]: lo: Gained carrier Jun 25 18:51:42.522036 systemd-networkd[703]: Enumeration completed Jun 25 18:51:42.522118 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 18:51:42.523103 systemd[1]: Reached target network.target - Network. Jun 25 18:51:42.523670 systemd-networkd[703]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:51:42.523674 systemd-networkd[703]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 18:51:42.524807 systemd-networkd[703]: eth0: Link UP Jun 25 18:51:42.524811 systemd-networkd[703]: eth0: Gained carrier Jun 25 18:51:42.524818 systemd-networkd[703]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:51:42.536968 systemd-networkd[703]: eth0: DHCPv4 address 172.24.4.127/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jun 25 18:51:42.569645 ignition[622]: Ignition 2.19.0 Jun 25 18:51:42.569664 ignition[622]: Stage: fetch-offline Jun 25 18:51:42.569732 ignition[622]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:51:42.569745 ignition[622]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jun 25 18:51:42.571678 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 18:51:42.569869 ignition[622]: parsed url from cmdline: "" Jun 25 18:51:42.569873 ignition[622]: no config URL provided Jun 25 18:51:42.569880 ignition[622]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 18:51:42.569889 ignition[622]: no config at "/usr/lib/ignition/user.ign" Jun 25 18:51:42.569895 ignition[622]: failed to fetch config: resource requires networking Jun 25 18:51:42.570158 ignition[622]: Ignition finished successfully Jun 25 18:51:42.578168 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jun 25 18:51:42.597319 ignition[712]: Ignition 2.19.0 Jun 25 18:51:42.597334 ignition[712]: Stage: fetch Jun 25 18:51:42.597555 ignition[712]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:51:42.597569 ignition[712]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jun 25 18:51:42.597682 ignition[712]: parsed url from cmdline: "" Jun 25 18:51:42.597686 ignition[712]: no config URL provided Jun 25 18:51:42.597692 ignition[712]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 18:51:42.597701 ignition[712]: no config at "/usr/lib/ignition/user.ign" Jun 25 18:51:42.597852 ignition[712]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jun 25 18:51:42.597965 ignition[712]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jun 25 18:51:42.597997 ignition[712]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jun 25 18:51:42.863755 ignition[712]: GET result: OK Jun 25 18:51:42.863857 ignition[712]: parsing config with SHA512: 26d8c43036029ca8a13d2db000a620d98a8d8fa50075096d109349482078804fb35dca63943607e1f0702a24039f34c2c91a0f25688cb19aa8050c68af81f176 Jun 25 18:51:42.870273 unknown[712]: fetched base config from "system" Jun 25 18:51:42.870287 unknown[712]: fetched base config from "system" Jun 25 18:51:42.871550 ignition[712]: fetch: fetch complete Jun 25 18:51:42.870293 unknown[712]: fetched user config from "openstack" Jun 25 18:51:42.871565 ignition[712]: fetch: fetch passed Jun 25 18:51:42.875735 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 25 18:51:42.874079 ignition[712]: Ignition finished successfully Jun 25 18:51:42.887189 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 25 18:51:42.915865 ignition[719]: Ignition 2.19.0 Jun 25 18:51:42.915893 ignition[719]: Stage: kargs Jun 25 18:51:42.916363 ignition[719]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:51:42.916390 ignition[719]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jun 25 18:51:42.918783 ignition[719]: kargs: kargs passed Jun 25 18:51:42.918898 ignition[719]: Ignition finished successfully Jun 25 18:51:42.920205 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 25 18:51:42.926179 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 25 18:51:42.967024 ignition[726]: Ignition 2.19.0 Jun 25 18:51:42.967049 ignition[726]: Stage: disks Jun 25 18:51:42.967484 ignition[726]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:51:42.967511 ignition[726]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jun 25 18:51:42.974017 ignition[726]: disks: disks passed Jun 25 18:51:42.974165 ignition[726]: Ignition finished successfully Jun 25 18:51:42.976130 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 25 18:51:42.978702 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 25 18:51:42.980678 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 18:51:42.983614 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 18:51:42.986436 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 18:51:42.988884 systemd[1]: Reached target basic.target - Basic System. Jun 25 18:51:43.001179 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jun 25 18:51:43.036555 systemd-fsck[735]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jun 25 18:51:43.049197 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 25 18:51:43.059213 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 25 18:51:43.229988 kernel: EXT4-fs (vda9): mounted filesystem ed685e11-963b-427a-9b96-a4691c40e909 r/w with ordered data mode. Quota mode: none. Jun 25 18:51:43.230760 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 25 18:51:43.231998 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 25 18:51:43.239015 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 18:51:43.241069 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 25 18:51:43.241724 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jun 25 18:51:43.245708 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jun 25 18:51:43.246295 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 25 18:51:43.246334 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 18:51:43.260507 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 25 18:51:43.267958 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (743) Jun 25 18:51:43.277200 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 25 18:51:43.289807 kernel: BTRFS info (device vda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:51:43.289836 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:51:43.289849 kernel: BTRFS info (device vda6): using free space tree Jun 25 18:51:43.320961 kernel: BTRFS info (device vda6): auto enabling async discard Jun 25 18:51:43.334284 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 18:51:43.401427 initrd-setup-root[772]: cut: /sysroot/etc/passwd: No such file or directory Jun 25 18:51:43.411972 initrd-setup-root[779]: cut: /sysroot/etc/group: No such file or directory Jun 25 18:51:43.419024 initrd-setup-root[787]: cut: /sysroot/etc/shadow: No such file or directory Jun 25 18:51:43.426251 initrd-setup-root[794]: cut: /sysroot/etc/gshadow: No such file or directory Jun 25 18:51:43.535241 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 25 18:51:43.547117 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 25 18:51:43.552626 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 25 18:51:43.560990 kernel: BTRFS info (device vda6): last unmount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:51:43.561644 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 25 18:51:43.597239 ignition[861]: INFO : Ignition 2.19.0 Jun 25 18:51:43.597239 ignition[861]: INFO : Stage: mount Jun 25 18:51:43.597239 ignition[861]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:51:43.597239 ignition[861]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jun 25 18:51:43.601426 ignition[861]: INFO : mount: mount passed Jun 25 18:51:43.601426 ignition[861]: INFO : Ignition finished successfully Jun 25 18:51:43.602252 systemd[1]: Finished ignition-mount.service - Ignition (mount). 
Jun 25 18:51:43.606449 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 25 18:51:44.315519 systemd-networkd[703]: eth0: Gained IPv6LL Jun 25 18:51:50.499417 coreos-metadata[745]: Jun 25 18:51:50.499 WARN failed to locate config-drive, using the metadata service API instead Jun 25 18:51:50.539718 coreos-metadata[745]: Jun 25 18:51:50.539 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jun 25 18:51:50.557002 coreos-metadata[745]: Jun 25 18:51:50.556 INFO Fetch successful Jun 25 18:51:50.558765 coreos-metadata[745]: Jun 25 18:51:50.558 INFO wrote hostname ci-4012-0-0-8-5dd8cf1e6e.novalocal to /sysroot/etc/hostname Jun 25 18:51:50.564583 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jun 25 18:51:50.564975 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jun 25 18:51:50.578240 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 25 18:51:50.620476 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 18:51:50.698019 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (880) Jun 25 18:51:50.714015 kernel: BTRFS info (device vda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:51:50.714120 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:51:50.716035 kernel: BTRFS info (device vda6): using free space tree Jun 25 18:51:50.861982 kernel: BTRFS info (device vda6): auto enabling async discard Jun 25 18:51:50.924843 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 18:51:50.966663 ignition[898]: INFO : Ignition 2.19.0 Jun 25 18:51:50.966663 ignition[898]: INFO : Stage: files Jun 25 18:51:50.969587 ignition[898]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:51:50.969587 ignition[898]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jun 25 18:51:50.969587 ignition[898]: DEBUG : files: compiled without relabeling support, skipping Jun 25 18:51:50.992430 ignition[898]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 25 18:51:50.992430 ignition[898]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 25 18:51:51.058142 ignition[898]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 25 18:51:51.060070 ignition[898]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 25 18:51:51.060070 ignition[898]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 25 18:51:51.059507 unknown[898]: wrote ssh authorized keys file for user: core Jun 25 18:51:51.104925 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 18:51:51.107606 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jun 25 18:51:51.769066 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 25 18:51:52.079865 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 18:51:52.079865 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 25 18:51:52.084654 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(4): 
GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jun 25 18:51:52.606575 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jun 25 18:51:53.037209 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 25 18:51:53.039573 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jun 25 18:51:53.039573 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jun 25 18:51:53.039573 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 25 18:51:53.039573 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 25 18:51:53.039573 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 18:51:53.039573 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 18:51:53.039573 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 18:51:53.039573 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 18:51:53.039573 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 18:51:53.039573 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 18:51:53.039573 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jun 25 18:51:53.039573 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jun 25 18:51:53.039573 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jun 25 18:51:53.039573 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jun 25 18:51:53.365856 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jun 25 18:51:54.967418 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jun 25 18:51:54.968895 ignition[898]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jun 25 18:51:54.970617 ignition[898]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 18:51:54.970617 ignition[898]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 18:51:54.970617 ignition[898]: INFO : files: op(c): [finished] 
processing unit "prepare-helm.service" Jun 25 18:51:54.970617 ignition[898]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jun 25 18:51:54.980731 ignition[898]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jun 25 18:51:54.980731 ignition[898]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 25 18:51:54.980731 ignition[898]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 25 18:51:54.980731 ignition[898]: INFO : files: files passed Jun 25 18:51:54.980731 ignition[898]: INFO : Ignition finished successfully Jun 25 18:51:54.973564 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 25 18:51:54.986072 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 25 18:51:54.989088 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 25 18:51:54.994010 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 25 18:51:54.994117 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 25 18:51:55.011411 initrd-setup-root-after-ignition[931]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:51:55.013896 initrd-setup-root-after-ignition[927]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:51:55.013896 initrd-setup-root-after-ignition[927]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:51:55.015005 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 18:51:55.017278 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 25 18:51:55.025129 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 25 18:51:55.058145 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 25 18:51:55.058398 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 25 18:51:55.060653 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 25 18:51:55.062720 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 25 18:51:55.064647 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 25 18:51:55.073208 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 25 18:51:55.096273 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 18:51:55.103077 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 25 18:51:55.114155 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:51:55.115536 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:51:55.116179 systemd[1]: Stopped target timers.target - Timer Units. Jun 25 18:51:55.117324 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 25 18:51:55.117451 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 18:51:55.118692 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 25 18:51:55.119424 systemd[1]: Stopped target basic.target - Basic System. Jun 25 18:51:55.120516 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
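The files stage above is driven by an Ignition config delivered through the OpenStack metadata service; the config itself never appears in the log, only its effects (the helm and cilium downloads, the kubernetes sysext link, the enabled prepare-helm.service, the core user's SSH keys). As a rough illustration only, the following Python sketch prints the general shape such a config could take, using Ignition v3-style field names; the spec version, SSH key, and unit body are placeholders, and the URLs are the ones the log shows being fetched.

import json

# Hypothetical sketch of an Ignition v3-style config that would produce file
# writes and unit enablement like the ones logged above. The URLs are the ones
# the log shows being fetched; spec version, SSH key and unit body are placeholders.
config = {
    "ignition": {"version": "3.3.0"},  # assumed spec version, not taken from the log
    "passwd": {
        "users": [
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"]}
        ]
    },
    "storage": {
        "files": [
            {
                "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"},
            },
            {
                "path": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw",
                "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw"},
            },
        ],
        "links": [
            {
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw",
            }
        ],
    },
    "systemd": {
        "units": [
            {
                "name": "prepare-helm.service",
                "enabled": True,
                "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n# ... placeholder body ...",
            }
        ]
    },
}

print(json.dumps(config, indent=2))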
Jun 25 18:51:55.121526 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 18:51:55.122488 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 25 18:51:55.123621 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 25 18:51:55.124729 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 18:51:55.125953 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 25 18:51:55.127145 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 25 18:51:55.128309 systemd[1]: Stopped target swap.target - Swaps. Jun 25 18:51:55.129343 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 25 18:51:55.129462 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 25 18:51:55.130696 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:51:55.131408 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:51:55.132390 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 25 18:51:55.134004 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:51:55.134881 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 25 18:51:55.135057 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 25 18:51:55.136354 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 25 18:51:55.136477 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 18:51:55.137123 systemd[1]: ignition-files.service: Deactivated successfully. Jun 25 18:51:55.137231 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 25 18:51:55.147448 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 25 18:51:55.150172 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 25 18:51:55.150733 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 25 18:51:55.150908 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:51:55.155087 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 25 18:51:55.155266 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 18:51:55.163575 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 25 18:51:55.163671 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 25 18:51:55.181959 ignition[951]: INFO : Ignition 2.19.0 Jun 25 18:51:55.181959 ignition[951]: INFO : Stage: umount Jun 25 18:51:55.181959 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:51:55.181959 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jun 25 18:51:55.187896 ignition[951]: INFO : umount: umount passed Jun 25 18:51:55.187896 ignition[951]: INFO : Ignition finished successfully Jun 25 18:51:55.185215 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 25 18:51:55.185319 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 25 18:51:55.189717 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 25 18:51:55.190526 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 25 18:51:55.190601 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 25 18:51:55.192310 systemd[1]: ignition-kargs.service: Deactivated successfully. 
Jun 25 18:51:55.192352 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 25 18:51:55.192829 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 25 18:51:55.192868 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 25 18:51:55.193367 systemd[1]: Stopped target network.target - Network. Jun 25 18:51:55.193785 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 25 18:51:55.193827 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 18:51:55.194371 systemd[1]: Stopped target paths.target - Path Units. Jun 25 18:51:55.195328 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 25 18:51:55.199971 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:51:55.200828 systemd[1]: Stopped target slices.target - Slice Units. Jun 25 18:51:55.201899 systemd[1]: Stopped target sockets.target - Socket Units. Jun 25 18:51:55.203148 systemd[1]: iscsid.socket: Deactivated successfully. Jun 25 18:51:55.203191 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 18:51:55.204081 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 25 18:51:55.204114 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 18:51:55.205030 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 25 18:51:55.205070 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 25 18:51:55.206001 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 25 18:51:55.206040 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 25 18:51:55.207192 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 25 18:51:55.208305 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 25 18:51:55.209485 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 25 18:51:55.209567 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 25 18:51:55.210697 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 25 18:51:55.210764 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 25 18:51:55.210980 systemd-networkd[703]: eth0: DHCPv6 lease lost Jun 25 18:51:55.213588 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 25 18:51:55.213676 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 25 18:51:55.215241 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 25 18:51:55.215278 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:51:55.220070 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 25 18:51:55.222048 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 25 18:51:55.222122 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 18:51:55.222786 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:51:55.225145 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 25 18:51:55.225242 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 25 18:51:55.237309 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 25 18:51:55.238235 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:51:55.242718 systemd[1]: network-cleanup.service: Deactivated successfully. 
Jun 25 18:51:55.242822 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 25 18:51:55.245246 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 25 18:51:55.245301 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 25 18:51:55.246491 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 25 18:51:55.246524 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:51:55.247641 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 25 18:51:55.247685 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 25 18:51:55.249255 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 25 18:51:55.249295 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 25 18:51:55.250239 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 18:51:55.250281 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:51:55.260107 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 25 18:51:55.261295 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 18:51:55.261350 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:51:55.262492 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 25 18:51:55.262537 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 25 18:51:55.264732 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 25 18:51:55.264786 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:51:55.265642 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jun 25 18:51:55.265686 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 18:51:55.266818 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 25 18:51:55.266857 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:51:55.268043 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 25 18:51:55.268084 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:51:55.269277 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:51:55.269316 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:51:55.270872 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 25 18:51:55.270989 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 25 18:51:55.272492 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 25 18:51:55.279334 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 25 18:51:55.287564 systemd[1]: Switching root. Jun 25 18:51:55.322980 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). 
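The entries above show the initrd being torn down: Ignition's umount stage runs, the remaining initrd services and sockets are stopped, eth0's lease is dropped, and systemd switches root, at which point the initramfs journal daemon receives SIGTERM. Once the real root is up and the journal is flushed to disk, the same sequence can be read back programmatically. A sketch using the python-systemd bindings, which are assumed to be installed separately (they are not part of the log or the stock image):

# Read this boot's systemd messages back from the persisted journal.
# Assumes the python-systemd bindings ("python3-systemd") are installed.
from systemd import journal

reader = journal.Reader()
reader.this_boot()                              # limit to the current boot
reader.add_match(SYSLOG_IDENTIFIER="systemd")   # PID 1 messages
reader.seek_head()

for entry in reader:
    msg = entry.get("MESSAGE", "")
    if "initrd" in msg or "Switching root" in msg:
        print(entry["__REALTIME_TIMESTAMP"], msg)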
Jun 25 18:51:55.323086 systemd-journald[184]: Journal stopped Jun 25 18:51:57.697279 kernel: SELinux: policy capability network_peer_controls=1 Jun 25 18:51:57.697332 kernel: SELinux: policy capability open_perms=1 Jun 25 18:51:57.697348 kernel: SELinux: policy capability extended_socket_class=1 Jun 25 18:51:57.697360 kernel: SELinux: policy capability always_check_network=0 Jun 25 18:51:57.697374 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 25 18:51:57.697389 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 25 18:51:57.697400 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 25 18:51:57.697412 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 25 18:51:57.697424 kernel: audit: type=1403 audit(1719341516.444:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 25 18:51:57.697437 systemd[1]: Successfully loaded SELinux policy in 178.687ms. Jun 25 18:51:57.697458 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 26.629ms. Jun 25 18:51:57.697472 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jun 25 18:51:57.697484 systemd[1]: Detected virtualization kvm. Jun 25 18:51:57.697498 systemd[1]: Detected architecture x86-64. Jun 25 18:51:57.697511 systemd[1]: Detected first boot. Jun 25 18:51:57.697523 systemd[1]: Hostname set to . Jun 25 18:51:57.697535 systemd[1]: Initializing machine ID from VM UUID. Jun 25 18:51:57.697547 zram_generator::config[996]: No configuration found. Jun 25 18:51:57.697560 systemd[1]: Populated /etc with preset unit settings. Jun 25 18:51:57.697575 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 25 18:51:57.697587 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 25 18:51:57.697601 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 25 18:51:57.697614 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 25 18:51:57.697626 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 25 18:51:57.697638 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 25 18:51:57.697650 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 25 18:51:57.697662 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 25 18:51:57.697674 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 25 18:51:57.697688 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 25 18:51:57.697702 systemd[1]: Created slice user.slice - User and Session Slice. Jun 25 18:51:57.697714 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:51:57.697726 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:51:57.697738 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 25 18:51:57.697750 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
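"Initializing machine ID from VM UUID" above indicates that systemd seeded /etc/machine-id from the DMI product UUID exposed by the hypervisor on this first boot. A minimal sketch for inspecting both values afterwards; the sysfs path is the standard DMI location and usually requires root to read:

from pathlib import Path

# /etc/machine-id: 32 lowercase hex characters, written by systemd on first boot.
machine_id = Path("/etc/machine-id").read_text().strip()

# DMI product UUID exposed by the hypervisor (root-only on most systems).
product_uuid = Path("/sys/class/dmi/id/product_uuid").read_text().strip()

print("machine-id  :", machine_id)
print("product UUID:", product_uuid)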
Jun 25 18:51:57.697762 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 25 18:51:57.697774 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 25 18:51:57.697786 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jun 25 18:51:57.697798 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:51:57.697812 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 25 18:51:57.697824 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 25 18:51:57.697836 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 25 18:51:57.697848 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 25 18:51:57.697860 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:51:57.697872 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 18:51:57.697886 systemd[1]: Reached target slices.target - Slice Units. Jun 25 18:51:57.697898 systemd[1]: Reached target swap.target - Swaps. Jun 25 18:51:57.697911 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 25 18:51:57.697922 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 25 18:51:57.697956 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:51:57.697972 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 18:51:57.697984 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:51:57.697996 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 25 18:51:57.698008 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 25 18:51:57.698020 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 25 18:51:57.698035 systemd[1]: Mounting media.mount - External Media Directory... Jun 25 18:51:57.698048 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:51:57.698060 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 25 18:51:57.698072 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 25 18:51:57.698084 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 25 18:51:57.698097 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 25 18:51:57.698109 systemd[1]: Reached target machines.target - Containers. Jun 25 18:51:57.698121 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 25 18:51:57.698136 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:51:57.698148 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 18:51:57.698160 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 25 18:51:57.698172 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 18:51:57.698184 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 18:51:57.698196 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jun 25 18:51:57.698208 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 25 18:51:57.698221 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 18:51:57.698234 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 25 18:51:57.698248 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 25 18:51:57.698260 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 25 18:51:57.698272 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 25 18:51:57.698284 systemd[1]: Stopped systemd-fsck-usr.service. Jun 25 18:51:57.698309 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 18:51:57.698321 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 18:51:57.698333 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 25 18:51:57.698361 systemd-journald[1081]: Collecting audit messages is disabled. Jun 25 18:51:57.698390 systemd-journald[1081]: Journal started Jun 25 18:51:57.698413 systemd-journald[1081]: Runtime Journal (/run/log/journal/e012d2cc19ed4ee2b1c05abdee437ec3) is 4.9M, max 39.3M, 34.4M free. Jun 25 18:51:57.440206 systemd[1]: Queued start job for default target multi-user.target. Jun 25 18:51:57.462050 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jun 25 18:51:57.462432 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 25 18:51:57.701959 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 25 18:51:57.713249 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 18:51:57.716961 systemd[1]: verity-setup.service: Deactivated successfully. Jun 25 18:51:57.719357 systemd[1]: Stopped verity-setup.service. Jun 25 18:51:57.719386 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:51:57.729964 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 18:51:57.730206 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 25 18:51:57.731112 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 25 18:51:57.732097 systemd[1]: Mounted media.mount - External Media Directory. Jun 25 18:51:57.733628 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 25 18:51:57.734313 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 25 18:51:57.735626 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 25 18:51:57.737347 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:51:57.741781 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 25 18:51:57.742005 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 25 18:51:57.744138 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:51:57.745993 kernel: fuse: init (API version 7.39) Jun 25 18:51:57.750783 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:51:57.751646 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 18:51:57.751773 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jun 25 18:51:57.752526 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 18:51:57.754525 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 25 18:51:57.756023 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 25 18:51:57.756227 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 25 18:51:57.775541 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 25 18:51:57.791891 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 25 18:51:57.799081 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 25 18:51:57.803982 kernel: ACPI: bus type drm_connector registered Jun 25 18:51:57.807959 kernel: loop: module loaded Jun 25 18:51:57.812084 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:51:57.823168 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 25 18:51:57.826688 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 25 18:51:57.827497 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 18:51:57.827643 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 18:51:57.828378 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 18:51:57.828502 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:51:57.829267 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 25 18:51:57.830161 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:51:57.830890 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 25 18:51:57.831577 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 25 18:51:57.837739 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 25 18:51:57.837781 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 18:51:57.839512 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jun 25 18:51:57.844107 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 25 18:51:57.846551 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 25 18:51:57.847375 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:51:57.881627 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 25 18:51:57.883655 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 25 18:51:57.884306 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 18:51:57.886076 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jun 25 18:51:57.887054 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 18:51:57.898120 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 25 18:51:57.901241 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... 
Jun 25 18:51:57.903564 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 25 18:51:57.906583 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:51:57.911321 systemd-journald[1081]: Time spent on flushing to /var/log/journal/e012d2cc19ed4ee2b1c05abdee437ec3 is 20.062ms for 944 entries. Jun 25 18:51:57.911321 systemd-journald[1081]: System Journal (/var/log/journal/e012d2cc19ed4ee2b1c05abdee437ec3) is 8.0M, max 584.8M, 576.8M free. Jun 25 18:51:57.976984 systemd-journald[1081]: Received client request to flush runtime journal. Jun 25 18:51:57.977033 kernel: loop0: detected capacity change from 0 to 210664 Jun 25 18:51:57.977062 kernel: block loop0: the capability attribute has been deprecated. Jun 25 18:51:57.927515 udevadm[1136]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jun 25 18:51:57.955838 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 25 18:51:57.960470 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 25 18:51:57.965240 systemd-tmpfiles[1119]: ACLs are not supported, ignoring. Jun 25 18:51:57.965276 systemd-tmpfiles[1119]: ACLs are not supported, ignoring. Jun 25 18:51:57.977307 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jun 25 18:51:57.981211 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 25 18:51:57.993430 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 18:51:57.999065 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 25 18:51:58.034037 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 25 18:51:58.035472 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jun 25 18:51:58.060986 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 25 18:51:58.073274 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 25 18:51:58.082332 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 18:51:58.092956 kernel: loop1: detected capacity change from 0 to 8 Jun 25 18:51:58.107741 systemd-tmpfiles[1150]: ACLs are not supported, ignoring. Jun 25 18:51:58.107765 systemd-tmpfiles[1150]: ACLs are not supported, ignoring. Jun 25 18:51:58.115152 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:51:58.122966 kernel: loop2: detected capacity change from 0 to 80568 Jun 25 18:51:58.192985 kernel: loop3: detected capacity change from 0 to 139760 Jun 25 18:51:58.281960 kernel: loop4: detected capacity change from 0 to 210664 Jun 25 18:51:58.320960 kernel: loop5: detected capacity change from 0 to 8 Jun 25 18:51:58.324963 kernel: loop6: detected capacity change from 0 to 80568 Jun 25 18:51:58.389964 kernel: loop7: detected capacity change from 0 to 139760 Jun 25 18:51:58.446573 (sd-merge)[1157]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jun 25 18:51:58.448004 (sd-merge)[1157]: Merged extensions into '/usr'. Jun 25 18:51:58.457272 systemd[1]: Reloading requested from client PID 1135 ('systemd-sysext') (unit systemd-sysext.service)... Jun 25 18:51:58.457301 systemd[1]: Reloading... Jun 25 18:51:58.562957 zram_generator::config[1180]: No configuration found. 
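The (sd-merge) lines are systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes and oem-openstack extension images onto /usr, which is what triggers the daemon reload that follows; the kubernetes image is the one Ignition linked into /etc/extensions earlier. A sketch that lists candidate extension images; the search directories are assumed from the systemd-sysext documentation rather than taken from this log:

from pathlib import Path

# Directories systemd-sysext scans for extension images (assumed from upstream docs).
SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

for d in SEARCH_DIRS:
    p = Path(d)
    if not p.is_dir():
        continue
    for entry in sorted(p.iterdir()):
        # Show where symlinks such as kubernetes.raw actually point.
        target = entry.resolve() if entry.is_symlink() else entry
        print(f"{entry}  ->  {target}")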
Jun 25 18:51:58.811836 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:51:58.870148 systemd[1]: Reloading finished in 411 ms. Jun 25 18:51:58.893581 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 25 18:51:58.895112 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 25 18:51:58.904068 systemd[1]: Starting ensure-sysext.service... Jun 25 18:51:58.907074 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 18:51:58.912176 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:51:58.922754 systemd[1]: Reloading requested from client PID 1238 ('systemctl') (unit ensure-sysext.service)... Jun 25 18:51:58.922772 systemd[1]: Reloading... Jun 25 18:51:58.931004 ldconfig[1130]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 25 18:51:58.944771 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 25 18:51:58.945134 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 25 18:51:58.947882 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 25 18:51:58.949458 systemd-tmpfiles[1239]: ACLs are not supported, ignoring. Jun 25 18:51:58.949531 systemd-tmpfiles[1239]: ACLs are not supported, ignoring. Jun 25 18:51:58.959745 systemd-tmpfiles[1239]: Detected autofs mount point /boot during canonicalization of boot. Jun 25 18:51:58.959759 systemd-tmpfiles[1239]: Skipping /boot Jun 25 18:51:58.970662 systemd-udevd[1240]: Using default interface naming scheme 'v255'. Jun 25 18:51:58.971879 systemd-tmpfiles[1239]: Detected autofs mount point /boot during canonicalization of boot. Jun 25 18:51:58.971886 systemd-tmpfiles[1239]: Skipping /boot Jun 25 18:51:59.026153 zram_generator::config[1265]: No configuration found. 
Jun 25 18:51:59.079961 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1269) Jun 25 18:51:59.146144 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1276) Jun 25 18:51:59.168958 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jun 25 18:51:59.175977 kernel: ACPI: button: Power Button [PWRF] Jun 25 18:51:59.181000 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jun 25 18:51:59.208957 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jun 25 18:51:59.254517 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jun 25 18:51:59.254603 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jun 25 18:51:59.259161 kernel: Console: switching to colour dummy device 80x25 Jun 25 18:51:59.260491 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jun 25 18:51:59.260528 kernel: [drm] features: -context_init Jun 25 18:51:59.261984 kernel: [drm] number of scanouts: 1 Jun 25 18:51:59.262067 kernel: [drm] number of cap sets: 0 Jun 25 18:51:59.264950 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jun 25 18:51:59.272808 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jun 25 18:51:59.272899 kernel: Console: switching to colour frame buffer device 128x48 Jun 25 18:51:59.283901 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jun 25 18:51:59.284156 kernel: mousedev: PS/2 mouse device common for all mice Jun 25 18:51:59.310498 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:51:59.382268 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 25 18:51:59.382916 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 25 18:51:59.385016 systemd[1]: Reloading finished in 461 ms. Jun 25 18:51:59.399903 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:51:59.402152 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 25 18:51:59.408808 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:51:59.467620 systemd[1]: Finished ensure-sysext.service. Jun 25 18:51:59.473165 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 25 18:51:59.496844 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:51:59.502241 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 18:51:59.509229 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 25 18:51:59.511430 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:51:59.518270 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 25 18:51:59.521870 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 18:51:59.531047 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 18:51:59.539383 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jun 25 18:51:59.541477 lvm[1358]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 18:51:59.549483 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 18:51:59.551530 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:51:59.561710 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 25 18:51:59.567990 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 25 18:51:59.576225 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 18:51:59.581986 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 18:51:59.594009 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 25 18:51:59.599218 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 25 18:51:59.600500 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:51:59.604692 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:51:59.607904 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 25 18:51:59.620174 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:51:59.620350 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:51:59.621426 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 18:51:59.621576 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 18:51:59.623658 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 18:51:59.624068 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 18:51:59.626145 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 18:51:59.626339 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:51:59.628891 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 25 18:51:59.640144 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:51:59.650504 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 25 18:51:59.651249 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 18:51:59.651350 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 18:51:59.658986 augenrules[1389]: No rules Jun 25 18:51:59.660598 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 25 18:51:59.670120 lvm[1388]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 18:51:59.664378 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 18:51:59.684377 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 25 18:51:59.689863 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 25 18:51:59.703388 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Jun 25 18:51:59.714278 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 25 18:51:59.731653 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 25 18:51:59.742477 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 25 18:51:59.794877 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 25 18:51:59.797583 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 25 18:51:59.802371 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:51:59.837225 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 25 18:51:59.839610 systemd[1]: Reached target time-set.target - System Time Set. Jun 25 18:51:59.847016 systemd-networkd[1368]: lo: Link UP Jun 25 18:51:59.847236 systemd-networkd[1368]: lo: Gained carrier Jun 25 18:51:59.849625 systemd-networkd[1368]: Enumeration completed Jun 25 18:51:59.849644 systemd-timesyncd[1373]: No network connectivity, watching for changes. Jun 25 18:51:59.851064 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 18:51:59.851480 systemd-networkd[1368]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:51:59.851486 systemd-networkd[1368]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 18:51:59.852705 systemd-networkd[1368]: eth0: Link UP Jun 25 18:51:59.852774 systemd-networkd[1368]: eth0: Gained carrier Jun 25 18:51:59.852830 systemd-networkd[1368]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:51:59.859157 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 25 18:51:59.868522 systemd-resolved[1370]: Positive Trust Anchors: Jun 25 18:51:59.868825 systemd-resolved[1370]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 18:51:59.868916 systemd-resolved[1370]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jun 25 18:51:59.869020 systemd-networkd[1368]: eth0: DHCPv4 address 172.24.4.127/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jun 25 18:51:59.869876 systemd-timesyncd[1373]: Network configuration changed, trying to establish connection. Jun 25 18:51:59.878660 systemd-resolved[1370]: Using system hostname 'ci-4012-0-0-8-5dd8cf1e6e.novalocal'. Jun 25 18:51:59.880221 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 18:51:59.881672 systemd[1]: Reached target network.target - Network. Jun 25 18:51:59.882118 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:51:59.882540 systemd[1]: Reached target sysinit.target - System Initialization. 
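At this point networkd has matched eth0 against zz-default.network and taken a DHCPv4 lease of 172.24.4.127/24 with gateway 172.24.4.1, and resolved has adopted the hostname the metadata agent wrote earlier. A quick sanity check of that lease with Python's stdlib ipaddress module (values copied from the log):

import ipaddress

# Values taken from the DHCPv4 lease logged above.
iface = ipaddress.ip_interface("172.24.4.127/24")
gateway = ipaddress.ip_address("172.24.4.1")

print("network                  :", iface.network)                   # 172.24.4.0/24
print("usable host addresses    :", iface.network.num_addresses - 2)  # 254
print("gateway reachable on-link:", gateway in iface.network)         # True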
Jun 25 18:51:59.884118 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 25 18:51:59.886491 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 25 18:51:59.888781 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 25 18:51:59.891065 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 25 18:51:59.893780 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 25 18:51:59.895996 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 25 18:51:59.896105 systemd[1]: Reached target paths.target - Path Units. Jun 25 18:51:59.898151 systemd[1]: Reached target timers.target - Timer Units. Jun 25 18:51:59.907314 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 25 18:51:59.915323 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 25 18:51:59.926049 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 25 18:51:59.928668 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 25 18:51:59.931236 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 18:51:59.932754 systemd[1]: Reached target basic.target - Basic System. Jun 25 18:51:59.934797 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 25 18:51:59.934898 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 25 18:51:59.938261 systemd-timesyncd[1373]: Contacted time server 162.159.200.1:123 (3.flatcar.pool.ntp.org). Jun 25 18:51:59.938335 systemd-timesyncd[1373]: Initial clock synchronization to Tue 2024-06-25 18:51:59.588210 UTC. Jun 25 18:51:59.940030 systemd[1]: Starting containerd.service - containerd container runtime... Jun 25 18:51:59.944303 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 25 18:51:59.946821 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 25 18:51:59.957068 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 25 18:51:59.963625 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 25 18:51:59.964261 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 25 18:51:59.972267 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 25 18:51:59.974435 jq[1421]: false Jun 25 18:51:59.981347 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 25 18:51:59.986892 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 25 18:51:59.991510 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 25 18:52:00.001092 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 25 18:52:00.003565 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 25 18:52:00.010694 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
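timesyncd reached 162.159.200.1:123, resolved from the 3.flatcar.pool.ntp.org pool, and performed its initial clock synchronization. For reference, a minimal SNTP query against the same pool name; the packet layout (mode-3 client request, transmit timestamp at byte offset 40, 1900-to-1970 epoch offset of 2208988800 seconds) follows the standard NTP wire format rather than anything shown in the log:

import socket
import struct
import time

NTP_SERVER = "3.flatcar.pool.ntp.org"   # pool name from the log
NTP_EPOCH_OFFSET = 2208988800           # seconds between 1900-01-01 and 1970-01-01

# 48-byte request: LI=0, VN=3, Mode=3 (client) packed into the first byte.
packet = b"\x1b" + 47 * b"\0"

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.settimeout(5)
    sock.sendto(packet, (NTP_SERVER, 123))
    data, _ = sock.recvfrom(512)

# Transmit timestamp: 32-bit seconds field at bytes 40-43 of the response.
ntp_seconds = struct.unpack("!I", data[40:44])[0]
unix_seconds = ntp_seconds - NTP_EPOCH_OFFSET
print("server time:", time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(unix_seconds)))
print("local time :", time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime()))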
Jun 25 18:52:00.011901 systemd[1]: Starting update-engine.service - Update Engine... Jun 25 18:52:00.020056 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 25 18:52:00.025398 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 25 18:52:00.025583 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 25 18:52:00.032194 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 25 18:52:00.032383 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 25 18:52:00.047604 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 25 18:52:00.047225 dbus-daemon[1420]: [system] SELinux support is enabled Jun 25 18:52:00.059801 jq[1434]: true Jun 25 18:52:00.062900 (ntainerd)[1448]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 25 18:52:00.063393 systemd[1]: motdgen.service: Deactivated successfully. Jun 25 18:52:00.063610 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 25 18:52:00.072050 extend-filesystems[1424]: Found loop4 Jun 25 18:52:00.075803 extend-filesystems[1424]: Found loop5 Jun 25 18:52:00.075803 extend-filesystems[1424]: Found loop6 Jun 25 18:52:00.075803 extend-filesystems[1424]: Found loop7 Jun 25 18:52:00.075803 extend-filesystems[1424]: Found vda Jun 25 18:52:00.075803 extend-filesystems[1424]: Found vda1 Jun 25 18:52:00.075803 extend-filesystems[1424]: Found vda2 Jun 25 18:52:00.075803 extend-filesystems[1424]: Found vda3 Jun 25 18:52:00.075803 extend-filesystems[1424]: Found usr Jun 25 18:52:00.075803 extend-filesystems[1424]: Found vda4 Jun 25 18:52:00.075803 extend-filesystems[1424]: Found vda6 Jun 25 18:52:00.075803 extend-filesystems[1424]: Found vda7 Jun 25 18:52:00.075803 extend-filesystems[1424]: Found vda9 Jun 25 18:52:00.075803 extend-filesystems[1424]: Checking size of /dev/vda9 Jun 25 18:52:00.078318 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 25 18:52:00.118584 update_engine[1433]: I0625 18:52:00.094589 1433 main.cc:92] Flatcar Update Engine starting Jun 25 18:52:00.118584 update_engine[1433]: I0625 18:52:00.112138 1433 update_check_scheduler.cc:74] Next update check in 9m36s Jun 25 18:52:00.118812 tar[1440]: linux-amd64/helm Jun 25 18:52:00.078350 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 25 18:52:00.088721 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 25 18:52:00.088742 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 25 18:52:00.110199 systemd[1]: Started update-engine.service - Update Engine. Jun 25 18:52:00.129336 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
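extend-filesystems enumerates the block devices here and then, as the next entries show, grows the root ext4 filesystem on /dev/vda9 online from 1617920 to 4635643 4-KiB blocks. Converting those block counts to sizes:

# Convert the ext4 block counts logged for /dev/vda9 into sizes.
BLOCK_SIZE = 4096          # 4-KiB blocks, per the kernel and resize2fs messages
old_blocks = 1_617_920
new_blocks = 4_635_643

def gib(blocks: int) -> float:
    return blocks * BLOCK_SIZE / 2**30

print(f"before resize: {gib(old_blocks):.2f} GiB")   # ~6.17 GiB
print(f"after  resize: {gib(new_blocks):.2f} GiB")   # ~17.68 GiB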
Jun 25 18:52:00.131072 jq[1450]: true Jun 25 18:52:00.133743 extend-filesystems[1424]: Resized partition /dev/vda9 Jun 25 18:52:00.142013 extend-filesystems[1461]: resize2fs 1.47.0 (5-Feb-2023) Jun 25 18:52:00.152218 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks Jun 25 18:52:00.160049 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1267) Jun 25 18:52:00.161753 systemd-logind[1430]: New seat seat0. Jun 25 18:52:00.165419 systemd-logind[1430]: Watching system buttons on /dev/input/event1 (Power Button) Jun 25 18:52:00.165441 systemd-logind[1430]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 25 18:52:00.172418 systemd[1]: Started systemd-logind.service - User Login Management. Jun 25 18:52:00.338036 locksmithd[1456]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 25 18:52:00.444323 kernel: EXT4-fs (vda9): resized filesystem to 4635643 Jun 25 18:52:00.511028 containerd[1448]: time="2024-06-25T18:52:00.510909988Z" level=info msg="starting containerd" revision=cd7148ac666309abf41fd4a49a8a5895b905e7f3 version=v1.7.18 Jun 25 18:52:00.517581 extend-filesystems[1461]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jun 25 18:52:00.517581 extend-filesystems[1461]: old_desc_blocks = 1, new_desc_blocks = 3 Jun 25 18:52:00.517581 extend-filesystems[1461]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long. Jun 25 18:52:00.527028 extend-filesystems[1424]: Resized filesystem in /dev/vda9 Jun 25 18:52:00.522816 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 25 18:52:00.527440 bash[1476]: Updated "/home/core/.ssh/authorized_keys" Jun 25 18:52:00.523066 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 25 18:52:00.529670 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 25 18:52:00.548258 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 25 18:52:00.562219 systemd[1]: Starting sshkeys.service... Jun 25 18:52:00.580666 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jun 25 18:52:00.591423 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jun 25 18:52:00.614687 containerd[1448]: time="2024-06-25T18:52:00.614638686Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 25 18:52:00.614807 containerd[1448]: time="2024-06-25T18:52:00.614791822Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:52:00.616142 containerd[1448]: time="2024-06-25T18:52:00.616115486Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.35-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:52:00.616204 containerd[1448]: time="2024-06-25T18:52:00.616190406Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:52:00.616481 containerd[1448]: time="2024-06-25T18:52:00.616460021Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:52:00.616555 containerd[1448]: time="2024-06-25T18:52:00.616541293Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jun 25 18:52:00.616692 containerd[1448]: time="2024-06-25T18:52:00.616675430Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 25 18:52:00.616817 containerd[1448]: time="2024-06-25T18:52:00.616798655Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:52:00.616873 containerd[1448]: time="2024-06-25T18:52:00.616860477Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jun 25 18:52:00.617062 containerd[1448]: time="2024-06-25T18:52:00.617045995Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:52:00.617378 containerd[1448]: time="2024-06-25T18:52:00.617357573Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jun 25 18:52:00.617774 containerd[1448]: time="2024-06-25T18:52:00.617432099Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jun 25 18:52:00.617774 containerd[1448]: time="2024-06-25T18:52:00.617448942Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:52:00.617774 containerd[1448]: time="2024-06-25T18:52:00.617549537Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:52:00.617774 containerd[1448]: time="2024-06-25T18:52:00.617566322Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jun 25 18:52:00.617774 containerd[1448]: time="2024-06-25T18:52:00.617622483Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jun 25 18:52:00.617774 containerd[1448]: time="2024-06-25T18:52:00.617637889Z" level=info msg="metadata content store policy set" policy=shared Jun 25 18:52:00.628797 containerd[1448]: time="2024-06-25T18:52:00.628771637Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 25 18:52:00.630941 containerd[1448]: time="2024-06-25T18:52:00.628937342Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jun 25 18:52:00.630941 containerd[1448]: time="2024-06-25T18:52:00.628962414Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 25 18:52:00.630941 containerd[1448]: time="2024-06-25T18:52:00.629003218Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jun 25 18:52:00.630941 containerd[1448]: time="2024-06-25T18:52:00.629029736Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Jun 25 18:52:00.630941 containerd[1448]: time="2024-06-25T18:52:00.629044309Z" level=info msg="NRI interface is disabled by configuration." Jun 25 18:52:00.630941 containerd[1448]: time="2024-06-25T18:52:00.629063269Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jun 25 18:52:00.630941 containerd[1448]: time="2024-06-25T18:52:00.629182355Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 25 18:52:00.630941 containerd[1448]: time="2024-06-25T18:52:00.629201736Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jun 25 18:52:00.630941 containerd[1448]: time="2024-06-25T18:52:00.629218713Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 25 18:52:00.630941 containerd[1448]: time="2024-06-25T18:52:00.629234712Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jun 25 18:52:00.630941 containerd[1448]: time="2024-06-25T18:52:00.629250453Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jun 25 18:52:00.630941 containerd[1448]: time="2024-06-25T18:52:00.629269126Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 25 18:52:00.630941 containerd[1448]: time="2024-06-25T18:52:00.629284464Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jun 25 18:52:00.630941 containerd[1448]: time="2024-06-25T18:52:00.629299132Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 25 18:52:00.631290 containerd[1448]: time="2024-06-25T18:52:00.629315371Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jun 25 18:52:00.631290 containerd[1448]: time="2024-06-25T18:52:00.629330767Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jun 25 18:52:00.631290 containerd[1448]: time="2024-06-25T18:52:00.629359977Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 25 18:52:00.631290 containerd[1448]: time="2024-06-25T18:52:00.629375278Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 25 18:52:00.631290 containerd[1448]: time="2024-06-25T18:52:00.629479610Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 25 18:52:00.631290 containerd[1448]: time="2024-06-25T18:52:00.629765396Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jun 25 18:52:00.631290 containerd[1448]: time="2024-06-25T18:52:00.629794263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 25 18:52:00.631290 containerd[1448]: time="2024-06-25T18:52:00.629809860Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jun 25 18:52:00.631290 containerd[1448]: time="2024-06-25T18:52:00.629833926Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Jun 25 18:52:00.631290 containerd[1448]: time="2024-06-25T18:52:00.629909248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 25 18:52:00.631290 containerd[1448]: time="2024-06-25T18:52:00.629943853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 25 18:52:00.631290 containerd[1448]: time="2024-06-25T18:52:00.629960275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 25 18:52:00.631290 containerd[1448]: time="2024-06-25T18:52:00.629973917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 25 18:52:00.631290 containerd[1448]: time="2024-06-25T18:52:00.629988499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 25 18:52:00.631615 containerd[1448]: time="2024-06-25T18:52:00.630003358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jun 25 18:52:00.631615 containerd[1448]: time="2024-06-25T18:52:00.630017892Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 25 18:52:00.631615 containerd[1448]: time="2024-06-25T18:52:00.630032742Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 25 18:52:00.631615 containerd[1448]: time="2024-06-25T18:52:00.630050724Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 25 18:52:00.631615 containerd[1448]: time="2024-06-25T18:52:00.630201129Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 25 18:52:00.631615 containerd[1448]: time="2024-06-25T18:52:00.630221459Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jun 25 18:52:00.631615 containerd[1448]: time="2024-06-25T18:52:00.630237544Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 25 18:52:00.631615 containerd[1448]: time="2024-06-25T18:52:00.630252250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 25 18:52:00.631615 containerd[1448]: time="2024-06-25T18:52:00.630266449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 25 18:52:00.631615 containerd[1448]: time="2024-06-25T18:52:00.630284900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 25 18:52:00.631615 containerd[1448]: time="2024-06-25T18:52:00.630299645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 25 18:52:00.631615 containerd[1448]: time="2024-06-25T18:52:00.630313843Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jun 25 18:52:00.631880 containerd[1448]: time="2024-06-25T18:52:00.630606567Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 25 18:52:00.631880 containerd[1448]: time="2024-06-25T18:52:00.630688988Z" level=info msg="Connect containerd service" Jun 25 18:52:00.631880 containerd[1448]: time="2024-06-25T18:52:00.630717298Z" level=info msg="using legacy CRI server" Jun 25 18:52:00.631880 containerd[1448]: time="2024-06-25T18:52:00.630724829Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 25 18:52:00.631880 containerd[1448]: time="2024-06-25T18:52:00.630812299Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 25 18:52:00.633045 containerd[1448]: time="2024-06-25T18:52:00.633020064Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 18:52:00.633186 
containerd[1448]: time="2024-06-25T18:52:00.633169654Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 25 18:52:00.633268 containerd[1448]: time="2024-06-25T18:52:00.633250408Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jun 25 18:52:00.633339 containerd[1448]: time="2024-06-25T18:52:00.633316754Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 25 18:52:00.633418 containerd[1448]: time="2024-06-25T18:52:00.633400890Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jun 25 18:52:00.633792 containerd[1448]: time="2024-06-25T18:52:00.633772662Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 25 18:52:00.634002 containerd[1448]: time="2024-06-25T18:52:00.633977225Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 25 18:52:00.634135 containerd[1448]: time="2024-06-25T18:52:00.634103679Z" level=info msg="Start subscribing containerd event" Jun 25 18:52:00.634238 containerd[1448]: time="2024-06-25T18:52:00.634222372Z" level=info msg="Start recovering state" Jun 25 18:52:00.634350 containerd[1448]: time="2024-06-25T18:52:00.634334876Z" level=info msg="Start event monitor" Jun 25 18:52:00.634410 containerd[1448]: time="2024-06-25T18:52:00.634397648Z" level=info msg="Start snapshots syncer" Jun 25 18:52:00.634464 containerd[1448]: time="2024-06-25T18:52:00.634452784Z" level=info msg="Start cni network conf syncer for default" Jun 25 18:52:00.634515 containerd[1448]: time="2024-06-25T18:52:00.634504730Z" level=info msg="Start streaming server" Jun 25 18:52:00.634673 systemd[1]: Started containerd.service - containerd container runtime. Jun 25 18:52:00.640881 containerd[1448]: time="2024-06-25T18:52:00.638221388Z" level=info msg="containerd successfully booted in 0.133760s" Jun 25 18:52:00.712301 sshd_keygen[1449]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 25 18:52:00.740167 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 25 18:52:00.750238 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 25 18:52:00.760476 systemd[1]: Started sshd@0-172.24.4.127:22-172.24.4.1:40384.service - OpenSSH per-connection server daemon (172.24.4.1:40384). Jun 25 18:52:00.763827 systemd[1]: issuegen.service: Deactivated successfully. Jun 25 18:52:00.764147 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 25 18:52:00.776422 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 25 18:52:00.799516 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 25 18:52:00.807054 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 25 18:52:00.819373 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 25 18:52:00.820242 systemd[1]: Reached target getty.target - Login Prompts. Jun 25 18:52:01.022612 tar[1440]: linux-amd64/LICENSE Jun 25 18:52:01.022612 tar[1440]: linux-amd64/README.md Jun 25 18:52:01.033244 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 25 18:52:01.403318 systemd-networkd[1368]: eth0: Gained IPv6LL Jun 25 18:52:01.409123 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
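Editor's note: containerd reports above that it is serving on /run/containerd/containerd.sock (gRPC) and /run/containerd/containerd.sock.ttrpc. A small sketch that verifies those endpoints exist and are Unix sockets; it assumes it runs on the same host, and the paths are copied verbatim from the log.

```python
import os
import stat

# Socket paths as reported by containerd in the "serving..." records above.
SOCKETS = [
    "/run/containerd/containerd.sock",
    "/run/containerd/containerd.sock.ttrpc",
]

for path in SOCKETS:
    try:
        mode = os.stat(path).st_mode
    except FileNotFoundError:
        print(f"{path}: missing")
        continue
    kind = "unix socket" if stat.S_ISSOCK(mode) else "not a socket"
    print(f"{path}: {kind}")
```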
Jun 25 18:52:01.419565 systemd[1]: Reached target network-online.target - Network is Online. Jun 25 18:52:01.432993 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:52:01.447233 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 25 18:52:01.513743 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 25 18:52:02.020835 sshd[1506]: Accepted publickey for core from 172.24.4.1 port 40384 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:52:02.023641 sshd[1506]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:52:02.046733 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 25 18:52:02.058432 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 25 18:52:02.070058 systemd-logind[1430]: New session 1 of user core. Jun 25 18:52:02.102147 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 25 18:52:02.114964 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 25 18:52:02.129485 (systemd)[1533]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:52:02.242988 systemd[1533]: Queued start job for default target default.target. Jun 25 18:52:02.255048 systemd[1533]: Created slice app.slice - User Application Slice. Jun 25 18:52:02.255239 systemd[1533]: Reached target paths.target - Paths. Jun 25 18:52:02.255317 systemd[1533]: Reached target timers.target - Timers. Jun 25 18:52:02.259125 systemd[1533]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 25 18:52:02.268416 systemd[1533]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 25 18:52:02.269052 systemd[1533]: Reached target sockets.target - Sockets. Jun 25 18:52:02.269069 systemd[1533]: Reached target basic.target - Basic System. Jun 25 18:52:02.269107 systemd[1533]: Reached target default.target - Main User Target. Jun 25 18:52:02.269131 systemd[1533]: Startup finished in 133ms. Jun 25 18:52:02.269332 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 25 18:52:02.276393 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 25 18:52:02.716524 systemd[1]: Started sshd@1-172.24.4.127:22-172.24.4.1:40398.service - OpenSSH per-connection server daemon (172.24.4.1:40398). Jun 25 18:52:03.022994 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:52:03.044198 (kubelet)[1551]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:52:04.880595 sshd[1544]: Accepted publickey for core from 172.24.4.1 port 40398 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:52:04.884588 sshd[1544]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:52:04.897592 systemd-logind[1430]: New session 2 of user core. Jun 25 18:52:04.909644 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jun 25 18:52:04.933791 kubelet[1551]: E0625 18:52:04.933706 1551 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:52:04.937120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:52:04.937280 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:52:04.937728 systemd[1]: kubelet.service: Consumed 2.042s CPU time. Jun 25 18:52:05.630153 sshd[1544]: pam_unix(sshd:session): session closed for user core Jun 25 18:52:05.643630 systemd[1]: sshd@1-172.24.4.127:22-172.24.4.1:40398.service: Deactivated successfully. Jun 25 18:52:05.647594 systemd[1]: session-2.scope: Deactivated successfully. Jun 25 18:52:05.651329 systemd-logind[1430]: Session 2 logged out. Waiting for processes to exit. Jun 25 18:52:05.659344 systemd[1]: Started sshd@2-172.24.4.127:22-172.24.4.1:42136.service - OpenSSH per-connection server daemon (172.24.4.1:42136). Jun 25 18:52:05.666551 systemd-logind[1430]: Removed session 2. Jun 25 18:52:05.863340 login[1514]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 25 18:52:05.863465 login[1513]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 25 18:52:05.873632 systemd-logind[1430]: New session 4 of user core. Jun 25 18:52:05.882334 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 25 18:52:05.889654 systemd-logind[1430]: New session 3 of user core. Jun 25 18:52:05.896279 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 25 18:52:07.029426 coreos-metadata[1419]: Jun 25 18:52:07.029 WARN failed to locate config-drive, using the metadata service API instead Jun 25 18:52:07.095054 coreos-metadata[1419]: Jun 25 18:52:07.094 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jun 25 18:52:07.235377 sshd[1566]: Accepted publickey for core from 172.24.4.1 port 42136 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:52:07.238444 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:52:07.248596 systemd-logind[1430]: New session 5 of user core. Jun 25 18:52:07.260433 systemd[1]: Started session-5.scope - Session 5 of User core. 
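Editor's note: kubelet exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet, and systemd keeps rescheduling the unit (the restart counter climbs in the entries that follow). A minimal sketch of the same precondition check, assuming it runs on the node itself; the path is copied from the error message above.

```python
from pathlib import Path

# Path taken from the kubelet "failed to load Kubelet config file" error above.
# On this node the file has not been written yet, which is why kubelet keeps
# exiting and systemd keeps restarting it.
CONFIG = Path("/var/lib/kubelet/config.yaml")

if CONFIG.is_file():
    print(f"{CONFIG} present ({CONFIG.stat().st_size} bytes); kubelet can load it")
else:
    print(f"{CONFIG} missing; kubelet will exit and systemd will retry the unit")
```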
Jun 25 18:52:07.296907 coreos-metadata[1419]: Jun 25 18:52:07.296 INFO Fetch successful Jun 25 18:52:07.296907 coreos-metadata[1419]: Jun 25 18:52:07.296 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jun 25 18:52:07.313238 coreos-metadata[1419]: Jun 25 18:52:07.313 INFO Fetch successful Jun 25 18:52:07.313238 coreos-metadata[1419]: Jun 25 18:52:07.313 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jun 25 18:52:07.330497 coreos-metadata[1419]: Jun 25 18:52:07.330 INFO Fetch successful Jun 25 18:52:07.330497 coreos-metadata[1419]: Jun 25 18:52:07.330 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jun 25 18:52:07.346125 coreos-metadata[1419]: Jun 25 18:52:07.345 INFO Fetch successful Jun 25 18:52:07.346125 coreos-metadata[1419]: Jun 25 18:52:07.346 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jun 25 18:52:07.361909 coreos-metadata[1419]: Jun 25 18:52:07.361 INFO Fetch successful Jun 25 18:52:07.361909 coreos-metadata[1419]: Jun 25 18:52:07.361 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jun 25 18:52:07.377334 coreos-metadata[1419]: Jun 25 18:52:07.377 INFO Fetch successful Jun 25 18:52:07.410197 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 25 18:52:07.412639 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 25 18:52:07.690800 coreos-metadata[1493]: Jun 25 18:52:07.689 WARN failed to locate config-drive, using the metadata service API instead Jun 25 18:52:07.728068 coreos-metadata[1493]: Jun 25 18:52:07.727 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jun 25 18:52:07.743123 coreos-metadata[1493]: Jun 25 18:52:07.743 INFO Fetch successful Jun 25 18:52:07.743123 coreos-metadata[1493]: Jun 25 18:52:07.743 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jun 25 18:52:07.756487 coreos-metadata[1493]: Jun 25 18:52:07.756 INFO Fetch successful Jun 25 18:52:07.763611 unknown[1493]: wrote ssh authorized keys file for user: core Jun 25 18:52:07.821984 update-ssh-keys[1597]: Updated "/home/core/.ssh/authorized_keys" Jun 25 18:52:07.823332 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jun 25 18:52:07.826248 systemd[1]: Finished sshkeys.service. Jun 25 18:52:07.833572 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 25 18:52:07.834335 systemd[1]: Startup finished in 1.220s (kernel) + 17.527s (initrd) + 11.562s (userspace) = 30.310s. Jun 25 18:52:08.026543 sshd[1566]: pam_unix(sshd:session): session closed for user core Jun 25 18:52:08.032763 systemd[1]: sshd@2-172.24.4.127:22-172.24.4.1:42136.service: Deactivated successfully. Jun 25 18:52:08.035807 systemd[1]: session-5.scope: Deactivated successfully. Jun 25 18:52:08.039616 systemd-logind[1430]: Session 5 logged out. Waiting for processes to exit. Jun 25 18:52:08.041890 systemd-logind[1430]: Removed session 5. Jun 25 18:52:15.031435 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 25 18:52:15.038296 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:52:15.428995 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
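Editor's note: the coreos-metadata records above fall back from config-drive to the link-local metadata service and fetch a handful of endpoints. A rough sketch of the same requests using only the standard library; the paths are copied from the log, and 169.254.169.254 is only reachable from inside the guest, so running this anywhere else will simply fail or time out.

```python
import urllib.request

# Endpoint paths copied from the coreos-metadata fetch lines above.
BASE = "http://169.254.169.254"
PATHS = [
    "/openstack/2012-08-10/meta_data.json",
    "/latest/meta-data/hostname",
    "/latest/meta-data/instance-id",
    "/latest/meta-data/local-ipv4",
    "/latest/meta-data/public-ipv4",
]

for path in PATHS:
    try:
        with urllib.request.urlopen(BASE + path, timeout=5) as resp:
            body = resp.read().decode(errors="replace")
        print(f"{path}: {body[:60]!r}")
    except OSError as exc:
        print(f"{path}: failed ({exc})")
```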
Jun 25 18:52:15.435812 (kubelet)[1611]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:52:15.628084 kubelet[1611]: E0625 18:52:15.627872 1611 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:52:15.632171 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:52:15.632487 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:52:17.963494 systemd[1]: Started sshd@3-172.24.4.127:22-172.24.4.1:56174.service - OpenSSH per-connection server daemon (172.24.4.1:56174). Jun 25 18:52:19.526156 sshd[1620]: Accepted publickey for core from 172.24.4.1 port 56174 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:52:19.529041 sshd[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:52:19.543075 systemd-logind[1430]: New session 6 of user core. Jun 25 18:52:19.550228 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 25 18:52:20.174989 sshd[1620]: pam_unix(sshd:session): session closed for user core Jun 25 18:52:20.189675 systemd[1]: sshd@3-172.24.4.127:22-172.24.4.1:56174.service: Deactivated successfully. Jun 25 18:52:20.192832 systemd[1]: session-6.scope: Deactivated successfully. Jun 25 18:52:20.195689 systemd-logind[1430]: Session 6 logged out. Waiting for processes to exit. Jun 25 18:52:20.204579 systemd[1]: Started sshd@4-172.24.4.127:22-172.24.4.1:56176.service - OpenSSH per-connection server daemon (172.24.4.1:56176). Jun 25 18:52:20.207977 systemd-logind[1430]: Removed session 6. Jun 25 18:52:21.515725 sshd[1627]: Accepted publickey for core from 172.24.4.1 port 56176 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:52:21.519165 sshd[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:52:21.528632 systemd-logind[1430]: New session 7 of user core. Jun 25 18:52:21.540267 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 25 18:52:22.208216 sshd[1627]: pam_unix(sshd:session): session closed for user core Jun 25 18:52:22.218580 systemd[1]: sshd@4-172.24.4.127:22-172.24.4.1:56176.service: Deactivated successfully. Jun 25 18:52:22.221748 systemd[1]: session-7.scope: Deactivated successfully. Jun 25 18:52:22.223581 systemd-logind[1430]: Session 7 logged out. Waiting for processes to exit. Jun 25 18:52:22.236632 systemd[1]: Started sshd@5-172.24.4.127:22-172.24.4.1:56184.service - OpenSSH per-connection server daemon (172.24.4.1:56184). Jun 25 18:52:22.240829 systemd-logind[1430]: Removed session 7. Jun 25 18:52:23.673322 sshd[1634]: Accepted publickey for core from 172.24.4.1 port 56184 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:52:23.676922 sshd[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:52:23.688792 systemd-logind[1430]: New session 8 of user core. Jun 25 18:52:23.698258 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 25 18:52:24.179409 sshd[1634]: pam_unix(sshd:session): session closed for user core Jun 25 18:52:24.194447 systemd[1]: sshd@5-172.24.4.127:22-172.24.4.1:56184.service: Deactivated successfully. 
Jun 25 18:52:24.198898 systemd[1]: session-8.scope: Deactivated successfully. Jun 25 18:52:24.201172 systemd-logind[1430]: Session 8 logged out. Waiting for processes to exit. Jun 25 18:52:24.224157 systemd[1]: Started sshd@6-172.24.4.127:22-172.24.4.1:56190.service - OpenSSH per-connection server daemon (172.24.4.1:56190). Jun 25 18:52:24.226765 systemd-logind[1430]: Removed session 8. Jun 25 18:52:25.608697 sshd[1641]: Accepted publickey for core from 172.24.4.1 port 56190 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:52:25.611429 sshd[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:52:25.623654 systemd-logind[1430]: New session 9 of user core. Jun 25 18:52:25.629259 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 25 18:52:25.633282 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 25 18:52:25.642354 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:52:26.057267 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:52:26.070659 (kubelet)[1652]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:52:26.163026 kubelet[1652]: E0625 18:52:26.162893 1652 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:52:26.167893 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:52:26.168105 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:52:26.211853 sudo[1658]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 25 18:52:26.212578 sudo[1658]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:52:26.235287 sudo[1658]: pam_unix(sudo:session): session closed for user root Jun 25 18:52:26.413273 sshd[1641]: pam_unix(sshd:session): session closed for user core Jun 25 18:52:26.426293 systemd[1]: sshd@6-172.24.4.127:22-172.24.4.1:56190.service: Deactivated successfully. Jun 25 18:52:26.429557 systemd[1]: session-9.scope: Deactivated successfully. Jun 25 18:52:26.433318 systemd-logind[1430]: Session 9 logged out. Waiting for processes to exit. Jun 25 18:52:26.438562 systemd[1]: Started sshd@7-172.24.4.127:22-172.24.4.1:35796.service - OpenSSH per-connection server daemon (172.24.4.1:35796). Jun 25 18:52:26.441740 systemd-logind[1430]: Removed session 9. Jun 25 18:52:27.710506 sshd[1665]: Accepted publickey for core from 172.24.4.1 port 35796 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:52:27.713210 sshd[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:52:27.722858 systemd-logind[1430]: New session 10 of user core. Jun 25 18:52:27.731240 systemd[1]: Started session-10.scope - Session 10 of User core. 
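Editor's note: each incoming connection above gets its own transient unit named like sshd@8-172.24.4.127:22-172.24.4.1:35812.service, encoding the listener and the peer. A small sketch that splits such names back into endpoints; the name format is inferred from the units appearing in this log only.

```python
import re

# Per-connection sshd unit names as they appear in the log above, e.g.
#   sshd@8-172.24.4.127:22-172.24.4.1:35812.service
UNIT_RE = re.compile(
    r"sshd@(?P<idx>\d+)-(?P<local>[\d.]+:\d+)-(?P<peer>[\d.]+:\d+)\.service"
)

for unit in [
    "sshd@7-172.24.4.127:22-172.24.4.1:35796.service",
    "sshd@8-172.24.4.127:22-172.24.4.1:35812.service",
]:
    m = UNIT_RE.fullmatch(unit)
    print(f"connection #{m['idx']}: {m['peer']} -> {m['local']}")
```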
Jun 25 18:52:28.208646 sudo[1669]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 25 18:52:28.209325 sudo[1669]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:52:28.216611 sudo[1669]: pam_unix(sudo:session): session closed for user root Jun 25 18:52:28.228177 sudo[1668]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jun 25 18:52:28.228760 sudo[1668]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:52:28.257505 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jun 25 18:52:28.263037 auditctl[1672]: No rules Jun 25 18:52:28.263764 systemd[1]: audit-rules.service: Deactivated successfully. Jun 25 18:52:28.264246 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jun 25 18:52:28.272797 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 18:52:28.329121 augenrules[1690]: No rules Jun 25 18:52:28.330703 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 18:52:28.333129 sudo[1668]: pam_unix(sudo:session): session closed for user root Jun 25 18:52:28.492844 sshd[1665]: pam_unix(sshd:session): session closed for user core Jun 25 18:52:28.505124 systemd[1]: sshd@7-172.24.4.127:22-172.24.4.1:35796.service: Deactivated successfully. Jun 25 18:52:28.509283 systemd[1]: session-10.scope: Deactivated successfully. Jun 25 18:52:28.511271 systemd-logind[1430]: Session 10 logged out. Waiting for processes to exit. Jun 25 18:52:28.519544 systemd[1]: Started sshd@8-172.24.4.127:22-172.24.4.1:35812.service - OpenSSH per-connection server daemon (172.24.4.1:35812). Jun 25 18:52:28.522167 systemd-logind[1430]: Removed session 10. Jun 25 18:52:29.866473 sshd[1698]: Accepted publickey for core from 172.24.4.1 port 35812 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:52:29.869168 sshd[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:52:29.879197 systemd-logind[1430]: New session 11 of user core. Jun 25 18:52:29.889373 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 25 18:52:30.442854 sudo[1701]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 25 18:52:30.444269 sudo[1701]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:52:30.744439 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 25 18:52:30.767447 (dockerd)[1711]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 25 18:52:31.360532 dockerd[1711]: time="2024-06-25T18:52:31.360342411Z" level=info msg="Starting up" Jun 25 18:52:31.401538 systemd[1]: var-lib-docker-metacopy\x2dcheck1969300202-merged.mount: Deactivated successfully. Jun 25 18:52:31.438584 dockerd[1711]: time="2024-06-25T18:52:31.438499721Z" level=info msg="Loading containers: start." Jun 25 18:52:31.594315 kernel: Initializing XFRM netlink socket Jun 25 18:52:31.717404 systemd-networkd[1368]: docker0: Link UP Jun 25 18:52:31.741125 dockerd[1711]: time="2024-06-25T18:52:31.741074844Z" level=info msg="Loading containers: done." 
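Editor's note: dockerd brackets its container reload with "Loading containers: start." and "Loading containers: done." records. A quick sketch that turns those two RFC 3339 timestamps (copied from the lines above) into an elapsed time; same-day timestamps are assumed, so only the time-of-day part is parsed.

```python
import re

# Timestamps copied from the dockerd "Loading containers" records above.
START = "2024-06-25T18:52:31.438499721Z"
DONE = "2024-06-25T18:52:31.741074844Z"

def seconds_of_day(ts: str) -> float:
    """Return seconds since midnight for an RFC 3339 timestamp (same day assumed)."""
    h, m, s = re.match(r".*T(\d{2}):(\d{2}):([\d.]+)Z", ts).groups()
    return int(h) * 3600 + int(m) * 60 + float(s)

print(f"container reload took {seconds_of_day(DONE) - seconds_of_day(START):.3f} s")
```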
Jun 25 18:52:31.905846 dockerd[1711]: time="2024-06-25T18:52:31.905771761Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 25 18:52:31.906200 dockerd[1711]: time="2024-06-25T18:52:31.906152300Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jun 25 18:52:31.906410 dockerd[1711]: time="2024-06-25T18:52:31.906373010Z" level=info msg="Daemon has completed initialization" Jun 25 18:52:31.978052 dockerd[1711]: time="2024-06-25T18:52:31.977908651Z" level=info msg="API listen on /run/docker.sock" Jun 25 18:52:31.978899 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 25 18:52:32.385384 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck699541447-merged.mount: Deactivated successfully. Jun 25 18:52:34.102105 containerd[1448]: time="2024-06-25T18:52:34.101984691Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\"" Jun 25 18:52:35.074983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1817809251.mount: Deactivated successfully. Jun 25 18:52:36.281878 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jun 25 18:52:36.293550 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:52:36.463200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:52:36.468185 (kubelet)[1906]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:52:36.524834 kubelet[1906]: E0625 18:52:36.524043 1906 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:52:36.528672 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:52:36.528865 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jun 25 18:52:37.359452 containerd[1448]: time="2024-06-25T18:52:37.359400029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:52:37.361088 containerd[1448]: time="2024-06-25T18:52:37.360696121Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.2: active requests=0, bytes read=32771809" Jun 25 18:52:37.361996 containerd[1448]: time="2024-06-25T18:52:37.361914109Z" level=info msg="ImageCreate event name:\"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:52:37.376043 containerd[1448]: time="2024-06-25T18:52:37.375994553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:52:37.379091 containerd[1448]: time="2024-06-25T18:52:37.377732577Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.2\" with image id \"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\", size \"32768601\" in 3.275662864s" Jun 25 18:52:37.379091 containerd[1448]: time="2024-06-25T18:52:37.377768522Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\" returns image reference \"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\"" Jun 25 18:52:37.406413 containerd[1448]: time="2024-06-25T18:52:37.406355056Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\"" Jun 25 18:52:40.169500 containerd[1448]: time="2024-06-25T18:52:40.169367918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:52:40.173050 containerd[1448]: time="2024-06-25T18:52:40.172181380Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.2: active requests=0, bytes read=29588682" Jun 25 18:52:40.175533 containerd[1448]: time="2024-06-25T18:52:40.175403439Z" level=info msg="ImageCreate event name:\"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:52:40.196613 containerd[1448]: time="2024-06-25T18:52:40.196477844Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:52:40.200285 containerd[1448]: time="2024-06-25T18:52:40.199530662Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.2\" with image id \"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\", size \"31138657\" in 2.793092009s" Jun 25 18:52:40.200285 containerd[1448]: time="2024-06-25T18:52:40.199617101Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\" returns image reference \"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\"" Jun 25 18:52:40.249490 
containerd[1448]: time="2024-06-25T18:52:40.248855697Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\"" Jun 25 18:52:41.841297 containerd[1448]: time="2024-06-25T18:52:41.841223497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:52:41.842943 containerd[1448]: time="2024-06-25T18:52:41.842586138Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.2: active requests=0, bytes read=17778128" Jun 25 18:52:41.844676 containerd[1448]: time="2024-06-25T18:52:41.844182030Z" level=info msg="ImageCreate event name:\"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:52:41.847483 containerd[1448]: time="2024-06-25T18:52:41.847442314Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:52:41.848819 containerd[1448]: time="2024-06-25T18:52:41.848782608Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.2\" with image id \"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\", size \"19328121\" in 1.599796912s" Jun 25 18:52:41.848873 containerd[1448]: time="2024-06-25T18:52:41.848817882Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\" returns image reference \"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\"" Jun 25 18:52:41.873363 containerd[1448]: time="2024-06-25T18:52:41.873328486Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\"" Jun 25 18:52:43.849300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1916713320.mount: Deactivated successfully. 
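Editor's note: the "Pulled image ... in ..." records above report a byte size and a wall-clock duration for each control-plane image. A back-of-the-envelope throughput calculation from those logged values; the figures are only indicative, since the reported duration covers the whole pull as containerd measured it.

```python
# Image sizes (bytes) and pull durations (seconds) copied from the
# containerd "Pulled image" messages above.
PULLS = {
    "kube-apiserver:v1.30.2": (32_768_601, 3.275662864),
    "kube-controller-manager:v1.30.2": (31_138_657, 2.793092009),
    "kube-scheduler:v1.30.2": (19_328_121, 1.599796912),
}

for image, (size_bytes, seconds) in PULLS.items():
    mib_per_s = size_bytes / seconds / 2**20
    print(f"{image}: {mib_per_s:.1f} MiB/s")
```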
Jun 25 18:52:44.526514 containerd[1448]: time="2024-06-25T18:52:44.526430989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:52:44.528238 containerd[1448]: time="2024-06-25T18:52:44.528204288Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.2: active requests=0, bytes read=29035446" Jun 25 18:52:44.530633 containerd[1448]: time="2024-06-25T18:52:44.529482801Z" level=info msg="ImageCreate event name:\"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:52:44.532591 containerd[1448]: time="2024-06-25T18:52:44.531761077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:52:44.532591 containerd[1448]: time="2024-06-25T18:52:44.532488288Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.2\" with image id \"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\", repo tag \"registry.k8s.io/kube-proxy:v1.30.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\", size \"29034457\" in 2.658988057s" Jun 25 18:52:44.532591 containerd[1448]: time="2024-06-25T18:52:44.532517838Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\" returns image reference \"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\"" Jun 25 18:52:44.555792 containerd[1448]: time="2024-06-25T18:52:44.555755524Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jun 25 18:52:45.234639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1450106168.mount: Deactivated successfully. Jun 25 18:52:45.538111 update_engine[1433]: I0625 18:52:45.538064 1433 update_attempter.cc:509] Updating boot flags... Jun 25 18:52:45.588017 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1967) Jun 25 18:52:45.655984 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1970) Jun 25 18:52:45.991048 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1970) Jun 25 18:52:46.530861 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jun 25 18:52:46.537258 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:52:46.640115 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:52:46.652313 (kubelet)[2015]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:52:47.004403 kubelet[2015]: E0625 18:52:47.003442 2015 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:52:47.009996 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:52:47.010316 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jun 25 18:52:47.017450 containerd[1448]: time="2024-06-25T18:52:47.017349591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:52:47.019242 containerd[1448]: time="2024-06-25T18:52:47.019132480Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Jun 25 18:52:47.022275 containerd[1448]: time="2024-06-25T18:52:47.022220285Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:52:47.030814 containerd[1448]: time="2024-06-25T18:52:47.030732650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:52:47.035305 containerd[1448]: time="2024-06-25T18:52:47.034351735Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.478314076s" Jun 25 18:52:47.035305 containerd[1448]: time="2024-06-25T18:52:47.034436997Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jun 25 18:52:47.086128 containerd[1448]: time="2024-06-25T18:52:47.086007995Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jun 25 18:52:47.797171 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4267825555.mount: Deactivated successfully. 
Jun 25 18:52:47.814053 containerd[1448]: time="2024-06-25T18:52:47.812828156Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:52:47.816227 containerd[1448]: time="2024-06-25T18:52:47.815590446Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Jun 25 18:52:47.817784 containerd[1448]: time="2024-06-25T18:52:47.817642607Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:52:47.824138 containerd[1448]: time="2024-06-25T18:52:47.823975505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:52:47.827004 containerd[1448]: time="2024-06-25T18:52:47.826061884Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 739.971805ms" Jun 25 18:52:47.827004 containerd[1448]: time="2024-06-25T18:52:47.826136314Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jun 25 18:52:47.877158 containerd[1448]: time="2024-06-25T18:52:47.877088968Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jun 25 18:52:48.549918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2091294837.mount: Deactivated successfully. Jun 25 18:52:51.390748 containerd[1448]: time="2024-06-25T18:52:51.390657408Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:52:51.392144 containerd[1448]: time="2024-06-25T18:52:51.392103915Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579" Jun 25 18:52:51.393034 containerd[1448]: time="2024-06-25T18:52:51.392975049Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:52:51.396960 containerd[1448]: time="2024-06-25T18:52:51.396817922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:52:51.398382 containerd[1448]: time="2024-06-25T18:52:51.398096685Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.520927736s" Jun 25 18:52:51.398382 containerd[1448]: time="2024-06-25T18:52:51.398134991Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jun 25 18:52:55.146499 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 25 18:52:55.159303 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:52:55.188661 systemd[1]: Reloading requested from client PID 2148 ('systemctl') (unit session-11.scope)... Jun 25 18:52:55.188681 systemd[1]: Reloading... Jun 25 18:52:55.298970 zram_generator::config[2182]: No configuration found. Jun 25 18:52:55.449261 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:52:55.533139 systemd[1]: Reloading finished in 343 ms. Jun 25 18:52:55.582025 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 25 18:52:55.582093 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 25 18:52:55.582300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:52:55.584288 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:52:55.750265 (kubelet)[2250]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 25 18:52:55.751161 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:52:56.222135 kubelet[2250]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:52:56.222135 kubelet[2250]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 18:52:56.222135 kubelet[2250]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:52:56.226566 kubelet[2250]: I0625 18:52:56.226204 2250 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 18:52:57.001064 kubelet[2250]: I0625 18:52:57.000972 2250 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jun 25 18:52:57.001064 kubelet[2250]: I0625 18:52:57.001008 2250 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 18:52:57.001357 kubelet[2250]: I0625 18:52:57.001243 2250 server.go:927] "Client rotation is on, will bootstrap in background" Jun 25 18:52:57.028603 kubelet[2250]: I0625 18:52:57.028366 2250 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 18:52:57.034456 kubelet[2250]: E0625 18:52:57.034337 2250 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.127:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.127:6443: connect: connection refused Jun 25 18:52:57.051694 kubelet[2250]: I0625 18:52:57.051052 2250 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 18:52:57.051694 kubelet[2250]: I0625 18:52:57.051247 2250 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 18:52:57.051694 kubelet[2250]: I0625 18:52:57.051273 2250 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4012-0-0-8-5dd8cf1e6e.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 18:52:57.051694 kubelet[2250]: I0625 18:52:57.051453 2250 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 18:52:57.051958 kubelet[2250]: I0625 18:52:57.051463 2250 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 18:52:57.051958 kubelet[2250]: I0625 18:52:57.051584 2250 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:52:57.053180 kubelet[2250]: I0625 18:52:57.053125 2250 kubelet.go:400] "Attempting to sync node with API server" Jun 25 18:52:57.053180 kubelet[2250]: I0625 18:52:57.053143 2250 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 18:52:57.053180 kubelet[2250]: I0625 18:52:57.053166 2250 kubelet.go:312] "Adding apiserver pod source" Jun 25 18:52:57.053395 kubelet[2250]: I0625 18:52:57.053289 2250 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 18:52:57.054117 kubelet[2250]: W0625 18:52:57.054014 2250 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.127:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012-0-0-8-5dd8cf1e6e.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.127:6443: connect: connection refused Jun 25 18:52:57.054179 kubelet[2250]: E0625 18:52:57.054148 2250 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.127:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012-0-0-8-5dd8cf1e6e.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.127:6443: connect: connection refused Jun 25 18:52:57.057124 kubelet[2250]: I0625 18:52:57.056952 2250 
kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Jun 25 18:52:57.059772 kubelet[2250]: I0625 18:52:57.058942 2250 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 25 18:52:57.059772 kubelet[2250]: W0625 18:52:57.058998 2250 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 25 18:52:57.059772 kubelet[2250]: I0625 18:52:57.059551 2250 server.go:1264] "Started kubelet" Jun 25 18:52:57.059772 kubelet[2250]: W0625 18:52:57.059653 2250 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.127:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.127:6443: connect: connection refused Jun 25 18:52:57.059772 kubelet[2250]: E0625 18:52:57.059696 2250 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.127:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.127:6443: connect: connection refused Jun 25 18:52:57.080130 kubelet[2250]: I0625 18:52:57.080107 2250 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 18:52:57.092282 kubelet[2250]: E0625 18:52:57.091473 2250 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.127:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.127:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4012-0-0-8-5dd8cf1e6e.novalocal.17dc540e0c318574 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4012-0-0-8-5dd8cf1e6e.novalocal,UID:ci-4012-0-0-8-5dd8cf1e6e.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4012-0-0-8-5dd8cf1e6e.novalocal,},FirstTimestamp:2024-06-25 18:52:57.059534196 +0000 UTC m=+1.305708838,LastTimestamp:2024-06-25 18:52:57.059534196 +0000 UTC m=+1.305708838,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4012-0-0-8-5dd8cf1e6e.novalocal,}" Jun 25 18:52:57.094973 kubelet[2250]: I0625 18:52:57.093997 2250 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 18:52:57.094973 kubelet[2250]: I0625 18:52:57.094882 2250 server.go:455] "Adding debug handlers to kubelet server" Jun 25 18:52:57.097315 kubelet[2250]: I0625 18:52:57.097280 2250 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 18:52:57.098336 kubelet[2250]: I0625 18:52:57.098298 2250 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jun 25 18:52:57.098435 kubelet[2250]: I0625 18:52:57.097177 2250 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 25 18:52:57.098705 kubelet[2250]: I0625 18:52:57.098693 2250 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 18:52:57.098850 kubelet[2250]: I0625 18:52:57.098826 2250 reconciler.go:26] "Reconciler: start to sync state" Jun 25 18:52:57.111411 kubelet[2250]: E0625 18:52:57.111383 2250 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.24.4.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012-0-0-8-5dd8cf1e6e.novalocal?timeout=10s\": dial tcp 172.24.4.127:6443: connect: connection refused" interval="200ms" Jun 25 18:52:57.111860 kubelet[2250]: W0625 18:52:57.111676 2250 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.127:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.127:6443: connect: connection refused Jun 25 18:52:57.111959 kubelet[2250]: E0625 18:52:57.111947 2250 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.127:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.127:6443: connect: connection refused Jun 25 18:52:57.126625 kubelet[2250]: I0625 18:52:57.126601 2250 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 25 18:52:57.169398 kubelet[2250]: I0625 18:52:57.169148 2250 factory.go:221] Registration of the containerd container factory successfully Jun 25 18:52:57.169655 kubelet[2250]: I0625 18:52:57.169625 2250 factory.go:221] Registration of the systemd container factory successfully Jun 25 18:52:57.170094 kubelet[2250]: E0625 18:52:57.170076 2250 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 18:52:57.184223 kubelet[2250]: I0625 18:52:57.184124 2250 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 18:52:57.186642 kubelet[2250]: I0625 18:52:57.186606 2250 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 25 18:52:57.186717 kubelet[2250]: I0625 18:52:57.186666 2250 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 18:52:57.186717 kubelet[2250]: I0625 18:52:57.186701 2250 kubelet.go:2337] "Starting kubelet main sync loop" Jun 25 18:52:57.186843 kubelet[2250]: E0625 18:52:57.186780 2250 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 18:52:57.195673 kubelet[2250]: W0625 18:52:57.195641 2250 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.127:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.127:6443: connect: connection refused Jun 25 18:52:57.195861 kubelet[2250]: E0625 18:52:57.195848 2250 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.127:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.127:6443: connect: connection refused Jun 25 18:52:57.216231 kubelet[2250]: I0625 18:52:57.216193 2250 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012-0-0-8-5dd8cf1e6e.novalocal" Jun 25 18:52:57.221010 kubelet[2250]: E0625 18:52:57.216868 2250 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.127:6443/api/v1/nodes\": dial tcp 172.24.4.127:6443: connect: connection refused" node="ci-4012-0-0-8-5dd8cf1e6e.novalocal" Jun 25 18:52:57.221010 kubelet[2250]: I0625 18:52:57.216887 2250 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 18:52:57.221010 kubelet[2250]: I0625 18:52:57.216902 2250 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 18:52:57.221010 kubelet[2250]: I0625 18:52:57.216917 2250 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:52:57.244155 kubelet[2250]: I0625 18:52:57.244031 2250 policy_none.go:49] "None policy: Start" Jun 25 18:52:57.248017 kubelet[2250]: I0625 18:52:57.247476 2250 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 25 18:52:57.248017 kubelet[2250]: I0625 18:52:57.247527 2250 state_mem.go:35] "Initializing new in-memory state store" Jun 25 18:52:57.261851 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 25 18:52:57.285787 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 25 18:52:57.288275 kubelet[2250]: E0625 18:52:57.288207 2250 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 25 18:52:57.295970 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
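The three "Created slice kubepods*.slice" entries above are systemd materializing the top-level QoS cgroups the kubelet's container manager asked for under the systemd cgroup driver (kubepods.slice with burstable and besteffort child slices). A minimal Go sketch that checks for those units on the host is below; the /sys/fs/cgroup mount point and the nested slice layout are assumptions inferred from how systemd places slices, not something the log states.

    // qos_slices.go - looks for the kubepods QoS slices named in the journal
    // above. The /sys/fs/cgroup location and nesting are assumptions (systemd
    // cgroup driver); only the slice names themselves come from the log.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        root := "/sys/fs/cgroup" // assumed cgroup mount point
        paths := []string{
            filepath.Join(root, "kubepods.slice"),
            filepath.Join(root, "kubepods.slice", "kubepods-burstable.slice"),
            filepath.Join(root, "kubepods.slice", "kubepods-besteffort.slice"),
        }
        for _, p := range paths {
            if st, err := os.Stat(p); err == nil && st.IsDir() {
                fmt.Printf("present: %s\n", p)
            } else {
                fmt.Printf("missing: %s (%v)\n", p, err)
            }
        }
    }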
Jun 25 18:52:57.310868 kubelet[2250]: I0625 18:52:57.310515 2250 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 18:52:57.310868 kubelet[2250]: I0625 18:52:57.310826 2250 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 25 18:52:57.311174 kubelet[2250]: I0625 18:52:57.311075 2250 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 18:52:57.314308 kubelet[2250]: E0625 18:52:57.312909 2250 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012-0-0-8-5dd8cf1e6e.novalocal?timeout=10s\": dial tcp 172.24.4.127:6443: connect: connection refused" interval="400ms" Jun 25 18:52:57.314308 kubelet[2250]: E0625 18:52:57.313905 2250 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4012-0-0-8-5dd8cf1e6e.novalocal\" not found" Jun 25 18:52:57.420804 kubelet[2250]: I0625 18:52:57.420747 2250 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012-0-0-8-5dd8cf1e6e.novalocal" Jun 25 18:52:57.421704 kubelet[2250]: E0625 18:52:57.421638 2250 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.127:6443/api/v1/nodes\": dial tcp 172.24.4.127:6443: connect: connection refused" node="ci-4012-0-0-8-5dd8cf1e6e.novalocal" Jun 25 18:52:57.488898 kubelet[2250]: I0625 18:52:57.488834 2250 topology_manager.go:215] "Topology Admit Handler" podUID="98c01be137005d7bfbc69675fc1e2994" podNamespace="kube-system" podName="kube-apiserver-ci-4012-0-0-8-5dd8cf1e6e.novalocal" Jun 25 18:52:57.492197 kubelet[2250]: I0625 18:52:57.491914 2250 topology_manager.go:215] "Topology Admit Handler" podUID="ce494960f443df246931880db9622448" podNamespace="kube-system" podName="kube-controller-manager-ci-4012-0-0-8-5dd8cf1e6e.novalocal" Jun 25 18:52:57.495474 kubelet[2250]: I0625 18:52:57.495428 2250 topology_manager.go:215] "Topology Admit Handler" podUID="3dd9a2a0202433a9cfccca4ae1e10259" podNamespace="kube-system" podName="kube-scheduler-ci-4012-0-0-8-5dd8cf1e6e.novalocal" Jun 25 18:52:57.503768 kubelet[2250]: I0625 18:52:57.503169 2250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce494960f443df246931880db9622448-flexvolume-dir\") pod \"kube-controller-manager-ci-4012-0-0-8-5dd8cf1e6e.novalocal\" (UID: \"ce494960f443df246931880db9622448\") " pod="kube-system/kube-controller-manager-ci-4012-0-0-8-5dd8cf1e6e.novalocal" Jun 25 18:52:57.503768 kubelet[2250]: I0625 18:52:57.503272 2250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce494960f443df246931880db9622448-kubeconfig\") pod \"kube-controller-manager-ci-4012-0-0-8-5dd8cf1e6e.novalocal\" (UID: \"ce494960f443df246931880db9622448\") " pod="kube-system/kube-controller-manager-ci-4012-0-0-8-5dd8cf1e6e.novalocal" Jun 25 18:52:57.503768 kubelet[2250]: I0625 18:52:57.503332 2250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce494960f443df246931880db9622448-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4012-0-0-8-5dd8cf1e6e.novalocal\" (UID: 
\"ce494960f443df246931880db9622448\") " pod="kube-system/kube-controller-manager-ci-4012-0-0-8-5dd8cf1e6e.novalocal" Jun 25 18:52:57.503768 kubelet[2250]: I0625 18:52:57.503382 2250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3dd9a2a0202433a9cfccca4ae1e10259-kubeconfig\") pod \"kube-scheduler-ci-4012-0-0-8-5dd8cf1e6e.novalocal\" (UID: \"3dd9a2a0202433a9cfccca4ae1e10259\") " pod="kube-system/kube-scheduler-ci-4012-0-0-8-5dd8cf1e6e.novalocal" Jun 25 18:52:57.504224 kubelet[2250]: I0625 18:52:57.503425 2250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98c01be137005d7bfbc69675fc1e2994-ca-certs\") pod \"kube-apiserver-ci-4012-0-0-8-5dd8cf1e6e.novalocal\" (UID: \"98c01be137005d7bfbc69675fc1e2994\") " pod="kube-system/kube-apiserver-ci-4012-0-0-8-5dd8cf1e6e.novalocal" Jun 25 18:52:57.504224 kubelet[2250]: I0625 18:52:57.503470 2250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98c01be137005d7bfbc69675fc1e2994-k8s-certs\") pod \"kube-apiserver-ci-4012-0-0-8-5dd8cf1e6e.novalocal\" (UID: \"98c01be137005d7bfbc69675fc1e2994\") " pod="kube-system/kube-apiserver-ci-4012-0-0-8-5dd8cf1e6e.novalocal" Jun 25 18:52:57.504224 kubelet[2250]: I0625 18:52:57.503515 2250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98c01be137005d7bfbc69675fc1e2994-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4012-0-0-8-5dd8cf1e6e.novalocal\" (UID: \"98c01be137005d7bfbc69675fc1e2994\") " pod="kube-system/kube-apiserver-ci-4012-0-0-8-5dd8cf1e6e.novalocal" Jun 25 18:52:57.504224 kubelet[2250]: I0625 18:52:57.503558 2250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce494960f443df246931880db9622448-ca-certs\") pod \"kube-controller-manager-ci-4012-0-0-8-5dd8cf1e6e.novalocal\" (UID: \"ce494960f443df246931880db9622448\") " pod="kube-system/kube-controller-manager-ci-4012-0-0-8-5dd8cf1e6e.novalocal" Jun 25 18:52:57.504224 kubelet[2250]: I0625 18:52:57.503602 2250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce494960f443df246931880db9622448-k8s-certs\") pod \"kube-controller-manager-ci-4012-0-0-8-5dd8cf1e6e.novalocal\" (UID: \"ce494960f443df246931880db9622448\") " pod="kube-system/kube-controller-manager-ci-4012-0-0-8-5dd8cf1e6e.novalocal" Jun 25 18:52:57.513491 systemd[1]: Created slice kubepods-burstable-pod98c01be137005d7bfbc69675fc1e2994.slice - libcontainer container kubepods-burstable-pod98c01be137005d7bfbc69675fc1e2994.slice. Jun 25 18:52:57.547970 systemd[1]: Created slice kubepods-burstable-podce494960f443df246931880db9622448.slice - libcontainer container kubepods-burstable-podce494960f443df246931880db9622448.slice. Jun 25 18:52:57.568578 systemd[1]: Created slice kubepods-burstable-pod3dd9a2a0202433a9cfccca4ae1e10259.slice - libcontainer container kubepods-burstable-pod3dd9a2a0202433a9cfccca4ae1e10259.slice. 
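Each reconciler entry above identifies a volume by a UniqueName of the form <plugin>/<podUID>-<volumeName>, e.g. kubernetes.io/host-path/ce494960f443df246931880db9622448-flexvolume-dir. The short sketch below pulls those fields apart for the host-path entries in this log; it is an illustrative heuristic, not a kubelet API, and the split on the first '-' only works because these static-pod UIDs contain no dashes.

    // uniquename.go - splits the volume UniqueName strings seen in the
    // reconciler entries above. Illustrative only; the first-'-' split relies
    // on the dash-free static-pod UIDs shown in this log.
    package main

    import (
        "fmt"
        "strings"
    )

    func parseUniqueName(name string) (plugin, podUID, volume string) {
        i := strings.LastIndex(name, "/")
        if i < 0 {
            return "", "", name
        }
        plugin, rest := name[:i], name[i+1:]
        podUID, volume, _ = strings.Cut(rest, "-")
        return plugin, podUID, volume
    }

    func main() {
        for _, n := range []string{
            "kubernetes.io/host-path/ce494960f443df246931880db9622448-flexvolume-dir",
            "kubernetes.io/host-path/98c01be137005d7bfbc69675fc1e2994-ca-certs",
            "kubernetes.io/host-path/3dd9a2a0202433a9cfccca4ae1e10259-kubeconfig",
        } {
            p, uid, vol := parseUniqueName(n)
            fmt.Printf("plugin=%s podUID=%s volume=%s\n", p, uid, vol)
        }
    }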
Jun 25 18:52:57.714151 kubelet[2250]: E0625 18:52:57.714043 2250 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012-0-0-8-5dd8cf1e6e.novalocal?timeout=10s\": dial tcp 172.24.4.127:6443: connect: connection refused" interval="800ms" Jun 25 18:52:57.825072 kubelet[2250]: I0625 18:52:57.825006 2250 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012-0-0-8-5dd8cf1e6e.novalocal" Jun 25 18:52:57.825661 kubelet[2250]: E0625 18:52:57.825526 2250 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.127:6443/api/v1/nodes\": dial tcp 172.24.4.127:6443: connect: connection refused" node="ci-4012-0-0-8-5dd8cf1e6e.novalocal" Jun 25 18:52:57.841405 containerd[1448]: time="2024-06-25T18:52:57.841113073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4012-0-0-8-5dd8cf1e6e.novalocal,Uid:98c01be137005d7bfbc69675fc1e2994,Namespace:kube-system,Attempt:0,}" Jun 25 18:52:57.855257 containerd[1448]: time="2024-06-25T18:52:57.854534520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4012-0-0-8-5dd8cf1e6e.novalocal,Uid:ce494960f443df246931880db9622448,Namespace:kube-system,Attempt:0,}" Jun 25 18:52:57.874011 containerd[1448]: time="2024-06-25T18:52:57.873787292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4012-0-0-8-5dd8cf1e6e.novalocal,Uid:3dd9a2a0202433a9cfccca4ae1e10259,Namespace:kube-system,Attempt:0,}" Jun 25 18:52:58.031194 kubelet[2250]: W0625 18:52:58.031013 2250 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.127:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.127:6443: connect: connection refused Jun 25 18:52:58.031194 kubelet[2250]: E0625 18:52:58.031140 2250 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.127:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.127:6443: connect: connection refused Jun 25 18:52:58.459971 kubelet[2250]: W0625 18:52:58.458799 2250 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.127:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.127:6443: connect: connection refused Jun 25 18:52:58.459971 kubelet[2250]: E0625 18:52:58.458992 2250 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.127:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.127:6443: connect: connection refused Jun 25 18:52:58.501690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1337891576.mount: Deactivated successfully. 
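While the apiserver endpoint keeps refusing connections, the "Failed to ensure lease exists, will retry" interval visibly doubles across these entries: 200ms, 400ms, 800ms, then 1.6s at the next failure. The sketch below reproduces that doubling-with-a-cap pattern against a plain TCP probe of the logged address; the probe itself and the 7s cap are illustrative assumptions, not the kubelet's actual lease client.

    // lease_backoff.go - mirrors the doubling retry interval seen in the lease
    // controller messages (200ms -> 400ms -> 800ms -> 1.6s). The TCP probe and
    // the 7s cap are assumptions for illustration.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        const endpoint = "172.24.4.127:6443" // apiserver address from the log
        const maxInterval = 7 * time.Second  // assumed upper bound
        interval := 200 * time.Millisecond

        for attempt := 1; attempt <= 5; attempt++ {
            conn, err := net.DialTimeout("tcp", endpoint, 2*time.Second)
            if err == nil {
                conn.Close()
                fmt.Println("apiserver reachable, stopping retries")
                return
            }
            fmt.Printf("attempt %d failed (%v), retrying in %v\n", attempt, err, interval)
            time.Sleep(interval)
            interval *= 2
            if interval > maxInterval {
                interval = maxInterval
            }
        }
    }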
Jun 25 18:52:58.512164 containerd[1448]: time="2024-06-25T18:52:58.511996244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:52:58.515311 containerd[1448]: time="2024-06-25T18:52:58.515154513Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 18:52:58.515862 kubelet[2250]: E0625 18:52:58.515778 2250 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012-0-0-8-5dd8cf1e6e.novalocal?timeout=10s\": dial tcp 172.24.4.127:6443: connect: connection refused" interval="1.6s" Jun 25 18:52:58.516995 containerd[1448]: time="2024-06-25T18:52:58.516802533Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:52:58.519356 containerd[1448]: time="2024-06-25T18:52:58.519267733Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 18:52:58.521129 containerd[1448]: time="2024-06-25T18:52:58.520824995Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:52:58.523112 containerd[1448]: time="2024-06-25T18:52:58.522861375Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jun 25 18:52:58.527776 containerd[1448]: time="2024-06-25T18:52:58.527680388Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:52:58.534709 containerd[1448]: time="2024-06-25T18:52:58.534103084Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 660.150577ms" Jun 25 18:52:58.537513 containerd[1448]: time="2024-06-25T18:52:58.536215242Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:52:58.542450 containerd[1448]: time="2024-06-25T18:52:58.542254306Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 687.528673ms" Jun 25 18:52:58.545693 containerd[1448]: time="2024-06-25T18:52:58.545195360Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 
703.867545ms" Jun 25 18:52:58.574581 kubelet[2250]: W0625 18:52:58.574420 2250 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.127:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.127:6443: connect: connection refused Jun 25 18:52:58.574581 kubelet[2250]: E0625 18:52:58.574542 2250 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.127:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.127:6443: connect: connection refused Jun 25 18:52:58.577367 kubelet[2250]: W0625 18:52:58.577246 2250 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.127:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012-0-0-8-5dd8cf1e6e.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.127:6443: connect: connection refused Jun 25 18:52:58.577501 kubelet[2250]: E0625 18:52:58.577383 2250 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.127:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012-0-0-8-5dd8cf1e6e.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.127:6443: connect: connection refused Jun 25 18:52:58.629658 kubelet[2250]: I0625 18:52:58.629611 2250 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012-0-0-8-5dd8cf1e6e.novalocal" Jun 25 18:52:58.630844 kubelet[2250]: E0625 18:52:58.630789 2250 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.127:6443/api/v1/nodes\": dial tcp 172.24.4.127:6443: connect: connection refused" node="ci-4012-0-0-8-5dd8cf1e6e.novalocal" Jun 25 18:52:58.769225 containerd[1448]: time="2024-06-25T18:52:58.764497191Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:52:58.769225 containerd[1448]: time="2024-06-25T18:52:58.764550174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:52:58.769225 containerd[1448]: time="2024-06-25T18:52:58.764567298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:52:58.769225 containerd[1448]: time="2024-06-25T18:52:58.764580283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:52:58.782883 containerd[1448]: time="2024-06-25T18:52:58.782704088Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:52:58.784821 containerd[1448]: time="2024-06-25T18:52:58.782987183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:52:58.784821 containerd[1448]: time="2024-06-25T18:52:58.783890082Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:52:58.784821 containerd[1448]: time="2024-06-25T18:52:58.783911955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:52:58.793892 containerd[1448]: time="2024-06-25T18:52:58.792168574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:52:58.794130 containerd[1448]: time="2024-06-25T18:52:58.793912021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:52:58.794374 containerd[1448]: time="2024-06-25T18:52:58.794282356Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:52:58.794458 containerd[1448]: time="2024-06-25T18:52:58.794396148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:52:58.796504 systemd[1]: Started cri-containerd-a1434fde5a28700e13c80cd52c97f733ab1f0889a09ee5c933af402751821cd7.scope - libcontainer container a1434fde5a28700e13c80cd52c97f733ab1f0889a09ee5c933af402751821cd7. Jun 25 18:52:58.818107 systemd[1]: Started cri-containerd-1593b4b9aa4864aff7f60fade287e9166960fec9984abf149a073731b4b5fbc6.scope - libcontainer container 1593b4b9aa4864aff7f60fade287e9166960fec9984abf149a073731b4b5fbc6. Jun 25 18:52:58.823381 systemd[1]: Started cri-containerd-d7c0971fb5e84192fd0a07cab98f1622fa5382996a7fe1874757c154279ebd81.scope - libcontainer container d7c0971fb5e84192fd0a07cab98f1622fa5382996a7fe1874757c154279ebd81. Jun 25 18:52:58.886581 containerd[1448]: time="2024-06-25T18:52:58.884188551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4012-0-0-8-5dd8cf1e6e.novalocal,Uid:3dd9a2a0202433a9cfccca4ae1e10259,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1434fde5a28700e13c80cd52c97f733ab1f0889a09ee5c933af402751821cd7\"" Jun 25 18:52:58.891064 containerd[1448]: time="2024-06-25T18:52:58.890563874Z" level=info msg="CreateContainer within sandbox \"a1434fde5a28700e13c80cd52c97f733ab1f0889a09ee5c933af402751821cd7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 25 18:52:58.925285 containerd[1448]: time="2024-06-25T18:52:58.925155757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4012-0-0-8-5dd8cf1e6e.novalocal,Uid:ce494960f443df246931880db9622448,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7c0971fb5e84192fd0a07cab98f1622fa5382996a7fe1874757c154279ebd81\"" Jun 25 18:52:58.929247 containerd[1448]: time="2024-06-25T18:52:58.929217637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4012-0-0-8-5dd8cf1e6e.novalocal,Uid:98c01be137005d7bfbc69675fc1e2994,Namespace:kube-system,Attempt:0,} returns sandbox id \"1593b4b9aa4864aff7f60fade287e9166960fec9984abf149a073731b4b5fbc6\"" Jun 25 18:52:58.931084 containerd[1448]: time="2024-06-25T18:52:58.931046460Z" level=info msg="CreateContainer within sandbox \"d7c0971fb5e84192fd0a07cab98f1622fa5382996a7fe1874757c154279ebd81\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 25 18:52:58.931725 containerd[1448]: time="2024-06-25T18:52:58.931697577Z" level=info msg="CreateContainer within sandbox \"a1434fde5a28700e13c80cd52c97f733ab1f0889a09ee5c933af402751821cd7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"42698c80ec3fabd5f7536c6992a78bc0b80c9ffececaed483ac273f2fbd3aa90\"" Jun 25 18:52:58.932606 containerd[1448]: time="2024-06-25T18:52:58.932583343Z" 
level=info msg="StartContainer for \"42698c80ec3fabd5f7536c6992a78bc0b80c9ffececaed483ac273f2fbd3aa90\"" Jun 25 18:52:58.933835 containerd[1448]: time="2024-06-25T18:52:58.933798173Z" level=info msg="CreateContainer within sandbox \"1593b4b9aa4864aff7f60fade287e9166960fec9984abf149a073731b4b5fbc6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 25 18:52:58.961555 containerd[1448]: time="2024-06-25T18:52:58.961512340Z" level=info msg="CreateContainer within sandbox \"d7c0971fb5e84192fd0a07cab98f1622fa5382996a7fe1874757c154279ebd81\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8083271eb8d42e942fd88b5ee37591ffb308ce2b6d6a09e76ffb5eec1ae325bb\"" Jun 25 18:52:58.962755 containerd[1448]: time="2024-06-25T18:52:58.962728122Z" level=info msg="StartContainer for \"8083271eb8d42e942fd88b5ee37591ffb308ce2b6d6a09e76ffb5eec1ae325bb\"" Jun 25 18:52:58.966161 systemd[1]: Started cri-containerd-42698c80ec3fabd5f7536c6992a78bc0b80c9ffececaed483ac273f2fbd3aa90.scope - libcontainer container 42698c80ec3fabd5f7536c6992a78bc0b80c9ffececaed483ac273f2fbd3aa90. Jun 25 18:52:58.974666 containerd[1448]: time="2024-06-25T18:52:58.974622980Z" level=info msg="CreateContainer within sandbox \"1593b4b9aa4864aff7f60fade287e9166960fec9984abf149a073731b4b5fbc6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d8764541557492d27d96915bcafebe18041301752fd16488f4cbef2fc58fcf41\"" Jun 25 18:52:58.975390 containerd[1448]: time="2024-06-25T18:52:58.975364704Z" level=info msg="StartContainer for \"d8764541557492d27d96915bcafebe18041301752fd16488f4cbef2fc58fcf41\"" Jun 25 18:52:59.007666 systemd[1]: Started cri-containerd-8083271eb8d42e942fd88b5ee37591ffb308ce2b6d6a09e76ffb5eec1ae325bb.scope - libcontainer container 8083271eb8d42e942fd88b5ee37591ffb308ce2b6d6a09e76ffb5eec1ae325bb. Jun 25 18:52:59.023270 systemd[1]: Started cri-containerd-d8764541557492d27d96915bcafebe18041301752fd16488f4cbef2fc58fcf41.scope - libcontainer container d8764541557492d27d96915bcafebe18041301752fd16488f4cbef2fc58fcf41. 
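For each static pod the containerd messages follow the same three-step sequence: run a pause sandbox ("RunPodSandbox ... returns sandbox id"), create the container inside it ("CreateContainer within sandbox ... returns container id"), then start it by id ("StartContainer ... returns successfully"). The sketch below models that ordering against a hypothetical runtime interface; the interface and its method signatures are placeholders for illustration, not the real CRI gRPC client.

    // start_sequence.go - schematic of the sandbox -> create -> start flow
    // recorded above. The runtime interface is a hypothetical stand-in, not
    // the actual CRI client.
    package main

    import "fmt"

    type runtime interface {
        RunPodSandbox(podName string) (sandboxID string, err error)
        CreateContainer(sandboxID, containerName string) (containerID string, err error)
        StartContainer(containerID string) error
    }

    // startStaticPod drives the same order of operations the log shows for
    // kube-apiserver, kube-controller-manager and kube-scheduler.
    func startStaticPod(rt runtime, podName, containerName string) error {
        sb, err := rt.RunPodSandbox(podName)
        if err != nil {
            return fmt.Errorf("run sandbox for %s: %w", podName, err)
        }
        ctr, err := rt.CreateContainer(sb, containerName)
        if err != nil {
            return fmt.Errorf("create %s in %s: %w", containerName, sb, err)
        }
        if err := rt.StartContainer(ctr); err != nil {
            return fmt.Errorf("start %s: %w", ctr, err)
        }
        fmt.Printf("%s: sandbox=%s container=%s started\n", podName, sb, ctr)
        return nil
    }

    // fakeRuntime is a stub so the sketch runs standalone.
    type fakeRuntime struct{ n int }

    func (f *fakeRuntime) RunPodSandbox(pod string) (string, error) {
        f.n++
        return fmt.Sprintf("sandbox-%d", f.n), nil
    }
    func (f *fakeRuntime) CreateContainer(sb, name string) (string, error) {
        return sb + "/" + name, nil
    }
    func (f *fakeRuntime) StartContainer(id string) error { return nil }

    func main() {
        rt := &fakeRuntime{}
        for _, pod := range []string{"kube-scheduler", "kube-controller-manager", "kube-apiserver"} {
            if err := startStaticPod(rt, pod, pod); err != nil {
                fmt.Println("error:", err)
            }
        }
    }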
Jun 25 18:52:59.043253 containerd[1448]: time="2024-06-25T18:52:59.042975254Z" level=info msg="StartContainer for \"42698c80ec3fabd5f7536c6992a78bc0b80c9ffececaed483ac273f2fbd3aa90\" returns successfully" Jun 25 18:52:59.082907 containerd[1448]: time="2024-06-25T18:52:59.082648928Z" level=info msg="StartContainer for \"8083271eb8d42e942fd88b5ee37591ffb308ce2b6d6a09e76ffb5eec1ae325bb\" returns successfully" Jun 25 18:52:59.108350 containerd[1448]: time="2024-06-25T18:52:59.108299740Z" level=info msg="StartContainer for \"d8764541557492d27d96915bcafebe18041301752fd16488f4cbef2fc58fcf41\" returns successfully" Jun 25 18:52:59.188122 kubelet[2250]: E0625 18:52:59.188055 2250 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.127:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.127:6443: connect: connection refused Jun 25 18:53:00.233824 kubelet[2250]: I0625 18:53:00.233782 2250 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012-0-0-8-5dd8cf1e6e.novalocal" Jun 25 18:53:01.241452 kubelet[2250]: E0625 18:53:01.241393 2250 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4012-0-0-8-5dd8cf1e6e.novalocal\" not found" node="ci-4012-0-0-8-5dd8cf1e6e.novalocal" Jun 25 18:53:01.283891 kubelet[2250]: E0625 18:53:01.283754 2250 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4012-0-0-8-5dd8cf1e6e.novalocal.17dc540e0c318574 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4012-0-0-8-5dd8cf1e6e.novalocal,UID:ci-4012-0-0-8-5dd8cf1e6e.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4012-0-0-8-5dd8cf1e6e.novalocal,},FirstTimestamp:2024-06-25 18:52:57.059534196 +0000 UTC m=+1.305708838,LastTimestamp:2024-06-25 18:52:57.059534196 +0000 UTC m=+1.305708838,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4012-0-0-8-5dd8cf1e6e.novalocal,}" Jun 25 18:53:01.347150 kubelet[2250]: E0625 18:53:01.346888 2250 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4012-0-0-8-5dd8cf1e6e.novalocal.17dc540e12c80bc9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4012-0-0-8-5dd8cf1e6e.novalocal,UID:ci-4012-0-0-8-5dd8cf1e6e.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4012-0-0-8-5dd8cf1e6e.novalocal,},FirstTimestamp:2024-06-25 18:52:57.170062281 +0000 UTC m=+1.416236923,LastTimestamp:2024-06-25 18:52:57.170062281 +0000 UTC m=+1.416236923,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4012-0-0-8-5dd8cf1e6e.novalocal,}" Jun 25 18:53:01.349693 kubelet[2250]: I0625 18:53:01.349638 2250 kubelet_node_status.go:76] "Successfully registered node" node="ci-4012-0-0-8-5dd8cf1e6e.novalocal" Jun 25 18:53:02.057641 kubelet[2250]: I0625 18:53:02.057590 2250 apiserver.go:52] "Watching apiserver" Jun 25 
18:53:02.099202 kubelet[2250]: I0625 18:53:02.099085 2250 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jun 25 18:53:03.786258 systemd[1]: Reloading requested from client PID 2519 ('systemctl') (unit session-11.scope)... Jun 25 18:53:03.786291 systemd[1]: Reloading... Jun 25 18:53:03.903958 zram_generator::config[2556]: No configuration found. Jun 25 18:53:04.046189 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:53:04.147523 systemd[1]: Reloading finished in 359 ms. Jun 25 18:53:04.193407 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:53:04.202571 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 18:53:04.202890 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:53:04.202956 systemd[1]: kubelet.service: Consumed 1.533s CPU time, 112.0M memory peak, 0B memory swap peak. Jun 25 18:53:04.210096 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:53:04.508218 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:53:04.522401 (kubelet)[2620]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 25 18:53:04.592070 kubelet[2620]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:53:04.593024 kubelet[2620]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 18:53:04.593024 kubelet[2620]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:53:04.593024 kubelet[2620]: I0625 18:53:04.592500 2620 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 18:53:04.615383 kubelet[2620]: I0625 18:53:04.615317 2620 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jun 25 18:53:04.615383 kubelet[2620]: I0625 18:53:04.615340 2620 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 18:53:04.615678 kubelet[2620]: I0625 18:53:04.615521 2620 server.go:927] "Client rotation is on, will bootstrap in background" Jun 25 18:53:04.676747 kubelet[2620]: I0625 18:53:04.674879 2620 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 25 18:53:04.677532 kubelet[2620]: I0625 18:53:04.677482 2620 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 18:53:04.683477 kubelet[2620]: I0625 18:53:04.683439 2620 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 18:53:04.683780 kubelet[2620]: I0625 18:53:04.683610 2620 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 18:53:04.683884 kubelet[2620]: I0625 18:53:04.683646 2620 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4012-0-0-8-5dd8cf1e6e.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 18:53:04.683884 kubelet[2620]: I0625 18:53:04.683877 2620 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 18:53:04.683884 kubelet[2620]: I0625 18:53:04.683889 2620 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 18:53:04.684258 kubelet[2620]: I0625 18:53:04.683923 2620 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:53:04.684258 kubelet[2620]: I0625 18:53:04.684036 2620 kubelet.go:400] "Attempting to sync node with API server" Jun 25 18:53:04.684258 kubelet[2620]: I0625 18:53:04.684049 2620 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 18:53:04.684258 kubelet[2620]: I0625 18:53:04.684069 2620 kubelet.go:312] "Adding apiserver pod source" Jun 25 18:53:04.684258 kubelet[2620]: I0625 18:53:04.684085 2620 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 18:53:04.688859 kubelet[2620]: I0625 18:53:04.688105 2620 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Jun 25 18:53:04.691321 kubelet[2620]: I0625 18:53:04.691290 2620 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 25 18:53:04.694968 kubelet[2620]: I0625 18:53:04.691893 2620 server.go:1264] "Started kubelet" Jun 25 18:53:04.694968 kubelet[2620]: I0625 18:53:04.694336 2620 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 18:53:04.701002 kubelet[2620]: E0625 18:53:04.699762 2620 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 18:53:04.701002 kubelet[2620]: I0625 18:53:04.699887 2620 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 18:53:04.701269 kubelet[2620]: I0625 18:53:04.701111 2620 server.go:455] "Adding debug handlers to kubelet server" Jun 25 18:53:04.705987 kubelet[2620]: I0625 18:53:04.704841 2620 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 25 18:53:04.705987 kubelet[2620]: I0625 18:53:04.705044 2620 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 18:53:04.709637 kubelet[2620]: I0625 18:53:04.709566 2620 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 18:53:04.710853 kubelet[2620]: I0625 18:53:04.710081 2620 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jun 25 18:53:04.713079 kubelet[2620]: I0625 18:53:04.712086 2620 reconciler.go:26] "Reconciler: start to sync state" Jun 25 18:53:04.719208 kubelet[2620]: I0625 18:53:04.719141 2620 factory.go:221] Registration of the systemd container factory successfully Jun 25 18:53:04.719831 kubelet[2620]: I0625 18:53:04.719650 2620 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 25 18:53:04.732414 kubelet[2620]: I0625 18:53:04.732334 2620 factory.go:221] Registration of the containerd container factory successfully Jun 25 18:53:04.740227 kubelet[2620]: I0625 18:53:04.740149 2620 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 18:53:04.763542 kubelet[2620]: I0625 18:53:04.762394 2620 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 25 18:53:04.763542 kubelet[2620]: I0625 18:53:04.762430 2620 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 18:53:04.763542 kubelet[2620]: I0625 18:53:04.762464 2620 kubelet.go:2337] "Starting kubelet main sync loop" Jun 25 18:53:04.763941 kubelet[2620]: E0625 18:53:04.762514 2620 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 18:53:04.816961 kubelet[2620]: I0625 18:53:04.816648 2620 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012-0-0-8-5dd8cf1e6e.novalocal" Jun 25 18:53:04.840430 kubelet[2620]: I0625 18:53:04.840224 2620 kubelet_node_status.go:112] "Node was previously registered" node="ci-4012-0-0-8-5dd8cf1e6e.novalocal" Jun 25 18:53:04.840430 kubelet[2620]: I0625 18:53:04.840316 2620 kubelet_node_status.go:76] "Successfully registered node" node="ci-4012-0-0-8-5dd8cf1e6e.novalocal" Jun 25 18:53:04.845841 kubelet[2620]: I0625 18:53:04.845139 2620 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 18:53:04.845841 kubelet[2620]: I0625 18:53:04.845221 2620 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 18:53:04.845841 kubelet[2620]: I0625 18:53:04.845241 2620 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:53:04.845841 kubelet[2620]: I0625 18:53:04.845404 2620 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 25 18:53:04.845841 kubelet[2620]: I0625 18:53:04.845415 2620 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 25 18:53:04.845841 kubelet[2620]: I0625 18:53:04.845434 2620 policy_none.go:49] "None policy: Start" Jun 25 18:53:04.847545 kubelet[2620]: I0625 18:53:04.846843 2620 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 25 18:53:04.847545 kubelet[2620]: I0625 18:53:04.846867 2620 state_mem.go:35] "Initializing new in-memory state store" Jun 25 18:53:04.847987 kubelet[2620]: I0625 18:53:04.847840 2620 state_mem.go:75] "Updated machine memory state" Jun 25 18:53:04.859706 kubelet[2620]: I0625 18:53:04.858430 2620 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 18:53:04.859706 kubelet[2620]: I0625 18:53:04.858748 2620 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 25 18:53:04.859706 kubelet[2620]: I0625 18:53:04.858832 2620 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 18:53:04.866097 kubelet[2620]: I0625 18:53:04.863856 2620 topology_manager.go:215] "Topology Admit Handler" podUID="98c01be137005d7bfbc69675fc1e2994" podNamespace="kube-system" podName="kube-apiserver-ci-4012-0-0-8-5dd8cf1e6e.novalocal" Jun 25 18:53:04.866860 kubelet[2620]: I0625 18:53:04.863923 2620 topology_manager.go:215] "Topology Admit Handler" podUID="ce494960f443df246931880db9622448" podNamespace="kube-system" podName="kube-controller-manager-ci-4012-0-0-8-5dd8cf1e6e.novalocal" Jun 25 18:53:04.867388 kubelet[2620]: I0625 18:53:04.867177 2620 topology_manager.go:215] "Topology Admit Handler" podUID="3dd9a2a0202433a9cfccca4ae1e10259" podNamespace="kube-system" podName="kube-scheduler-ci-4012-0-0-8-5dd8cf1e6e.novalocal" Jun 25 18:53:04.896736 sudo[2650]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jun 25 18:53:04.897109 sudo[2650]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jun 25 18:53:04.901147 
kubelet[2620]: W0625 18:53:04.900031 2620 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 18:53:04.903606 kubelet[2620]: W0625 18:53:04.903587 2620 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 18:53:04.904083 kubelet[2620]: W0625 18:53:04.903812 2620 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 18:53:04.915776 kubelet[2620]: I0625 18:53:04.915738 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98c01be137005d7bfbc69675fc1e2994-ca-certs\") pod \"kube-apiserver-ci-4012-0-0-8-5dd8cf1e6e.novalocal\" (UID: \"98c01be137005d7bfbc69675fc1e2994\") " pod="kube-system/kube-apiserver-ci-4012-0-0-8-5dd8cf1e6e.novalocal" Jun 25 18:53:04.916326 kubelet[2620]: I0625 18:53:04.916285 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98c01be137005d7bfbc69675fc1e2994-k8s-certs\") pod \"kube-apiserver-ci-4012-0-0-8-5dd8cf1e6e.novalocal\" (UID: \"98c01be137005d7bfbc69675fc1e2994\") " pod="kube-system/kube-apiserver-ci-4012-0-0-8-5dd8cf1e6e.novalocal" Jun 25 18:53:04.916422 kubelet[2620]: I0625 18:53:04.916334 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce494960f443df246931880db9622448-flexvolume-dir\") pod \"kube-controller-manager-ci-4012-0-0-8-5dd8cf1e6e.novalocal\" (UID: \"ce494960f443df246931880db9622448\") " pod="kube-system/kube-controller-manager-ci-4012-0-0-8-5dd8cf1e6e.novalocal" Jun 25 18:53:04.916422 kubelet[2620]: I0625 18:53:04.916362 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce494960f443df246931880db9622448-k8s-certs\") pod \"kube-controller-manager-ci-4012-0-0-8-5dd8cf1e6e.novalocal\" (UID: \"ce494960f443df246931880db9622448\") " pod="kube-system/kube-controller-manager-ci-4012-0-0-8-5dd8cf1e6e.novalocal" Jun 25 18:53:04.916422 kubelet[2620]: I0625 18:53:04.916404 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3dd9a2a0202433a9cfccca4ae1e10259-kubeconfig\") pod \"kube-scheduler-ci-4012-0-0-8-5dd8cf1e6e.novalocal\" (UID: \"3dd9a2a0202433a9cfccca4ae1e10259\") " pod="kube-system/kube-scheduler-ci-4012-0-0-8-5dd8cf1e6e.novalocal" Jun 25 18:53:04.916556 kubelet[2620]: I0625 18:53:04.916430 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98c01be137005d7bfbc69675fc1e2994-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4012-0-0-8-5dd8cf1e6e.novalocal\" (UID: \"98c01be137005d7bfbc69675fc1e2994\") " pod="kube-system/kube-apiserver-ci-4012-0-0-8-5dd8cf1e6e.novalocal" Jun 25 18:53:04.916556 kubelet[2620]: I0625 18:53:04.916453 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/ce494960f443df246931880db9622448-ca-certs\") pod \"kube-controller-manager-ci-4012-0-0-8-5dd8cf1e6e.novalocal\" (UID: \"ce494960f443df246931880db9622448\") " pod="kube-system/kube-controller-manager-ci-4012-0-0-8-5dd8cf1e6e.novalocal" Jun 25 18:53:04.916556 kubelet[2620]: I0625 18:53:04.916473 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce494960f443df246931880db9622448-kubeconfig\") pod \"kube-controller-manager-ci-4012-0-0-8-5dd8cf1e6e.novalocal\" (UID: \"ce494960f443df246931880db9622448\") " pod="kube-system/kube-controller-manager-ci-4012-0-0-8-5dd8cf1e6e.novalocal" Jun 25 18:53:04.916556 kubelet[2620]: I0625 18:53:04.916495 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce494960f443df246931880db9622448-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4012-0-0-8-5dd8cf1e6e.novalocal\" (UID: \"ce494960f443df246931880db9622448\") " pod="kube-system/kube-controller-manager-ci-4012-0-0-8-5dd8cf1e6e.novalocal" Jun 25 18:53:05.685468 kubelet[2620]: I0625 18:53:05.685395 2620 apiserver.go:52] "Watching apiserver" Jun 25 18:53:05.710654 kubelet[2620]: I0625 18:53:05.710464 2620 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jun 25 18:53:05.745089 sudo[2650]: pam_unix(sudo:session): session closed for user root Jun 25 18:53:05.815877 kubelet[2620]: W0625 18:53:05.815798 2620 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 18:53:05.816543 kubelet[2620]: E0625 18:53:05.816026 2620 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4012-0-0-8-5dd8cf1e6e.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4012-0-0-8-5dd8cf1e6e.novalocal" Jun 25 18:53:05.836564 kubelet[2620]: I0625 18:53:05.836426 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4012-0-0-8-5dd8cf1e6e.novalocal" podStartSLOduration=1.836410055 podStartE2EDuration="1.836410055s" podCreationTimestamp="2024-06-25 18:53:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:53:05.834273919 +0000 UTC m=+1.302683031" watchObservedRunningTime="2024-06-25 18:53:05.836410055 +0000 UTC m=+1.304819157" Jun 25 18:53:05.846366 kubelet[2620]: I0625 18:53:05.845920 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4012-0-0-8-5dd8cf1e6e.novalocal" podStartSLOduration=1.845903209 podStartE2EDuration="1.845903209s" podCreationTimestamp="2024-06-25 18:53:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:53:05.845447855 +0000 UTC m=+1.313856967" watchObservedRunningTime="2024-06-25 18:53:05.845903209 +0000 UTC m=+1.314312311" Jun 25 18:53:08.701160 sudo[1701]: pam_unix(sudo:session): session closed for user root Jun 25 18:53:08.910721 sshd[1698]: pam_unix(sshd:session): session closed for user core Jun 25 18:53:08.917210 systemd[1]: sshd@8-172.24.4.127:22-172.24.4.1:35812.service: Deactivated successfully. 
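In the pod_startup_latency_tracker entries above, the reported podStartSLOduration lines up with simple timestamp arithmetic: for the scheduler pod, 18:53:05.836410055 (watchObservedRunningTime) minus 18:53:04 (podCreationTimestamp) is exactly 1.836410055s, and the controller-manager entry checks out the same way, with no image-pull interval involved. The sketch below redoes that subtraction from the logged strings; it is arithmetic over the log fields, not the tracker's actual implementation.

    // slo_duration.go - recomputes the scheduler pod's startup duration from
    // the timestamps logged by pod_startup_latency_tracker above. Arithmetic
    // over log fields only; not the tracker's code.
    package main

    import (
        "fmt"
        "time"
    )

    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func main() {
        created, err := time.Parse(layout, "2024-06-25 18:53:04 +0000 UTC")
        if err != nil {
            panic(err)
        }
        watchRunning, err := time.Parse(layout, "2024-06-25 18:53:05.836410055 +0000 UTC")
        if err != nil {
            panic(err)
        }
        // Prints 1.836410055s, matching podStartSLOduration for kube-scheduler.
        fmt.Println("startup duration:", watchRunning.Sub(created))
    }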
Jun 25 18:53:08.921710 systemd[1]: session-11.scope: Deactivated successfully. Jun 25 18:53:08.922370 systemd[1]: session-11.scope: Consumed 7.588s CPU time, 137.1M memory peak, 0B memory swap peak. Jun 25 18:53:08.925382 systemd-logind[1430]: Session 11 logged out. Waiting for processes to exit. Jun 25 18:53:08.928720 systemd-logind[1430]: Removed session 11. Jun 25 18:53:10.835972 kubelet[2620]: I0625 18:53:10.835450 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4012-0-0-8-5dd8cf1e6e.novalocal" podStartSLOduration=6.8353868250000005 podStartE2EDuration="6.835386825s" podCreationTimestamp="2024-06-25 18:53:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:53:05.855849362 +0000 UTC m=+1.324258474" watchObservedRunningTime="2024-06-25 18:53:10.835386825 +0000 UTC m=+6.303795977" Jun 25 18:53:18.108882 kubelet[2620]: I0625 18:53:18.108855 2620 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 25 18:53:18.109727 kubelet[2620]: I0625 18:53:18.109353 2620 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 25 18:53:18.109776 containerd[1448]: time="2024-06-25T18:53:18.109186966Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 25 18:53:18.867153 kubelet[2620]: I0625 18:53:18.867059 2620 topology_manager.go:215] "Topology Admit Handler" podUID="a571a20a-5c2c-46fc-9c7b-b23f7474aff2" podNamespace="kube-system" podName="kube-proxy-f4pkc" Jun 25 18:53:18.880610 systemd[1]: Created slice kubepods-besteffort-poda571a20a_5c2c_46fc_9c7b_b23f7474aff2.slice - libcontainer container kubepods-besteffort-poda571a20a_5c2c_46fc_9c7b_b23f7474aff2.slice. 
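Once the node object exists, the kubelet pushes the node's pod CIDR down to the runtime, which is what the "Updating Pod CIDR" and "Updating runtime config through cri with podcidr" entries above record (192.168.0.0/24). The sketch below just parses that CIDR and reports its size with the standard library; it is an illustration of the logged value, not the kubelet's runtime-config call.

    // podcidr.go - parses the pod CIDR pushed to the runtime in the log above
    // (192.168.0.0/24) and reports how many addresses it spans.
    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        _, ipnet, err := net.ParseCIDR("192.168.0.0/24")
        if err != nil {
            panic(err)
        }
        ones, bits := ipnet.Mask.Size()
        total := 1 << (bits - ones) // 256 addresses in a /24
        fmt.Printf("pod CIDR %s: %d addresses (network %s)\n", ipnet, total, ipnet.IP)
        // Comfortably more than the kubelet's default limit of 110 pods per node.
    }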
Jun 25 18:53:18.891386 kubelet[2620]: I0625 18:53:18.890708 2620 topology_manager.go:215] "Topology Admit Handler" podUID="225bfc45-5212-4b33-86be-ccbb0aca6df4" podNamespace="kube-system" podName="cilium-97lh9" Jun 25 18:53:18.896782 kubelet[2620]: W0625 18:53:18.896208 2620 reflector.go:547] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4012-0-0-8-5dd8cf1e6e.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4012-0-0-8-5dd8cf1e6e.novalocal' and this object Jun 25 18:53:18.896782 kubelet[2620]: E0625 18:53:18.896254 2620 reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4012-0-0-8-5dd8cf1e6e.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4012-0-0-8-5dd8cf1e6e.novalocal' and this object Jun 25 18:53:18.896782 kubelet[2620]: W0625 18:53:18.896324 2620 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4012-0-0-8-5dd8cf1e6e.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4012-0-0-8-5dd8cf1e6e.novalocal' and this object Jun 25 18:53:18.896782 kubelet[2620]: E0625 18:53:18.896338 2620 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4012-0-0-8-5dd8cf1e6e.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4012-0-0-8-5dd8cf1e6e.novalocal' and this object Jun 25 18:53:18.896782 kubelet[2620]: W0625 18:53:18.896366 2620 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4012-0-0-8-5dd8cf1e6e.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4012-0-0-8-5dd8cf1e6e.novalocal' and this object Jun 25 18:53:18.897064 kubelet[2620]: E0625 18:53:18.896384 2620 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4012-0-0-8-5dd8cf1e6e.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4012-0-0-8-5dd8cf1e6e.novalocal' and this object Jun 25 18:53:18.903605 systemd[1]: Created slice kubepods-burstable-pod225bfc45_5212_4b33_86be_ccbb0aca6df4.slice - libcontainer container kubepods-burstable-pod225bfc45_5212_4b33_86be_ccbb0aca6df4.slice. 
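Comparing the pod UIDs admitted above with the slice units systemd creates shows the per-pod naming rule in use here: kubepods-<qos>-pod<uid>.slice, with the dashes in the UID rewritten as underscores so they are not read as slice-hierarchy separators (a571a20a-5c2c-... becomes kubepods-besteffort-poda571a20a_5c2c_....slice, and 225bfc45-5212-... becomes kubepods-burstable-pod225bfc45_5212_....slice). A small sketch of that derivation, inferred from the journal entries rather than taken from kubelet source:

    // pod_slice.go - derives the per-pod systemd slice names observed in the
    // log from QoS class and pod UID. The rule is inferred from the journal,
    // not copied from kubelet source, and only covers the QoS classes shown.
    package main

    import (
        "fmt"
        "strings"
    )

    func podSlice(qos, uid string) string {
        // Dashes in the UID would be interpreted as nesting by systemd,
        // so they are replaced with underscores in the unit name.
        escaped := strings.ReplaceAll(uid, "-", "_")
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, escaped)
    }

    func main() {
        fmt.Println(podSlice("besteffort", "a571a20a-5c2c-46fc-9c7b-b23f7474aff2"))
        // kubepods-besteffort-poda571a20a_5c2c_46fc_9c7b_b23f7474aff2.slice
        fmt.Println(podSlice("burstable", "225bfc45-5212-4b33-86be-ccbb0aca6df4"))
        // kubepods-burstable-pod225bfc45_5212_4b33_86be_ccbb0aca6df4.slice
    }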
Jun 25 18:53:18.913761 kubelet[2620]: I0625 18:53:18.913718 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a571a20a-5c2c-46fc-9c7b-b23f7474aff2-kube-proxy\") pod \"kube-proxy-f4pkc\" (UID: \"a571a20a-5c2c-46fc-9c7b-b23f7474aff2\") " pod="kube-system/kube-proxy-f4pkc" Jun 25 18:53:18.913907 kubelet[2620]: I0625 18:53:18.913824 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/225bfc45-5212-4b33-86be-ccbb0aca6df4-hostproc\") pod \"cilium-97lh9\" (UID: \"225bfc45-5212-4b33-86be-ccbb0aca6df4\") " pod="kube-system/cilium-97lh9" Jun 25 18:53:18.913907 kubelet[2620]: I0625 18:53:18.913885 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/225bfc45-5212-4b33-86be-ccbb0aca6df4-host-proc-sys-kernel\") pod \"cilium-97lh9\" (UID: \"225bfc45-5212-4b33-86be-ccbb0aca6df4\") " pod="kube-system/cilium-97lh9" Jun 25 18:53:18.913992 kubelet[2620]: I0625 18:53:18.913911 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/225bfc45-5212-4b33-86be-ccbb0aca6df4-xtables-lock\") pod \"cilium-97lh9\" (UID: \"225bfc45-5212-4b33-86be-ccbb0aca6df4\") " pod="kube-system/cilium-97lh9" Jun 25 18:53:18.914031 kubelet[2620]: I0625 18:53:18.914000 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a571a20a-5c2c-46fc-9c7b-b23f7474aff2-xtables-lock\") pod \"kube-proxy-f4pkc\" (UID: \"a571a20a-5c2c-46fc-9c7b-b23f7474aff2\") " pod="kube-system/kube-proxy-f4pkc" Jun 25 18:53:18.914031 kubelet[2620]: I0625 18:53:18.914019 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/225bfc45-5212-4b33-86be-ccbb0aca6df4-etc-cni-netd\") pod \"cilium-97lh9\" (UID: \"225bfc45-5212-4b33-86be-ccbb0aca6df4\") " pod="kube-system/cilium-97lh9" Jun 25 18:53:18.914089 kubelet[2620]: I0625 18:53:18.914038 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/225bfc45-5212-4b33-86be-ccbb0aca6df4-cilium-run\") pod \"cilium-97lh9\" (UID: \"225bfc45-5212-4b33-86be-ccbb0aca6df4\") " pod="kube-system/cilium-97lh9" Jun 25 18:53:18.914089 kubelet[2620]: I0625 18:53:18.914056 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/225bfc45-5212-4b33-86be-ccbb0aca6df4-cni-path\") pod \"cilium-97lh9\" (UID: \"225bfc45-5212-4b33-86be-ccbb0aca6df4\") " pod="kube-system/cilium-97lh9" Jun 25 18:53:18.914089 kubelet[2620]: I0625 18:53:18.914074 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/225bfc45-5212-4b33-86be-ccbb0aca6df4-clustermesh-secrets\") pod \"cilium-97lh9\" (UID: \"225bfc45-5212-4b33-86be-ccbb0aca6df4\") " pod="kube-system/cilium-97lh9" Jun 25 18:53:18.914173 kubelet[2620]: I0625 18:53:18.914095 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/225bfc45-5212-4b33-86be-ccbb0aca6df4-host-proc-sys-net\") pod \"cilium-97lh9\" (UID: \"225bfc45-5212-4b33-86be-ccbb0aca6df4\") " pod="kube-system/cilium-97lh9" Jun 25 18:53:18.914173 kubelet[2620]: I0625 18:53:18.914114 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a571a20a-5c2c-46fc-9c7b-b23f7474aff2-lib-modules\") pod \"kube-proxy-f4pkc\" (UID: \"a571a20a-5c2c-46fc-9c7b-b23f7474aff2\") " pod="kube-system/kube-proxy-f4pkc" Jun 25 18:53:18.914173 kubelet[2620]: I0625 18:53:18.914140 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/225bfc45-5212-4b33-86be-ccbb0aca6df4-hubble-tls\") pod \"cilium-97lh9\" (UID: \"225bfc45-5212-4b33-86be-ccbb0aca6df4\") " pod="kube-system/cilium-97lh9" Jun 25 18:53:18.914173 kubelet[2620]: I0625 18:53:18.914159 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/225bfc45-5212-4b33-86be-ccbb0aca6df4-cilium-cgroup\") pod \"cilium-97lh9\" (UID: \"225bfc45-5212-4b33-86be-ccbb0aca6df4\") " pod="kube-system/cilium-97lh9" Jun 25 18:53:18.914273 kubelet[2620]: I0625 18:53:18.914176 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/225bfc45-5212-4b33-86be-ccbb0aca6df4-lib-modules\") pod \"cilium-97lh9\" (UID: \"225bfc45-5212-4b33-86be-ccbb0aca6df4\") " pod="kube-system/cilium-97lh9" Jun 25 18:53:18.914273 kubelet[2620]: I0625 18:53:18.914202 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4h7k\" (UniqueName: \"kubernetes.io/projected/a571a20a-5c2c-46fc-9c7b-b23f7474aff2-kube-api-access-r4h7k\") pod \"kube-proxy-f4pkc\" (UID: \"a571a20a-5c2c-46fc-9c7b-b23f7474aff2\") " pod="kube-system/kube-proxy-f4pkc" Jun 25 18:53:18.914273 kubelet[2620]: I0625 18:53:18.914220 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/225bfc45-5212-4b33-86be-ccbb0aca6df4-bpf-maps\") pod \"cilium-97lh9\" (UID: \"225bfc45-5212-4b33-86be-ccbb0aca6df4\") " pod="kube-system/cilium-97lh9" Jun 25 18:53:18.914273 kubelet[2620]: I0625 18:53:18.914238 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/225bfc45-5212-4b33-86be-ccbb0aca6df4-cilium-config-path\") pod \"cilium-97lh9\" (UID: \"225bfc45-5212-4b33-86be-ccbb0aca6df4\") " pod="kube-system/cilium-97lh9" Jun 25 18:53:18.914273 kubelet[2620]: I0625 18:53:18.914256 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49fsh\" (UniqueName: \"kubernetes.io/projected/225bfc45-5212-4b33-86be-ccbb0aca6df4-kube-api-access-49fsh\") pod \"cilium-97lh9\" (UID: \"225bfc45-5212-4b33-86be-ccbb0aca6df4\") " pod="kube-system/cilium-97lh9" Jun 25 18:53:19.191461 containerd[1448]: time="2024-06-25T18:53:19.187999706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f4pkc,Uid:a571a20a-5c2c-46fc-9c7b-b23f7474aff2,Namespace:kube-system,Attempt:0,}" Jun 25 18:53:19.263132 containerd[1448]: time="2024-06-25T18:53:19.260863424Z" level=info msg="loading plugin 
\"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:53:19.264145 containerd[1448]: time="2024-06-25T18:53:19.263264236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:53:19.264145 containerd[1448]: time="2024-06-25T18:53:19.263375179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:53:19.264145 containerd[1448]: time="2024-06-25T18:53:19.263992465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:53:19.283629 kubelet[2620]: I0625 18:53:19.283533 2620 topology_manager.go:215] "Topology Admit Handler" podUID="c5a152c9-d3fc-48af-aec7-8affbd9e2cf3" podNamespace="kube-system" podName="cilium-operator-599987898-l4v5g" Jun 25 18:53:19.316308 systemd[1]: Started cri-containerd-b14b343a3f36122c53db9546fca922df91ed2e142189be5f73f4f1b74a61c54b.scope - libcontainer container b14b343a3f36122c53db9546fca922df91ed2e142189be5f73f4f1b74a61c54b. Jun 25 18:53:19.318440 kubelet[2620]: I0625 18:53:19.317790 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c5a152c9-d3fc-48af-aec7-8affbd9e2cf3-cilium-config-path\") pod \"cilium-operator-599987898-l4v5g\" (UID: \"c5a152c9-d3fc-48af-aec7-8affbd9e2cf3\") " pod="kube-system/cilium-operator-599987898-l4v5g" Jun 25 18:53:19.318440 kubelet[2620]: I0625 18:53:19.317852 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppjwv\" (UniqueName: \"kubernetes.io/projected/c5a152c9-d3fc-48af-aec7-8affbd9e2cf3-kube-api-access-ppjwv\") pod \"cilium-operator-599987898-l4v5g\" (UID: \"c5a152c9-d3fc-48af-aec7-8affbd9e2cf3\") " pod="kube-system/cilium-operator-599987898-l4v5g" Jun 25 18:53:19.318914 systemd[1]: Created slice kubepods-besteffort-podc5a152c9_d3fc_48af_aec7_8affbd9e2cf3.slice - libcontainer container kubepods-besteffort-podc5a152c9_d3fc_48af_aec7_8affbd9e2cf3.slice. Jun 25 18:53:19.349987 containerd[1448]: time="2024-06-25T18:53:19.349760635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f4pkc,Uid:a571a20a-5c2c-46fc-9c7b-b23f7474aff2,Namespace:kube-system,Attempt:0,} returns sandbox id \"b14b343a3f36122c53db9546fca922df91ed2e142189be5f73f4f1b74a61c54b\"" Jun 25 18:53:19.354541 containerd[1448]: time="2024-06-25T18:53:19.354410779Z" level=info msg="CreateContainer within sandbox \"b14b343a3f36122c53db9546fca922df91ed2e142189be5f73f4f1b74a61c54b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 25 18:53:19.384889 containerd[1448]: time="2024-06-25T18:53:19.384839549Z" level=info msg="CreateContainer within sandbox \"b14b343a3f36122c53db9546fca922df91ed2e142189be5f73f4f1b74a61c54b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b2697d80167d2e088fc5e0839ba3343bd06882badade14518ed1380cf68dcf6a\"" Jun 25 18:53:19.385569 containerd[1448]: time="2024-06-25T18:53:19.385537550Z" level=info msg="StartContainer for \"b2697d80167d2e088fc5e0839ba3343bd06882badade14518ed1380cf68dcf6a\"" Jun 25 18:53:19.422197 systemd[1]: Started cri-containerd-b2697d80167d2e088fc5e0839ba3343bd06882badade14518ed1380cf68dcf6a.scope - libcontainer container b2697d80167d2e088fc5e0839ba3343bd06882badade14518ed1380cf68dcf6a. 
Jun 25 18:53:19.461844 containerd[1448]: time="2024-06-25T18:53:19.461714002Z" level=info msg="StartContainer for \"b2697d80167d2e088fc5e0839ba3343bd06882badade14518ed1380cf68dcf6a\" returns successfully" Jun 25 18:53:20.018611 kubelet[2620]: E0625 18:53:20.018496 2620 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jun 25 18:53:20.018848 kubelet[2620]: E0625 18:53:20.018681 2620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/225bfc45-5212-4b33-86be-ccbb0aca6df4-cilium-config-path podName:225bfc45-5212-4b33-86be-ccbb0aca6df4 nodeName:}" failed. No retries permitted until 2024-06-25 18:53:20.518627692 +0000 UTC m=+15.987036854 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/225bfc45-5212-4b33-86be-ccbb0aca6df4-cilium-config-path") pod "cilium-97lh9" (UID: "225bfc45-5212-4b33-86be-ccbb0aca6df4") : failed to sync configmap cache: timed out waiting for the condition Jun 25 18:53:20.031985 kubelet[2620]: E0625 18:53:20.028393 2620 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Jun 25 18:53:20.031985 kubelet[2620]: E0625 18:53:20.028550 2620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/225bfc45-5212-4b33-86be-ccbb0aca6df4-clustermesh-secrets podName:225bfc45-5212-4b33-86be-ccbb0aca6df4 nodeName:}" failed. No retries permitted until 2024-06-25 18:53:20.5285102 +0000 UTC m=+15.996919352 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/225bfc45-5212-4b33-86be-ccbb0aca6df4-clustermesh-secrets") pod "cilium-97lh9" (UID: "225bfc45-5212-4b33-86be-ccbb0aca6df4") : failed to sync secret cache: timed out waiting for the condition Jun 25 18:53:20.226967 containerd[1448]: time="2024-06-25T18:53:20.226827788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-l4v5g,Uid:c5a152c9-d3fc-48af-aec7-8affbd9e2cf3,Namespace:kube-system,Attempt:0,}" Jun 25 18:53:20.293815 containerd[1448]: time="2024-06-25T18:53:20.293668167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:53:20.294219 containerd[1448]: time="2024-06-25T18:53:20.294044450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:53:20.294742 containerd[1448]: time="2024-06-25T18:53:20.294405293Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:53:20.294896 containerd[1448]: time="2024-06-25T18:53:20.294559108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:53:20.344440 systemd[1]: Started cri-containerd-e33102758f07b540f3b49a56f11b87cd7e406b8a92fa21eb255f5f17dee01243.scope - libcontainer container e33102758f07b540f3b49a56f11b87cd7e406b8a92fa21eb255f5f17dee01243. 
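The MountVolume.SetUp failures are the other half of the reflector warnings earlier: the kubelet cannot populate the cilium-config and clustermesh-secrets volumes until its watch caches sync, so it backs off (500ms here) and retries, and the agent pod stays Pending in the meantime. These retries surface as FailedMount events, for example, assuming kubectl access:

    # transient FailedMount events for the cilium agent pod
    kubectl -n kube-system get events --field-selector involvedObject.name=cilium-97lh9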
Jun 25 18:53:20.388188 containerd[1448]: time="2024-06-25T18:53:20.388154638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-l4v5g,Uid:c5a152c9-d3fc-48af-aec7-8affbd9e2cf3,Namespace:kube-system,Attempt:0,} returns sandbox id \"e33102758f07b540f3b49a56f11b87cd7e406b8a92fa21eb255f5f17dee01243\"" Jun 25 18:53:20.391212 containerd[1448]: time="2024-06-25T18:53:20.391184858Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jun 25 18:53:20.710956 containerd[1448]: time="2024-06-25T18:53:20.710009555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-97lh9,Uid:225bfc45-5212-4b33-86be-ccbb0aca6df4,Namespace:kube-system,Attempt:0,}" Jun 25 18:53:20.759370 containerd[1448]: time="2024-06-25T18:53:20.758339738Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:53:20.759370 containerd[1448]: time="2024-06-25T18:53:20.758468024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:53:20.759370 containerd[1448]: time="2024-06-25T18:53:20.758549119Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:53:20.759370 containerd[1448]: time="2024-06-25T18:53:20.758592302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:53:20.797401 systemd[1]: Started cri-containerd-174ab60af89791398bb4b98117719ed38a2243d968f652784480b8948895c508.scope - libcontainer container 174ab60af89791398bb4b98117719ed38a2243d968f652784480b8948895c508. Jun 25 18:53:20.839852 containerd[1448]: time="2024-06-25T18:53:20.839655636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-97lh9,Uid:225bfc45-5212-4b33-86be-ccbb0aca6df4,Namespace:kube-system,Attempt:0,} returns sandbox id \"174ab60af89791398bb4b98117719ed38a2243d968f652784480b8948895c508\"" Jun 25 18:53:22.260172 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount406699038.mount: Deactivated successfully. 
Jun 25 18:53:24.648296 containerd[1448]: time="2024-06-25T18:53:24.648234169Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:53:24.649653 containerd[1448]: time="2024-06-25T18:53:24.649607523Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907217" Jun 25 18:53:24.650847 containerd[1448]: time="2024-06-25T18:53:24.650820579Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:53:24.652284 containerd[1448]: time="2024-06-25T18:53:24.652258146Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.260912089s" Jun 25 18:53:24.652386 containerd[1448]: time="2024-06-25T18:53:24.652352327Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jun 25 18:53:24.657302 containerd[1448]: time="2024-06-25T18:53:24.657277491Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jun 25 18:53:24.679591 containerd[1448]: time="2024-06-25T18:53:24.679203515Z" level=info msg="CreateContainer within sandbox \"e33102758f07b540f3b49a56f11b87cd7e406b8a92fa21eb255f5f17dee01243\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jun 25 18:53:24.715143 containerd[1448]: time="2024-06-25T18:53:24.715109577Z" level=info msg="CreateContainer within sandbox \"e33102758f07b540f3b49a56f11b87cd7e406b8a92fa21eb255f5f17dee01243\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"95d34e52854cadfabd240d5e8fe1567780ab72701b65be463c4ac988635350d2\"" Jun 25 18:53:24.718989 containerd[1448]: time="2024-06-25T18:53:24.718777680Z" level=info msg="StartContainer for \"95d34e52854cadfabd240d5e8fe1567780ab72701b65be463c4ac988635350d2\"" Jun 25 18:53:24.748577 systemd[1]: run-containerd-runc-k8s.io-95d34e52854cadfabd240d5e8fe1567780ab72701b65be463c4ac988635350d2-runc.CMzXXt.mount: Deactivated successfully. Jun 25 18:53:24.760097 systemd[1]: Started cri-containerd-95d34e52854cadfabd240d5e8fe1567780ab72701b65be463c4ac988635350d2.scope - libcontainer container 95d34e52854cadfabd240d5e8fe1567780ab72701b65be463c4ac988635350d2. 
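The operator image is pulled by digest (the @sha256:... reference), which is why containerd records an empty repo tag and only a repo digest. How the image is stored on the node can be checked with, assuming crictl access:

    # images in containerd's k8s.io namespace, with their digests
    crictl images --digests | grep operator-generic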
Jun 25 18:53:24.794356 containerd[1448]: time="2024-06-25T18:53:24.794304427Z" level=info msg="StartContainer for \"95d34e52854cadfabd240d5e8fe1567780ab72701b65be463c4ac988635350d2\" returns successfully" Jun 25 18:53:24.904624 kubelet[2620]: I0625 18:53:24.903532 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-f4pkc" podStartSLOduration=6.903507389 podStartE2EDuration="6.903507389s" podCreationTimestamp="2024-06-25 18:53:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:53:19.859766699 +0000 UTC m=+15.328175801" watchObservedRunningTime="2024-06-25 18:53:24.903507389 +0000 UTC m=+20.371916491" Jun 25 18:53:24.905878 kubelet[2620]: I0625 18:53:24.905463 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-l4v5g" podStartSLOduration=1.639025509 podStartE2EDuration="5.905443541s" podCreationTimestamp="2024-06-25 18:53:19 +0000 UTC" firstStartedPulling="2024-06-25 18:53:20.389458031 +0000 UTC m=+15.857867143" lastFinishedPulling="2024-06-25 18:53:24.655876063 +0000 UTC m=+20.124285175" observedRunningTime="2024-06-25 18:53:24.894580691 +0000 UTC m=+20.362989803" watchObservedRunningTime="2024-06-25 18:53:24.905443541 +0000 UTC m=+20.373852663" Jun 25 18:53:29.901949 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount952321197.mount: Deactivated successfully. Jun 25 18:53:33.001834 containerd[1448]: time="2024-06-25T18:53:33.001718640Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:53:33.003400 containerd[1448]: time="2024-06-25T18:53:33.003291530Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735303" Jun 25 18:53:33.006205 containerd[1448]: time="2024-06-25T18:53:33.006110283Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:53:33.014339 containerd[1448]: time="2024-06-25T18:53:33.013247428Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.355810632s" Jun 25 18:53:33.014339 containerd[1448]: time="2024-06-25T18:53:33.013326700Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jun 25 18:53:33.021417 containerd[1448]: time="2024-06-25T18:53:33.021324673Z" level=info msg="CreateContainer within sandbox \"174ab60af89791398bb4b98117719ed38a2243d968f652784480b8948895c508\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 25 18:53:33.211239 containerd[1448]: time="2024-06-25T18:53:33.211125559Z" level=info msg="CreateContainer within sandbox \"174ab60af89791398bb4b98117719ed38a2243d968f652784480b8948895c508\" for 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"55c9c79d18fa0f34d96d70405d3819a087a7570280eea3ca8f00bf8f5eb58262\"" Jun 25 18:53:33.212829 containerd[1448]: time="2024-06-25T18:53:33.212243026Z" level=info msg="StartContainer for \"55c9c79d18fa0f34d96d70405d3819a087a7570280eea3ca8f00bf8f5eb58262\"" Jun 25 18:53:33.455147 systemd[1]: Started cri-containerd-55c9c79d18fa0f34d96d70405d3819a087a7570280eea3ca8f00bf8f5eb58262.scope - libcontainer container 55c9c79d18fa0f34d96d70405d3819a087a7570280eea3ca8f00bf8f5eb58262. Jun 25 18:53:33.511240 systemd[1]: cri-containerd-55c9c79d18fa0f34d96d70405d3819a087a7570280eea3ca8f00bf8f5eb58262.scope: Deactivated successfully. Jun 25 18:53:33.550595 containerd[1448]: time="2024-06-25T18:53:33.550492741Z" level=info msg="StartContainer for \"55c9c79d18fa0f34d96d70405d3819a087a7570280eea3ca8f00bf8f5eb58262\" returns successfully" Jun 25 18:53:34.156880 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55c9c79d18fa0f34d96d70405d3819a087a7570280eea3ca8f00bf8f5eb58262-rootfs.mount: Deactivated successfully. Jun 25 18:53:34.302325 containerd[1448]: time="2024-06-25T18:53:34.254556971Z" level=info msg="shim disconnected" id=55c9c79d18fa0f34d96d70405d3819a087a7570280eea3ca8f00bf8f5eb58262 namespace=k8s.io Jun 25 18:53:34.302325 containerd[1448]: time="2024-06-25T18:53:34.302317545Z" level=warning msg="cleaning up after shim disconnected" id=55c9c79d18fa0f34d96d70405d3819a087a7570280eea3ca8f00bf8f5eb58262 namespace=k8s.io Jun 25 18:53:34.307751 containerd[1448]: time="2024-06-25T18:53:34.302349857Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:53:34.946558 containerd[1448]: time="2024-06-25T18:53:34.946314497Z" level=info msg="CreateContainer within sandbox \"174ab60af89791398bb4b98117719ed38a2243d968f652784480b8948895c508\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 25 18:53:34.985023 containerd[1448]: time="2024-06-25T18:53:34.984532897Z" level=info msg="CreateContainer within sandbox \"174ab60af89791398bb4b98117719ed38a2243d968f652784480b8948895c508\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"df601b5c4eb30ac63bfee5398fcae2642b91050041bf5b601d4a274540ef3e8a\"" Jun 25 18:53:34.989001 containerd[1448]: time="2024-06-25T18:53:34.988596251Z" level=info msg="StartContainer for \"df601b5c4eb30ac63bfee5398fcae2642b91050041bf5b601d4a274540ef3e8a\"" Jun 25 18:53:35.046129 systemd[1]: Started cri-containerd-df601b5c4eb30ac63bfee5398fcae2642b91050041bf5b601d4a274540ef3e8a.scope - libcontainer container df601b5c4eb30ac63bfee5398fcae2642b91050041bf5b601d4a274540ef3e8a. Jun 25 18:53:35.073175 containerd[1448]: time="2024-06-25T18:53:35.073117403Z" level=info msg="StartContainer for \"df601b5c4eb30ac63bfee5398fcae2642b91050041bf5b601d4a274540ef3e8a\" returns successfully" Jun 25 18:53:35.084694 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 18:53:35.085459 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:53:35.085523 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:53:35.090205 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:53:35.090449 systemd[1]: cri-containerd-df601b5c4eb30ac63bfee5398fcae2642b91050041bf5b601d4a274540ef3e8a.scope: Deactivated successfully. 
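Each container runs under a transient cri-containerd-<id>.scope unit because the runc shim is using the systemd cgroup driver on this node; when a short-lived init container such as mount-cgroup exits, its scope is deactivated and the "shim disconnected" messages that follow are routine cleanup rather than failures. The scopes are visible in the node's cgroup tree, for example:

    # transient container scopes under the kubepods slices
    systemd-cgls --no-pager | grep cri-containerd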
Jun 25 18:53:35.147511 containerd[1448]: time="2024-06-25T18:53:35.147455927Z" level=info msg="shim disconnected" id=df601b5c4eb30ac63bfee5398fcae2642b91050041bf5b601d4a274540ef3e8a namespace=k8s.io Jun 25 18:53:35.147511 containerd[1448]: time="2024-06-25T18:53:35.147503478Z" level=warning msg="cleaning up after shim disconnected" id=df601b5c4eb30ac63bfee5398fcae2642b91050041bf5b601d4a274540ef3e8a namespace=k8s.io Jun 25 18:53:35.147511 containerd[1448]: time="2024-06-25T18:53:35.147514509Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:53:35.148364 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:53:35.154682 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df601b5c4eb30ac63bfee5398fcae2642b91050041bf5b601d4a274540ef3e8a-rootfs.mount: Deactivated successfully. Jun 25 18:53:35.955232 containerd[1448]: time="2024-06-25T18:53:35.954590682Z" level=info msg="CreateContainer within sandbox \"174ab60af89791398bb4b98117719ed38a2243d968f652784480b8948895c508\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 25 18:53:36.081926 containerd[1448]: time="2024-06-25T18:53:36.081689763Z" level=info msg="CreateContainer within sandbox \"174ab60af89791398bb4b98117719ed38a2243d968f652784480b8948895c508\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a25fd72813bd80f9e894a4a05b4084e3201ef3ac057b04255cf33463ee86ccf7\"" Jun 25 18:53:36.083561 containerd[1448]: time="2024-06-25T18:53:36.082509451Z" level=info msg="StartContainer for \"a25fd72813bd80f9e894a4a05b4084e3201ef3ac057b04255cf33463ee86ccf7\"" Jun 25 18:53:36.135183 systemd[1]: Started cri-containerd-a25fd72813bd80f9e894a4a05b4084e3201ef3ac057b04255cf33463ee86ccf7.scope - libcontainer container a25fd72813bd80f9e894a4a05b4084e3201ef3ac057b04255cf33463ee86ccf7. Jun 25 18:53:36.156794 systemd[1]: run-containerd-runc-k8s.io-a25fd72813bd80f9e894a4a05b4084e3201ef3ac057b04255cf33463ee86ccf7-runc.KLVUWz.mount: Deactivated successfully. Jun 25 18:53:36.172692 systemd[1]: cri-containerd-a25fd72813bd80f9e894a4a05b4084e3201ef3ac057b04255cf33463ee86ccf7.scope: Deactivated successfully. Jun 25 18:53:36.328284 containerd[1448]: time="2024-06-25T18:53:36.328190152Z" level=info msg="StartContainer for \"a25fd72813bd80f9e894a4a05b4084e3201ef3ac057b04255cf33463ee86ccf7\" returns successfully" Jun 25 18:53:36.361070 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a25fd72813bd80f9e894a4a05b4084e3201ef3ac057b04255cf33463ee86ccf7-rootfs.mount: Deactivated successfully. 
Jun 25 18:53:36.385771 containerd[1448]: time="2024-06-25T18:53:36.385661743Z" level=info msg="shim disconnected" id=a25fd72813bd80f9e894a4a05b4084e3201ef3ac057b04255cf33463ee86ccf7 namespace=k8s.io Jun 25 18:53:36.385771 containerd[1448]: time="2024-06-25T18:53:36.385733179Z" level=warning msg="cleaning up after shim disconnected" id=a25fd72813bd80f9e894a4a05b4084e3201ef3ac057b04255cf33463ee86ccf7 namespace=k8s.io Jun 25 18:53:36.385771 containerd[1448]: time="2024-06-25T18:53:36.385743789Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:53:36.966474 containerd[1448]: time="2024-06-25T18:53:36.966379953Z" level=info msg="CreateContainer within sandbox \"174ab60af89791398bb4b98117719ed38a2243d968f652784480b8948895c508\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 25 18:53:36.996084 containerd[1448]: time="2024-06-25T18:53:36.996005536Z" level=info msg="CreateContainer within sandbox \"174ab60af89791398bb4b98117719ed38a2243d968f652784480b8948895c508\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"71112f956061a45a5c6f2ff53ec8f4c56d66446c6ce023fe827d767fea667e8f\"" Jun 25 18:53:36.999265 containerd[1448]: time="2024-06-25T18:53:36.997094334Z" level=info msg="StartContainer for \"71112f956061a45a5c6f2ff53ec8f4c56d66446c6ce023fe827d767fea667e8f\"" Jun 25 18:53:37.050096 systemd[1]: Started cri-containerd-71112f956061a45a5c6f2ff53ec8f4c56d66446c6ce023fe827d767fea667e8f.scope - libcontainer container 71112f956061a45a5c6f2ff53ec8f4c56d66446c6ce023fe827d767fea667e8f. Jun 25 18:53:37.087491 systemd[1]: cri-containerd-71112f956061a45a5c6f2ff53ec8f4c56d66446c6ce023fe827d767fea667e8f.scope: Deactivated successfully. Jun 25 18:53:37.094262 containerd[1448]: time="2024-06-25T18:53:37.094151738Z" level=info msg="StartContainer for \"71112f956061a45a5c6f2ff53ec8f4c56d66446c6ce023fe827d767fea667e8f\" returns successfully" Jun 25 18:53:37.130677 containerd[1448]: time="2024-06-25T18:53:37.130553755Z" level=info msg="shim disconnected" id=71112f956061a45a5c6f2ff53ec8f4c56d66446c6ce023fe827d767fea667e8f namespace=k8s.io Jun 25 18:53:37.130677 containerd[1448]: time="2024-06-25T18:53:37.130606163Z" level=warning msg="cleaning up after shim disconnected" id=71112f956061a45a5c6f2ff53ec8f4c56d66446c6ce023fe827d767fea667e8f namespace=k8s.io Jun 25 18:53:37.130677 containerd[1448]: time="2024-06-25T18:53:37.130619037Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:53:37.155792 systemd[1]: run-containerd-runc-k8s.io-71112f956061a45a5c6f2ff53ec8f4c56d66446c6ce023fe827d767fea667e8f-runc.N6dmsk.mount: Deactivated successfully. Jun 25 18:53:37.155903 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-71112f956061a45a5c6f2ff53ec8f4c56d66446c6ce023fe827d767fea667e8f-rootfs.mount: Deactivated successfully. 
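By this point four of Cilium's init containers have run in sequence: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state, each exiting quickly before the long-lived cilium-agent container starts below. Their recorded exit codes can be read back from the pod status, assuming kubectl access:

    # init container names and exit codes for the cilium agent pod
    kubectl -n kube-system get pod cilium-97lh9 \
        -o jsonpath='{range .status.initContainerStatuses[*]}{.name}{"\t"}{.state.terminated.exitCode}{"\n"}{end}'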
Jun 25 18:53:37.970316 containerd[1448]: time="2024-06-25T18:53:37.970131768Z" level=info msg="CreateContainer within sandbox \"174ab60af89791398bb4b98117719ed38a2243d968f652784480b8948895c508\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 25 18:53:38.096139 containerd[1448]: time="2024-06-25T18:53:38.096061126Z" level=info msg="CreateContainer within sandbox \"174ab60af89791398bb4b98117719ed38a2243d968f652784480b8948895c508\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"345af657b4d0db9b3b7aaf01855e575984fe4fa77d64a5b6bcb873c27d2a1d6d\"" Jun 25 18:53:38.098847 containerd[1448]: time="2024-06-25T18:53:38.096717933Z" level=info msg="StartContainer for \"345af657b4d0db9b3b7aaf01855e575984fe4fa77d64a5b6bcb873c27d2a1d6d\"" Jun 25 18:53:38.153117 systemd[1]: Started cri-containerd-345af657b4d0db9b3b7aaf01855e575984fe4fa77d64a5b6bcb873c27d2a1d6d.scope - libcontainer container 345af657b4d0db9b3b7aaf01855e575984fe4fa77d64a5b6bcb873c27d2a1d6d. Jun 25 18:53:38.207051 containerd[1448]: time="2024-06-25T18:53:38.207000089Z" level=info msg="StartContainer for \"345af657b4d0db9b3b7aaf01855e575984fe4fa77d64a5b6bcb873c27d2a1d6d\" returns successfully" Jun 25 18:53:38.436555 kubelet[2620]: I0625 18:53:38.436517 2620 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jun 25 18:53:38.465891 kubelet[2620]: I0625 18:53:38.465844 2620 topology_manager.go:215] "Topology Admit Handler" podUID="839d114a-6b07-48ef-b940-12e31dcdea93" podNamespace="kube-system" podName="coredns-7db6d8ff4d-wbn2x" Jun 25 18:53:38.477984 systemd[1]: Created slice kubepods-burstable-pod839d114a_6b07_48ef_b940_12e31dcdea93.slice - libcontainer container kubepods-burstable-pod839d114a_6b07_48ef_b940_12e31dcdea93.slice. Jun 25 18:53:38.480696 kubelet[2620]: I0625 18:53:38.480646 2620 topology_manager.go:215] "Topology Admit Handler" podUID="04817f48-4d62-4848-92d5-78c2e8f177d8" podNamespace="kube-system" podName="coredns-7db6d8ff4d-8swvs" Jun 25 18:53:38.488863 systemd[1]: Created slice kubepods-burstable-pod04817f48_4d62_4848_92d5_78c2e8f177d8.slice - libcontainer container kubepods-burstable-pod04817f48_4d62_4848_92d5_78c2e8f177d8.slice. 
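With the cilium-agent container running, the agent drops its CNI config into /etc/cni/net.d (the file whose later removal appears near the end of this log), the kubelet reports the node Ready, and the two pending CoreDNS pods are admitted. The readiness flip can be confirmed with:

    # Ready condition for this node (name from the log)
    kubectl get node ci-4012-0-0-8-5dd8cf1e6e.novalocal \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'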
Jun 25 18:53:38.563764 kubelet[2620]: I0625 18:53:38.563723 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/839d114a-6b07-48ef-b940-12e31dcdea93-config-volume\") pod \"coredns-7db6d8ff4d-wbn2x\" (UID: \"839d114a-6b07-48ef-b940-12e31dcdea93\") " pod="kube-system/coredns-7db6d8ff4d-wbn2x" Jun 25 18:53:38.563764 kubelet[2620]: I0625 18:53:38.563770 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5zt9\" (UniqueName: \"kubernetes.io/projected/04817f48-4d62-4848-92d5-78c2e8f177d8-kube-api-access-v5zt9\") pod \"coredns-7db6d8ff4d-8swvs\" (UID: \"04817f48-4d62-4848-92d5-78c2e8f177d8\") " pod="kube-system/coredns-7db6d8ff4d-8swvs" Jun 25 18:53:38.564019 kubelet[2620]: I0625 18:53:38.563808 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj4ns\" (UniqueName: \"kubernetes.io/projected/839d114a-6b07-48ef-b940-12e31dcdea93-kube-api-access-mj4ns\") pod \"coredns-7db6d8ff4d-wbn2x\" (UID: \"839d114a-6b07-48ef-b940-12e31dcdea93\") " pod="kube-system/coredns-7db6d8ff4d-wbn2x" Jun 25 18:53:38.564019 kubelet[2620]: I0625 18:53:38.563830 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04817f48-4d62-4848-92d5-78c2e8f177d8-config-volume\") pod \"coredns-7db6d8ff4d-8swvs\" (UID: \"04817f48-4d62-4848-92d5-78c2e8f177d8\") " pod="kube-system/coredns-7db6d8ff4d-8swvs" Jun 25 18:53:38.798300 containerd[1448]: time="2024-06-25T18:53:38.797988651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8swvs,Uid:04817f48-4d62-4848-92d5-78c2e8f177d8,Namespace:kube-system,Attempt:0,}" Jun 25 18:53:38.798737 containerd[1448]: time="2024-06-25T18:53:38.798677098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wbn2x,Uid:839d114a-6b07-48ef-b940-12e31dcdea93,Namespace:kube-system,Attempt:0,}" Jun 25 18:53:40.439777 systemd-networkd[1368]: cilium_host: Link UP Jun 25 18:53:40.441859 systemd-networkd[1368]: cilium_net: Link UP Jun 25 18:53:40.443200 systemd-networkd[1368]: cilium_net: Gained carrier Jun 25 18:53:40.444226 systemd-networkd[1368]: cilium_host: Gained carrier Jun 25 18:53:40.606256 systemd-networkd[1368]: cilium_vxlan: Link UP Jun 25 18:53:40.606271 systemd-networkd[1368]: cilium_vxlan: Gained carrier Jun 25 18:53:41.116316 systemd-networkd[1368]: cilium_net: Gained IPv6LL Jun 25 18:53:41.222049 kernel: NET: Registered PF_ALG protocol family Jun 25 18:53:41.243213 systemd-networkd[1368]: cilium_host: Gained IPv6LL Jun 25 18:53:42.052207 systemd-networkd[1368]: lxc_health: Link UP Jun 25 18:53:42.057374 systemd-networkd[1368]: lxc_health: Gained carrier Jun 25 18:53:42.204952 systemd-networkd[1368]: cilium_vxlan: Gained IPv6LL Jun 25 18:53:42.447039 systemd-networkd[1368]: lxc39b44fb5fba1: Link UP Jun 25 18:53:42.453030 kernel: eth0: renamed from tmp8f5c7 Jun 25 18:53:42.473909 systemd-networkd[1368]: lxc39b44fb5fba1: Gained carrier Jun 25 18:53:42.481047 systemd-networkd[1368]: lxc8050dae02da4: Link UP Jun 25 18:53:42.492147 kernel: eth0: renamed from tmp2fd5e Jun 25 18:53:42.510464 systemd-networkd[1368]: lxc8050dae02da4: Gained carrier Jun 25 18:53:42.740881 kubelet[2620]: I0625 18:53:42.740741 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-97lh9" 
podStartSLOduration=12.566706416 podStartE2EDuration="24.740725583s" podCreationTimestamp="2024-06-25 18:53:18 +0000 UTC" firstStartedPulling="2024-06-25 18:53:20.8416059 +0000 UTC m=+16.310015012" lastFinishedPulling="2024-06-25 18:53:33.015625027 +0000 UTC m=+28.484034179" observedRunningTime="2024-06-25 18:53:39.000542449 +0000 UTC m=+34.468951581" watchObservedRunningTime="2024-06-25 18:53:42.740725583 +0000 UTC m=+38.209134685" Jun 25 18:53:43.739264 systemd-networkd[1368]: lxc_health: Gained IPv6LL Jun 25 18:53:43.867156 systemd-networkd[1368]: lxc39b44fb5fba1: Gained IPv6LL Jun 25 18:53:44.187124 systemd-networkd[1368]: lxc8050dae02da4: Gained IPv6LL Jun 25 18:53:47.133866 containerd[1448]: time="2024-06-25T18:53:47.133701292Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:53:47.133866 containerd[1448]: time="2024-06-25T18:53:47.133801381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:53:47.133866 containerd[1448]: time="2024-06-25T18:53:47.133828542Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:53:47.136018 containerd[1448]: time="2024-06-25T18:53:47.133847888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:53:47.179366 systemd[1]: run-containerd-runc-k8s.io-8f5c72f34e95db96651575b63b1476d210bdf52c7de33a2ca4eb581c66e43d9a-runc.OT2k8W.mount: Deactivated successfully. Jun 25 18:53:47.194736 systemd[1]: Started cri-containerd-8f5c72f34e95db96651575b63b1476d210bdf52c7de33a2ca4eb581c66e43d9a.scope - libcontainer container 8f5c72f34e95db96651575b63b1476d210bdf52c7de33a2ca4eb581c66e43d9a. Jun 25 18:53:47.231292 containerd[1448]: time="2024-06-25T18:53:47.231161726Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:53:47.231292 containerd[1448]: time="2024-06-25T18:53:47.231233932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:53:47.231292 containerd[1448]: time="2024-06-25T18:53:47.231263858Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:53:47.231534 containerd[1448]: time="2024-06-25T18:53:47.231277243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:53:47.278038 systemd[1]: Started cri-containerd-2fd5e01f5ee9ce3e3cce0356959d3e23e4e8b2d9339dfdda237fbedee8ed3698.scope - libcontainer container 2fd5e01f5ee9ce3e3cce0356959d3e23e4e8b2d9339dfdda237fbedee8ed3698. 
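The cilium_host/cilium_net pair, the cilium_vxlan overlay device and the lxc* veths above are created by the Cilium datapath, and the kernel's "eth0: renamed from tmpXXXX" lines are the container-side veth ends being moved into the new CoreDNS network namespaces. They can be listed on the node with:

    # links tracked by systemd-networkd, including the cilium_* and lxc* devices
    networkctl list
    # brief iproute2 view of the same interfaces
    ip -br link show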
Jun 25 18:53:47.285107 containerd[1448]: time="2024-06-25T18:53:47.285063123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8swvs,Uid:04817f48-4d62-4848-92d5-78c2e8f177d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f5c72f34e95db96651575b63b1476d210bdf52c7de33a2ca4eb581c66e43d9a\"" Jun 25 18:53:47.291618 containerd[1448]: time="2024-06-25T18:53:47.291569790Z" level=info msg="CreateContainer within sandbox \"8f5c72f34e95db96651575b63b1476d210bdf52c7de33a2ca4eb581c66e43d9a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 18:53:47.319130 containerd[1448]: time="2024-06-25T18:53:47.318968926Z" level=info msg="CreateContainer within sandbox \"8f5c72f34e95db96651575b63b1476d210bdf52c7de33a2ca4eb581c66e43d9a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"55696509fe577e732340d4904ab6dfa076acfebdaded6822f73eba8fe3711fc6\"" Jun 25 18:53:47.320475 containerd[1448]: time="2024-06-25T18:53:47.320443905Z" level=info msg="StartContainer for \"55696509fe577e732340d4904ab6dfa076acfebdaded6822f73eba8fe3711fc6\"" Jun 25 18:53:47.351442 containerd[1448]: time="2024-06-25T18:53:47.351403886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wbn2x,Uid:839d114a-6b07-48ef-b940-12e31dcdea93,Namespace:kube-system,Attempt:0,} returns sandbox id \"2fd5e01f5ee9ce3e3cce0356959d3e23e4e8b2d9339dfdda237fbedee8ed3698\"" Jun 25 18:53:47.361959 containerd[1448]: time="2024-06-25T18:53:47.361732329Z" level=info msg="CreateContainer within sandbox \"2fd5e01f5ee9ce3e3cce0356959d3e23e4e8b2d9339dfdda237fbedee8ed3698\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 18:53:47.371421 systemd[1]: Started cri-containerd-55696509fe577e732340d4904ab6dfa076acfebdaded6822f73eba8fe3711fc6.scope - libcontainer container 55696509fe577e732340d4904ab6dfa076acfebdaded6822f73eba8fe3711fc6. Jun 25 18:53:47.383658 containerd[1448]: time="2024-06-25T18:53:47.383143691Z" level=info msg="CreateContainer within sandbox \"2fd5e01f5ee9ce3e3cce0356959d3e23e4e8b2d9339dfdda237fbedee8ed3698\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e0b74e045b78021fe8784defccffb3ca4b3494380ef9aedc18be8631638c25b1\"" Jun 25 18:53:47.386554 containerd[1448]: time="2024-06-25T18:53:47.384017051Z" level=info msg="StartContainer for \"e0b74e045b78021fe8784defccffb3ca4b3494380ef9aedc18be8631638c25b1\"" Jun 25 18:53:47.424118 systemd[1]: Started cri-containerd-e0b74e045b78021fe8784defccffb3ca4b3494380ef9aedc18be8631638c25b1.scope - libcontainer container e0b74e045b78021fe8784defccffb3ca4b3494380ef9aedc18be8631638c25b1. 
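Both CoreDNS replicas now have sandboxes and containers on this node; once they report Running, cluster DNS is being served from here. A quick check, assuming kubectl access (CoreDNS carries the usual k8s-app=kube-dns label):

    kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide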
Jun 25 18:53:47.432782 containerd[1448]: time="2024-06-25T18:53:47.432400818Z" level=info msg="StartContainer for \"55696509fe577e732340d4904ab6dfa076acfebdaded6822f73eba8fe3711fc6\" returns successfully" Jun 25 18:53:47.466341 containerd[1448]: time="2024-06-25T18:53:47.466294206Z" level=info msg="StartContainer for \"e0b74e045b78021fe8784defccffb3ca4b3494380ef9aedc18be8631638c25b1\" returns successfully" Jun 25 18:53:48.070978 kubelet[2620]: I0625 18:53:48.069545 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-wbn2x" podStartSLOduration=29.069515606 podStartE2EDuration="29.069515606s" podCreationTimestamp="2024-06-25 18:53:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:53:48.032297092 +0000 UTC m=+43.500706245" watchObservedRunningTime="2024-06-25 18:53:48.069515606 +0000 UTC m=+43.537924758" Jun 25 18:53:48.109579 kubelet[2620]: I0625 18:53:48.108870 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-8swvs" podStartSLOduration=29.10874755 podStartE2EDuration="29.10874755s" podCreationTimestamp="2024-06-25 18:53:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:53:48.107369121 +0000 UTC m=+43.575778223" watchObservedRunningTime="2024-06-25 18:53:48.10874755 +0000 UTC m=+43.577156652" Jun 25 18:54:15.015791 systemd[1]: Started sshd@9-172.24.4.127:22-172.24.4.1:34850.service - OpenSSH per-connection server daemon (172.24.4.1:34850). Jun 25 18:54:16.435598 sshd[3981]: Accepted publickey for core from 172.24.4.1 port 34850 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:54:16.439096 sshd[3981]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:54:16.448236 systemd-logind[1430]: New session 12 of user core. Jun 25 18:54:16.454118 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 25 18:54:18.171511 sshd[3981]: pam_unix(sshd:session): session closed for user core Jun 25 18:54:18.185716 systemd[1]: sshd@9-172.24.4.127:22-172.24.4.1:34850.service: Deactivated successfully. Jun 25 18:54:18.186651 systemd-logind[1430]: Session 12 logged out. Waiting for processes to exit. Jun 25 18:54:18.193185 systemd[1]: session-12.scope: Deactivated successfully. Jun 25 18:54:18.200327 systemd-logind[1430]: Removed session 12. Jun 25 18:54:23.194734 systemd[1]: Started sshd@10-172.24.4.127:22-172.24.4.1:34864.service - OpenSSH per-connection server daemon (172.24.4.1:34864). Jun 25 18:54:24.820181 sshd[3997]: Accepted publickey for core from 172.24.4.1 port 34864 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:54:24.823686 sshd[3997]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:54:24.836365 systemd-logind[1430]: New session 13 of user core. Jun 25 18:54:24.845249 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 25 18:54:25.710402 sshd[3997]: pam_unix(sshd:session): session closed for user core Jun 25 18:54:25.714009 systemd[1]: sshd@10-172.24.4.127:22-172.24.4.1:34864.service: Deactivated successfully. Jun 25 18:54:25.716794 systemd[1]: session-13.scope: Deactivated successfully. Jun 25 18:54:25.717912 systemd-logind[1430]: Session 13 logged out. Waiting for processes to exit. 
Jun 25 18:54:25.719661 systemd-logind[1430]: Removed session 13. Jun 25 18:54:30.737626 systemd[1]: Started sshd@11-172.24.4.127:22-172.24.4.1:33672.service - OpenSSH per-connection server daemon (172.24.4.1:33672). Jun 25 18:54:32.134766 sshd[4010]: Accepted publickey for core from 172.24.4.1 port 33672 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:54:32.142176 sshd[4010]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:54:32.149180 systemd-logind[1430]: New session 14 of user core. Jun 25 18:54:32.152148 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 25 18:54:33.043618 sshd[4010]: pam_unix(sshd:session): session closed for user core Jun 25 18:54:33.064089 systemd[1]: Started sshd@12-172.24.4.127:22-172.24.4.1:33688.service - OpenSSH per-connection server daemon (172.24.4.1:33688). Jun 25 18:54:33.064725 systemd[1]: sshd@11-172.24.4.127:22-172.24.4.1:33672.service: Deactivated successfully. Jun 25 18:54:33.067633 systemd[1]: session-14.scope: Deactivated successfully. Jun 25 18:54:33.078707 systemd-logind[1430]: Session 14 logged out. Waiting for processes to exit. Jun 25 18:54:33.081331 systemd-logind[1430]: Removed session 14. Jun 25 18:54:34.947336 sshd[4023]: Accepted publickey for core from 172.24.4.1 port 33688 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:54:34.950092 sshd[4023]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:54:34.960744 systemd-logind[1430]: New session 15 of user core. Jun 25 18:54:34.967271 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 25 18:54:35.801977 sshd[4023]: pam_unix(sshd:session): session closed for user core Jun 25 18:54:35.815828 systemd[1]: sshd@12-172.24.4.127:22-172.24.4.1:33688.service: Deactivated successfully. Jun 25 18:54:35.819742 systemd[1]: session-15.scope: Deactivated successfully. Jun 25 18:54:35.823134 systemd-logind[1430]: Session 15 logged out. Waiting for processes to exit. Jun 25 18:54:35.838716 systemd[1]: Started sshd@13-172.24.4.127:22-172.24.4.1:47362.service - OpenSSH per-connection server daemon (172.24.4.1:47362). Jun 25 18:54:35.842678 systemd-logind[1430]: Removed session 15. Jun 25 18:54:37.492764 sshd[4036]: Accepted publickey for core from 172.24.4.1 port 47362 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:54:37.495811 sshd[4036]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:54:37.504238 systemd-logind[1430]: New session 16 of user core. Jun 25 18:54:37.510368 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 25 18:54:38.426748 sshd[4036]: pam_unix(sshd:session): session closed for user core Jun 25 18:54:38.433467 systemd[1]: sshd@13-172.24.4.127:22-172.24.4.1:47362.service: Deactivated successfully. Jun 25 18:54:38.437564 systemd[1]: session-16.scope: Deactivated successfully. Jun 25 18:54:38.442208 systemd-logind[1430]: Session 16 logged out. Waiting for processes to exit. Jun 25 18:54:38.445213 systemd-logind[1430]: Removed session 16. Jun 25 18:54:43.450764 systemd[1]: Started sshd@14-172.24.4.127:22-172.24.4.1:47368.service - OpenSSH per-connection server daemon (172.24.4.1:47368). 
Jun 25 18:54:44.621502 sshd[4050]: Accepted publickey for core from 172.24.4.1 port 47368 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:54:44.625151 sshd[4050]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:54:44.639150 systemd-logind[1430]: New session 17 of user core. Jun 25 18:54:44.645404 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 25 18:54:45.251392 sshd[4050]: pam_unix(sshd:session): session closed for user core Jun 25 18:54:45.259385 systemd[1]: sshd@14-172.24.4.127:22-172.24.4.1:47368.service: Deactivated successfully. Jun 25 18:54:45.264417 systemd[1]: session-17.scope: Deactivated successfully. Jun 25 18:54:45.267811 systemd-logind[1430]: Session 17 logged out. Waiting for processes to exit. Jun 25 18:54:45.271091 systemd-logind[1430]: Removed session 17. Jun 25 18:54:50.270534 systemd[1]: Started sshd@15-172.24.4.127:22-172.24.4.1:58542.service - OpenSSH per-connection server daemon (172.24.4.1:58542). Jun 25 18:54:51.820729 sshd[4065]: Accepted publickey for core from 172.24.4.1 port 58542 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:54:51.823646 sshd[4065]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:54:51.834759 systemd-logind[1430]: New session 18 of user core. Jun 25 18:54:51.843242 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 25 18:54:52.553772 sshd[4065]: pam_unix(sshd:session): session closed for user core Jun 25 18:54:52.569277 systemd[1]: sshd@15-172.24.4.127:22-172.24.4.1:58542.service: Deactivated successfully. Jun 25 18:54:52.575460 systemd[1]: session-18.scope: Deactivated successfully. Jun 25 18:54:52.581924 systemd-logind[1430]: Session 18 logged out. Waiting for processes to exit. Jun 25 18:54:52.589809 systemd[1]: Started sshd@16-172.24.4.127:22-172.24.4.1:58550.service - OpenSSH per-connection server daemon (172.24.4.1:58550). Jun 25 18:54:52.595542 systemd-logind[1430]: Removed session 18. Jun 25 18:54:54.111518 sshd[4077]: Accepted publickey for core from 172.24.4.1 port 58550 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:54:54.114179 sshd[4077]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:54:54.124422 systemd-logind[1430]: New session 19 of user core. Jun 25 18:54:54.133302 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 25 18:54:55.585458 systemd[1]: Started sshd@17-172.24.4.127:22-172.24.4.1:53866.service - OpenSSH per-connection server daemon (172.24.4.1:53866). Jun 25 18:54:55.591157 sshd[4077]: pam_unix(sshd:session): session closed for user core Jun 25 18:54:55.621371 systemd[1]: sshd@16-172.24.4.127:22-172.24.4.1:58550.service: Deactivated successfully. Jun 25 18:54:55.625683 systemd[1]: session-19.scope: Deactivated successfully. Jun 25 18:54:55.631471 systemd-logind[1430]: Session 19 logged out. Waiting for processes to exit. Jun 25 18:54:55.636769 systemd-logind[1430]: Removed session 19. Jun 25 18:54:57.023982 sshd[4086]: Accepted publickey for core from 172.24.4.1 port 53866 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:54:57.027102 sshd[4086]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:54:57.037779 systemd-logind[1430]: New session 20 of user core. Jun 25 18:54:57.043264 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jun 25 18:54:59.730453 sshd[4086]: pam_unix(sshd:session): session closed for user core Jun 25 18:54:59.743483 systemd[1]: sshd@17-172.24.4.127:22-172.24.4.1:53866.service: Deactivated successfully. Jun 25 18:54:59.747714 systemd[1]: session-20.scope: Deactivated successfully. Jun 25 18:54:59.752496 systemd-logind[1430]: Session 20 logged out. Waiting for processes to exit. Jun 25 18:54:59.761566 systemd[1]: Started sshd@18-172.24.4.127:22-172.24.4.1:53870.service - OpenSSH per-connection server daemon (172.24.4.1:53870). Jun 25 18:54:59.765834 systemd-logind[1430]: Removed session 20. Jun 25 18:55:01.257904 sshd[4107]: Accepted publickey for core from 172.24.4.1 port 53870 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:55:01.262022 sshd[4107]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:55:01.283077 systemd-logind[1430]: New session 21 of user core. Jun 25 18:55:01.293600 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 25 18:55:02.515590 sshd[4107]: pam_unix(sshd:session): session closed for user core Jun 25 18:55:02.529351 systemd[1]: sshd@18-172.24.4.127:22-172.24.4.1:53870.service: Deactivated successfully. Jun 25 18:55:02.535338 systemd[1]: session-21.scope: Deactivated successfully. Jun 25 18:55:02.539265 systemd-logind[1430]: Session 21 logged out. Waiting for processes to exit. Jun 25 18:55:02.545492 systemd[1]: Started sshd@19-172.24.4.127:22-172.24.4.1:53878.service - OpenSSH per-connection server daemon (172.24.4.1:53878). Jun 25 18:55:02.548451 systemd-logind[1430]: Removed session 21. Jun 25 18:55:04.175492 sshd[4118]: Accepted publickey for core from 172.24.4.1 port 53878 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:55:04.186420 sshd[4118]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:55:04.198149 systemd-logind[1430]: New session 22 of user core. Jun 25 18:55:04.204329 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 25 18:55:04.920342 sshd[4118]: pam_unix(sshd:session): session closed for user core Jun 25 18:55:04.925418 systemd[1]: sshd@19-172.24.4.127:22-172.24.4.1:53878.service: Deactivated successfully. Jun 25 18:55:04.927511 systemd[1]: session-22.scope: Deactivated successfully. Jun 25 18:55:04.929108 systemd-logind[1430]: Session 22 logged out. Waiting for processes to exit. Jun 25 18:55:04.930742 systemd-logind[1430]: Removed session 22. Jun 25 18:55:09.941306 systemd[1]: Started sshd@20-172.24.4.127:22-172.24.4.1:54894.service - OpenSSH per-connection server daemon (172.24.4.1:54894). Jun 25 18:55:11.303018 sshd[4137]: Accepted publickey for core from 172.24.4.1 port 54894 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:55:11.306334 sshd[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:55:11.321103 systemd-logind[1430]: New session 23 of user core. Jun 25 18:55:11.328210 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 25 18:55:12.226466 sshd[4137]: pam_unix(sshd:session): session closed for user core Jun 25 18:55:12.233356 systemd-logind[1430]: Session 23 logged out. Waiting for processes to exit. Jun 25 18:55:12.234038 systemd[1]: sshd@20-172.24.4.127:22-172.24.4.1:54894.service: Deactivated successfully. Jun 25 18:55:12.240182 systemd[1]: session-23.scope: Deactivated successfully. Jun 25 18:55:12.246119 systemd-logind[1430]: Removed session 23. 
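The long run of sshd and systemd-logind messages through here is routine session churn: each inbound connection gets its own sshd@<n>-<local>:22-<peer>:<port>.service instance, logind opens a matching session-<n>.scope for the core user, and both are torn down at logout. Current state can be inspected on the node with:

    # active logind sessions
    loginctl list-sessions
    # per-connection sshd service instances
    systemctl list-units 'sshd@*' --no-pager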
Jun 25 18:55:17.246541 systemd[1]: Started sshd@21-172.24.4.127:22-172.24.4.1:46648.service - OpenSSH per-connection server daemon (172.24.4.1:46648). Jun 25 18:55:18.512556 sshd[4150]: Accepted publickey for core from 172.24.4.1 port 46648 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:55:18.515331 sshd[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:55:18.526716 systemd-logind[1430]: New session 24 of user core. Jun 25 18:55:18.534290 systemd[1]: Started session-24.scope - Session 24 of User core. Jun 25 18:55:19.422372 sshd[4150]: pam_unix(sshd:session): session closed for user core Jun 25 18:55:19.434441 systemd[1]: sshd@21-172.24.4.127:22-172.24.4.1:46648.service: Deactivated successfully. Jun 25 18:55:19.439418 systemd[1]: session-24.scope: Deactivated successfully. Jun 25 18:55:19.442266 systemd-logind[1430]: Session 24 logged out. Waiting for processes to exit. Jun 25 18:55:19.456628 systemd[1]: Started sshd@22-172.24.4.127:22-172.24.4.1:46652.service - OpenSSH per-connection server daemon (172.24.4.1:46652). Jun 25 18:55:19.459622 systemd-logind[1430]: Removed session 24. Jun 25 18:55:20.930448 sshd[4163]: Accepted publickey for core from 172.24.4.1 port 46652 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:55:20.933919 sshd[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:55:20.945139 systemd-logind[1430]: New session 25 of user core. Jun 25 18:55:20.951281 systemd[1]: Started session-25.scope - Session 25 of User core. Jun 25 18:55:23.201494 systemd[1]: run-containerd-runc-k8s.io-345af657b4d0db9b3b7aaf01855e575984fe4fa77d64a5b6bcb873c27d2a1d6d-runc.MsApXV.mount: Deactivated successfully. Jun 25 18:55:23.250881 containerd[1448]: time="2024-06-25T18:55:23.250567432Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 18:55:23.349020 containerd[1448]: time="2024-06-25T18:55:23.348839319Z" level=info msg="StopContainer for \"345af657b4d0db9b3b7aaf01855e575984fe4fa77d64a5b6bcb873c27d2a1d6d\" with timeout 2 (s)" Jun 25 18:55:23.349777 containerd[1448]: time="2024-06-25T18:55:23.349321947Z" level=info msg="StopContainer for \"95d34e52854cadfabd240d5e8fe1567780ab72701b65be463c4ac988635350d2\" with timeout 30 (s)" Jun 25 18:55:23.366652 containerd[1448]: time="2024-06-25T18:55:23.365285804Z" level=info msg="Stop container \"345af657b4d0db9b3b7aaf01855e575984fe4fa77d64a5b6bcb873c27d2a1d6d\" with signal terminated" Jun 25 18:55:23.368166 containerd[1448]: time="2024-06-25T18:55:23.368105087Z" level=info msg="Stop container \"95d34e52854cadfabd240d5e8fe1567780ab72701b65be463c4ac988635350d2\" with signal terminated" Jun 25 18:55:23.387674 systemd-networkd[1368]: lxc_health: Link DOWN Jun 25 18:55:23.387692 systemd-networkd[1368]: lxc_health: Lost carrier Jun 25 18:55:23.420253 systemd[1]: cri-containerd-345af657b4d0db9b3b7aaf01855e575984fe4fa77d64a5b6bcb873c27d2a1d6d.scope: Deactivated successfully. Jun 25 18:55:23.420529 systemd[1]: cri-containerd-345af657b4d0db9b3b7aaf01855e575984fe4fa77d64a5b6bcb873c27d2a1d6d.scope: Consumed 8.838s CPU time. Jun 25 18:55:23.444376 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-345af657b4d0db9b3b7aaf01855e575984fe4fa77d64a5b6bcb873c27d2a1d6d-rootfs.mount: Deactivated successfully. 
Jun 25 18:55:23.448489 systemd[1]: cri-containerd-95d34e52854cadfabd240d5e8fe1567780ab72701b65be463c4ac988635350d2.scope: Deactivated successfully. Jun 25 18:55:23.474185 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-95d34e52854cadfabd240d5e8fe1567780ab72701b65be463c4ac988635350d2-rootfs.mount: Deactivated successfully. Jun 25 18:55:23.488532 containerd[1448]: time="2024-06-25T18:55:23.488424935Z" level=info msg="shim disconnected" id=95d34e52854cadfabd240d5e8fe1567780ab72701b65be463c4ac988635350d2 namespace=k8s.io Jun 25 18:55:23.488768 containerd[1448]: time="2024-06-25T18:55:23.488544612Z" level=warning msg="cleaning up after shim disconnected" id=95d34e52854cadfabd240d5e8fe1567780ab72701b65be463c4ac988635350d2 namespace=k8s.io Jun 25 18:55:23.488768 containerd[1448]: time="2024-06-25T18:55:23.488559169Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:55:23.488768 containerd[1448]: time="2024-06-25T18:55:23.488742658Z" level=info msg="shim disconnected" id=345af657b4d0db9b3b7aaf01855e575984fe4fa77d64a5b6bcb873c27d2a1d6d namespace=k8s.io Jun 25 18:55:23.489011 containerd[1448]: time="2024-06-25T18:55:23.488773557Z" level=warning msg="cleaning up after shim disconnected" id=345af657b4d0db9b3b7aaf01855e575984fe4fa77d64a5b6bcb873c27d2a1d6d namespace=k8s.io Jun 25 18:55:23.489011 containerd[1448]: time="2024-06-25T18:55:23.488967927Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:55:23.547855 containerd[1448]: time="2024-06-25T18:55:23.547798016Z" level=info msg="StopContainer for \"345af657b4d0db9b3b7aaf01855e575984fe4fa77d64a5b6bcb873c27d2a1d6d\" returns successfully" Jun 25 18:55:23.548900 containerd[1448]: time="2024-06-25T18:55:23.548648454Z" level=info msg="StopPodSandbox for \"174ab60af89791398bb4b98117719ed38a2243d968f652784480b8948895c508\"" Jun 25 18:55:23.552134 containerd[1448]: time="2024-06-25T18:55:23.551636898Z" level=info msg="StopContainer for \"95d34e52854cadfabd240d5e8fe1567780ab72701b65be463c4ac988635350d2\" returns successfully" Jun 25 18:55:23.552134 containerd[1448]: time="2024-06-25T18:55:23.552050535Z" level=info msg="StopPodSandbox for \"e33102758f07b540f3b49a56f11b87cd7e406b8a92fa21eb255f5f17dee01243\"" Jun 25 18:55:23.552535 containerd[1448]: time="2024-06-25T18:55:23.548699281Z" level=info msg="Container to stop \"71112f956061a45a5c6f2ff53ec8f4c56d66446c6ce023fe827d767fea667e8f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:55:23.552535 containerd[1448]: time="2024-06-25T18:55:23.552431010Z" level=info msg="Container to stop \"55c9c79d18fa0f34d96d70405d3819a087a7570280eea3ca8f00bf8f5eb58262\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:55:23.552535 containerd[1448]: time="2024-06-25T18:55:23.552450907Z" level=info msg="Container to stop \"a25fd72813bd80f9e894a4a05b4084e3201ef3ac057b04255cf33463ee86ccf7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:55:23.552535 containerd[1448]: time="2024-06-25T18:55:23.552464653Z" level=info msg="Container to stop \"df601b5c4eb30ac63bfee5398fcae2642b91050041bf5b601d4a274540ef3e8a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:55:23.552535 containerd[1448]: time="2024-06-25T18:55:23.552477538Z" level=info msg="Container to stop \"345af657b4d0db9b3b7aaf01855e575984fe4fa77d64a5b6bcb873c27d2a1d6d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:55:23.554352 containerd[1448]: 
time="2024-06-25T18:55:23.552077387Z" level=info msg="Container to stop \"95d34e52854cadfabd240d5e8fe1567780ab72701b65be463c4ac988635350d2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:55:23.555615 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-174ab60af89791398bb4b98117719ed38a2243d968f652784480b8948895c508-shm.mount: Deactivated successfully. Jun 25 18:55:23.564331 systemd[1]: cri-containerd-174ab60af89791398bb4b98117719ed38a2243d968f652784480b8948895c508.scope: Deactivated successfully. Jun 25 18:55:23.568523 systemd[1]: cri-containerd-e33102758f07b540f3b49a56f11b87cd7e406b8a92fa21eb255f5f17dee01243.scope: Deactivated successfully. Jun 25 18:55:23.617832 containerd[1448]: time="2024-06-25T18:55:23.617772354Z" level=info msg="shim disconnected" id=e33102758f07b540f3b49a56f11b87cd7e406b8a92fa21eb255f5f17dee01243 namespace=k8s.io Jun 25 18:55:23.618956 containerd[1448]: time="2024-06-25T18:55:23.618565543Z" level=warning msg="cleaning up after shim disconnected" id=e33102758f07b540f3b49a56f11b87cd7e406b8a92fa21eb255f5f17dee01243 namespace=k8s.io Jun 25 18:55:23.618956 containerd[1448]: time="2024-06-25T18:55:23.618597724Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:55:23.618956 containerd[1448]: time="2024-06-25T18:55:23.617863268Z" level=info msg="shim disconnected" id=174ab60af89791398bb4b98117719ed38a2243d968f652784480b8948895c508 namespace=k8s.io Jun 25 18:55:23.618956 containerd[1448]: time="2024-06-25T18:55:23.618834465Z" level=warning msg="cleaning up after shim disconnected" id=174ab60af89791398bb4b98117719ed38a2243d968f652784480b8948895c508 namespace=k8s.io Jun 25 18:55:23.618956 containerd[1448]: time="2024-06-25T18:55:23.618843662Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:55:23.638373 containerd[1448]: time="2024-06-25T18:55:23.638143946Z" level=info msg="TearDown network for sandbox \"e33102758f07b540f3b49a56f11b87cd7e406b8a92fa21eb255f5f17dee01243\" successfully" Jun 25 18:55:23.638373 containerd[1448]: time="2024-06-25T18:55:23.638200142Z" level=info msg="StopPodSandbox for \"e33102758f07b540f3b49a56f11b87cd7e406b8a92fa21eb255f5f17dee01243\" returns successfully" Jun 25 18:55:23.639619 containerd[1448]: time="2024-06-25T18:55:23.639046012Z" level=info msg="TearDown network for sandbox \"174ab60af89791398bb4b98117719ed38a2243d968f652784480b8948895c508\" successfully" Jun 25 18:55:23.639619 containerd[1448]: time="2024-06-25T18:55:23.639067353Z" level=info msg="StopPodSandbox for \"174ab60af89791398bb4b98117719ed38a2243d968f652784480b8948895c508\" returns successfully" Jun 25 18:55:24.089256 kubelet[2620]: I0625 18:55:24.089193 2620 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/225bfc45-5212-4b33-86be-ccbb0aca6df4-host-proc-sys-net\") pod \"225bfc45-5212-4b33-86be-ccbb0aca6df4\" (UID: \"225bfc45-5212-4b33-86be-ccbb0aca6df4\") " Jun 25 18:55:24.091654 kubelet[2620]: I0625 18:55:24.089405 2620 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/225bfc45-5212-4b33-86be-ccbb0aca6df4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "225bfc45-5212-4b33-86be-ccbb0aca6df4" (UID: "225bfc45-5212-4b33-86be-ccbb0aca6df4"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:55:24.091654 kubelet[2620]: I0625 18:55:24.090090 2620 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/225bfc45-5212-4b33-86be-ccbb0aca6df4-bpf-maps\") pod \"225bfc45-5212-4b33-86be-ccbb0aca6df4\" (UID: \"225bfc45-5212-4b33-86be-ccbb0aca6df4\") " Jun 25 18:55:24.091654 kubelet[2620]: I0625 18:55:24.090243 2620 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/225bfc45-5212-4b33-86be-ccbb0aca6df4-cilium-config-path\") pod \"225bfc45-5212-4b33-86be-ccbb0aca6df4\" (UID: \"225bfc45-5212-4b33-86be-ccbb0aca6df4\") " Jun 25 18:55:24.091654 kubelet[2620]: I0625 18:55:24.090139 2620 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/225bfc45-5212-4b33-86be-ccbb0aca6df4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "225bfc45-5212-4b33-86be-ccbb0aca6df4" (UID: "225bfc45-5212-4b33-86be-ccbb0aca6df4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:55:24.091654 kubelet[2620]: I0625 18:55:24.090299 2620 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/225bfc45-5212-4b33-86be-ccbb0aca6df4-etc-cni-netd\") pod \"225bfc45-5212-4b33-86be-ccbb0aca6df4\" (UID: \"225bfc45-5212-4b33-86be-ccbb0aca6df4\") " Jun 25 18:55:24.091654 kubelet[2620]: I0625 18:55:24.090345 2620 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/225bfc45-5212-4b33-86be-ccbb0aca6df4-cilium-run\") pod \"225bfc45-5212-4b33-86be-ccbb0aca6df4\" (UID: \"225bfc45-5212-4b33-86be-ccbb0aca6df4\") " Jun 25 18:55:24.092188 kubelet[2620]: I0625 18:55:24.090398 2620 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/225bfc45-5212-4b33-86be-ccbb0aca6df4-clustermesh-secrets\") pod \"225bfc45-5212-4b33-86be-ccbb0aca6df4\" (UID: \"225bfc45-5212-4b33-86be-ccbb0aca6df4\") " Jun 25 18:55:24.092188 kubelet[2620]: I0625 18:55:24.090445 2620 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49fsh\" (UniqueName: \"kubernetes.io/projected/225bfc45-5212-4b33-86be-ccbb0aca6df4-kube-api-access-49fsh\") pod \"225bfc45-5212-4b33-86be-ccbb0aca6df4\" (UID: \"225bfc45-5212-4b33-86be-ccbb0aca6df4\") " Jun 25 18:55:24.092188 kubelet[2620]: I0625 18:55:24.090485 2620 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/225bfc45-5212-4b33-86be-ccbb0aca6df4-xtables-lock\") pod \"225bfc45-5212-4b33-86be-ccbb0aca6df4\" (UID: \"225bfc45-5212-4b33-86be-ccbb0aca6df4\") " Jun 25 18:55:24.092188 kubelet[2620]: I0625 18:55:24.090527 2620 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/225bfc45-5212-4b33-86be-ccbb0aca6df4-cilium-cgroup\") pod \"225bfc45-5212-4b33-86be-ccbb0aca6df4\" (UID: \"225bfc45-5212-4b33-86be-ccbb0aca6df4\") " Jun 25 18:55:24.092188 kubelet[2620]: I0625 18:55:24.090567 2620 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/225bfc45-5212-4b33-86be-ccbb0aca6df4-hostproc\") pod 
\"225bfc45-5212-4b33-86be-ccbb0aca6df4\" (UID: \"225bfc45-5212-4b33-86be-ccbb0aca6df4\") " Jun 25 18:55:24.092188 kubelet[2620]: I0625 18:55:24.090610 2620 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/225bfc45-5212-4b33-86be-ccbb0aca6df4-hubble-tls\") pod \"225bfc45-5212-4b33-86be-ccbb0aca6df4\" (UID: \"225bfc45-5212-4b33-86be-ccbb0aca6df4\") " Jun 25 18:55:24.092552 kubelet[2620]: I0625 18:55:24.090648 2620 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/225bfc45-5212-4b33-86be-ccbb0aca6df4-lib-modules\") pod \"225bfc45-5212-4b33-86be-ccbb0aca6df4\" (UID: \"225bfc45-5212-4b33-86be-ccbb0aca6df4\") " Jun 25 18:55:24.092552 kubelet[2620]: I0625 18:55:24.090695 2620 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppjwv\" (UniqueName: \"kubernetes.io/projected/c5a152c9-d3fc-48af-aec7-8affbd9e2cf3-kube-api-access-ppjwv\") pod \"c5a152c9-d3fc-48af-aec7-8affbd9e2cf3\" (UID: \"c5a152c9-d3fc-48af-aec7-8affbd9e2cf3\") " Jun 25 18:55:24.092552 kubelet[2620]: I0625 18:55:24.090741 2620 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c5a152c9-d3fc-48af-aec7-8affbd9e2cf3-cilium-config-path\") pod \"c5a152c9-d3fc-48af-aec7-8affbd9e2cf3\" (UID: \"c5a152c9-d3fc-48af-aec7-8affbd9e2cf3\") " Jun 25 18:55:24.092552 kubelet[2620]: I0625 18:55:24.090783 2620 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/225bfc45-5212-4b33-86be-ccbb0aca6df4-host-proc-sys-kernel\") pod \"225bfc45-5212-4b33-86be-ccbb0aca6df4\" (UID: \"225bfc45-5212-4b33-86be-ccbb0aca6df4\") " Jun 25 18:55:24.092552 kubelet[2620]: I0625 18:55:24.090820 2620 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/225bfc45-5212-4b33-86be-ccbb0aca6df4-cni-path\") pod \"225bfc45-5212-4b33-86be-ccbb0aca6df4\" (UID: \"225bfc45-5212-4b33-86be-ccbb0aca6df4\") " Jun 25 18:55:24.092552 kubelet[2620]: I0625 18:55:24.090896 2620 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/225bfc45-5212-4b33-86be-ccbb0aca6df4-host-proc-sys-net\") on node \"ci-4012-0-0-8-5dd8cf1e6e.novalocal\" DevicePath \"\"" Jun 25 18:55:24.092915 kubelet[2620]: I0625 18:55:24.090924 2620 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/225bfc45-5212-4b33-86be-ccbb0aca6df4-bpf-maps\") on node \"ci-4012-0-0-8-5dd8cf1e6e.novalocal\" DevicePath \"\"" Jun 25 18:55:24.092915 kubelet[2620]: I0625 18:55:24.091037 2620 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/225bfc45-5212-4b33-86be-ccbb0aca6df4-cni-path" (OuterVolumeSpecName: "cni-path") pod "225bfc45-5212-4b33-86be-ccbb0aca6df4" (UID: "225bfc45-5212-4b33-86be-ccbb0aca6df4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:55:24.092915 kubelet[2620]: I0625 18:55:24.091080 2620 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/225bfc45-5212-4b33-86be-ccbb0aca6df4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "225bfc45-5212-4b33-86be-ccbb0aca6df4" (UID: "225bfc45-5212-4b33-86be-ccbb0aca6df4"). 
InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:55:24.092915 kubelet[2620]: I0625 18:55:24.091116 2620 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/225bfc45-5212-4b33-86be-ccbb0aca6df4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "225bfc45-5212-4b33-86be-ccbb0aca6df4" (UID: "225bfc45-5212-4b33-86be-ccbb0aca6df4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:55:24.096300 kubelet[2620]: I0625 18:55:24.096124 2620 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/225bfc45-5212-4b33-86be-ccbb0aca6df4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "225bfc45-5212-4b33-86be-ccbb0aca6df4" (UID: "225bfc45-5212-4b33-86be-ccbb0aca6df4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 25 18:55:24.110003 kubelet[2620]: I0625 18:55:24.109573 2620 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/225bfc45-5212-4b33-86be-ccbb0aca6df4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "225bfc45-5212-4b33-86be-ccbb0aca6df4" (UID: "225bfc45-5212-4b33-86be-ccbb0aca6df4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:55:24.112610 kubelet[2620]: I0625 18:55:24.112520 2620 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/225bfc45-5212-4b33-86be-ccbb0aca6df4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "225bfc45-5212-4b33-86be-ccbb0aca6df4" (UID: "225bfc45-5212-4b33-86be-ccbb0aca6df4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:55:24.116213 kubelet[2620]: I0625 18:55:24.115522 2620 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/225bfc45-5212-4b33-86be-ccbb0aca6df4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "225bfc45-5212-4b33-86be-ccbb0aca6df4" (UID: "225bfc45-5212-4b33-86be-ccbb0aca6df4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:55:24.116213 kubelet[2620]: I0625 18:55:24.115611 2620 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/225bfc45-5212-4b33-86be-ccbb0aca6df4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "225bfc45-5212-4b33-86be-ccbb0aca6df4" (UID: "225bfc45-5212-4b33-86be-ccbb0aca6df4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:55:24.116213 kubelet[2620]: I0625 18:55:24.115640 2620 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/225bfc45-5212-4b33-86be-ccbb0aca6df4-hostproc" (OuterVolumeSpecName: "hostproc") pod "225bfc45-5212-4b33-86be-ccbb0aca6df4" (UID: "225bfc45-5212-4b33-86be-ccbb0aca6df4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:55:24.116213 kubelet[2620]: I0625 18:55:24.115866 2620 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/225bfc45-5212-4b33-86be-ccbb0aca6df4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "225bfc45-5212-4b33-86be-ccbb0aca6df4" (UID: "225bfc45-5212-4b33-86be-ccbb0aca6df4"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 25 18:55:24.121863 kubelet[2620]: I0625 18:55:24.121629 2620 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/225bfc45-5212-4b33-86be-ccbb0aca6df4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "225bfc45-5212-4b33-86be-ccbb0aca6df4" (UID: "225bfc45-5212-4b33-86be-ccbb0aca6df4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 25 18:55:24.122261 kubelet[2620]: I0625 18:55:24.122185 2620 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/225bfc45-5212-4b33-86be-ccbb0aca6df4-kube-api-access-49fsh" (OuterVolumeSpecName: "kube-api-access-49fsh") pod "225bfc45-5212-4b33-86be-ccbb0aca6df4" (UID: "225bfc45-5212-4b33-86be-ccbb0aca6df4"). InnerVolumeSpecName "kube-api-access-49fsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 25 18:55:24.123588 kubelet[2620]: I0625 18:55:24.123451 2620 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5a152c9-d3fc-48af-aec7-8affbd9e2cf3-kube-api-access-ppjwv" (OuterVolumeSpecName: "kube-api-access-ppjwv") pod "c5a152c9-d3fc-48af-aec7-8affbd9e2cf3" (UID: "c5a152c9-d3fc-48af-aec7-8affbd9e2cf3"). InnerVolumeSpecName "kube-api-access-ppjwv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 25 18:55:24.124467 kubelet[2620]: I0625 18:55:24.124362 2620 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5a152c9-d3fc-48af-aec7-8affbd9e2cf3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c5a152c9-d3fc-48af-aec7-8affbd9e2cf3" (UID: "c5a152c9-d3fc-48af-aec7-8affbd9e2cf3"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 25 18:55:24.193020 kubelet[2620]: I0625 18:55:24.191376 2620 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/225bfc45-5212-4b33-86be-ccbb0aca6df4-etc-cni-netd\") on node \"ci-4012-0-0-8-5dd8cf1e6e.novalocal\" DevicePath \"\"" Jun 25 18:55:24.193020 kubelet[2620]: I0625 18:55:24.191443 2620 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/225bfc45-5212-4b33-86be-ccbb0aca6df4-cilium-run\") on node \"ci-4012-0-0-8-5dd8cf1e6e.novalocal\" DevicePath \"\"" Jun 25 18:55:24.193020 kubelet[2620]: I0625 18:55:24.191471 2620 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/225bfc45-5212-4b33-86be-ccbb0aca6df4-clustermesh-secrets\") on node \"ci-4012-0-0-8-5dd8cf1e6e.novalocal\" DevicePath \"\"" Jun 25 18:55:24.193020 kubelet[2620]: I0625 18:55:24.191498 2620 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-49fsh\" (UniqueName: \"kubernetes.io/projected/225bfc45-5212-4b33-86be-ccbb0aca6df4-kube-api-access-49fsh\") on node \"ci-4012-0-0-8-5dd8cf1e6e.novalocal\" DevicePath \"\"" Jun 25 18:55:24.193020 kubelet[2620]: I0625 18:55:24.191524 2620 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/225bfc45-5212-4b33-86be-ccbb0aca6df4-xtables-lock\") on node \"ci-4012-0-0-8-5dd8cf1e6e.novalocal\" DevicePath \"\"" Jun 25 18:55:24.193020 kubelet[2620]: I0625 18:55:24.191547 2620 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/225bfc45-5212-4b33-86be-ccbb0aca6df4-cilium-cgroup\") on node \"ci-4012-0-0-8-5dd8cf1e6e.novalocal\" DevicePath \"\"" Jun 25 18:55:24.193020 kubelet[2620]: I0625 18:55:24.191614 2620 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c5a152c9-d3fc-48af-aec7-8affbd9e2cf3-cilium-config-path\") on node \"ci-4012-0-0-8-5dd8cf1e6e.novalocal\" DevicePath \"\"" Jun 25 18:55:24.193908 kubelet[2620]: I0625 18:55:24.191638 2620 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/225bfc45-5212-4b33-86be-ccbb0aca6df4-hostproc\") on node \"ci-4012-0-0-8-5dd8cf1e6e.novalocal\" DevicePath \"\"" Jun 25 18:55:24.193908 kubelet[2620]: I0625 18:55:24.191664 2620 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/225bfc45-5212-4b33-86be-ccbb0aca6df4-hubble-tls\") on node \"ci-4012-0-0-8-5dd8cf1e6e.novalocal\" DevicePath \"\"" Jun 25 18:55:24.193908 kubelet[2620]: I0625 18:55:24.191688 2620 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/225bfc45-5212-4b33-86be-ccbb0aca6df4-lib-modules\") on node \"ci-4012-0-0-8-5dd8cf1e6e.novalocal\" DevicePath \"\"" Jun 25 18:55:24.193908 kubelet[2620]: I0625 18:55:24.191712 2620 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-ppjwv\" (UniqueName: \"kubernetes.io/projected/c5a152c9-d3fc-48af-aec7-8affbd9e2cf3-kube-api-access-ppjwv\") on node \"ci-4012-0-0-8-5dd8cf1e6e.novalocal\" DevicePath \"\"" Jun 25 18:55:24.193908 kubelet[2620]: I0625 18:55:24.191738 2620 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/225bfc45-5212-4b33-86be-ccbb0aca6df4-host-proc-sys-kernel\") 
on node \"ci-4012-0-0-8-5dd8cf1e6e.novalocal\" DevicePath \"\"" Jun 25 18:55:24.193908 kubelet[2620]: I0625 18:55:24.191762 2620 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/225bfc45-5212-4b33-86be-ccbb0aca6df4-cni-path\") on node \"ci-4012-0-0-8-5dd8cf1e6e.novalocal\" DevicePath \"\"" Jun 25 18:55:24.193908 kubelet[2620]: I0625 18:55:24.191787 2620 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/225bfc45-5212-4b33-86be-ccbb0aca6df4-cilium-config-path\") on node \"ci-4012-0-0-8-5dd8cf1e6e.novalocal\" DevicePath \"\"" Jun 25 18:55:24.200303 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-174ab60af89791398bb4b98117719ed38a2243d968f652784480b8948895c508-rootfs.mount: Deactivated successfully. Jun 25 18:55:24.200521 systemd[1]: var-lib-kubelet-pods-225bfc45\x2d5212\x2d4b33\x2d86be\x2dccbb0aca6df4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jun 25 18:55:24.200697 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e33102758f07b540f3b49a56f11b87cd7e406b8a92fa21eb255f5f17dee01243-rootfs.mount: Deactivated successfully. Jun 25 18:55:24.200846 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e33102758f07b540f3b49a56f11b87cd7e406b8a92fa21eb255f5f17dee01243-shm.mount: Deactivated successfully. Jun 25 18:55:24.201036 systemd[1]: var-lib-kubelet-pods-225bfc45\x2d5212\x2d4b33\x2d86be\x2dccbb0aca6df4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jun 25 18:55:24.201194 systemd[1]: var-lib-kubelet-pods-c5a152c9\x2dd3fc\x2d48af\x2daec7\x2d8affbd9e2cf3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dppjwv.mount: Deactivated successfully. Jun 25 18:55:24.201354 systemd[1]: var-lib-kubelet-pods-225bfc45\x2d5212\x2d4b33\x2d86be\x2dccbb0aca6df4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d49fsh.mount: Deactivated successfully. Jun 25 18:55:24.372820 kubelet[2620]: I0625 18:55:24.372623 2620 scope.go:117] "RemoveContainer" containerID="95d34e52854cadfabd240d5e8fe1567780ab72701b65be463c4ac988635350d2" Jun 25 18:55:24.384005 containerd[1448]: time="2024-06-25T18:55:24.382490568Z" level=info msg="RemoveContainer for \"95d34e52854cadfabd240d5e8fe1567780ab72701b65be463c4ac988635350d2\"" Jun 25 18:55:24.406909 systemd[1]: Removed slice kubepods-burstable-pod225bfc45_5212_4b33_86be_ccbb0aca6df4.slice - libcontainer container kubepods-burstable-pod225bfc45_5212_4b33_86be_ccbb0aca6df4.slice. Jun 25 18:55:24.407728 systemd[1]: kubepods-burstable-pod225bfc45_5212_4b33_86be_ccbb0aca6df4.slice: Consumed 8.928s CPU time. 
Jun 25 18:55:24.410445 containerd[1448]: time="2024-06-25T18:55:24.410330036Z" level=info msg="RemoveContainer for \"95d34e52854cadfabd240d5e8fe1567780ab72701b65be463c4ac988635350d2\" returns successfully" Jun 25 18:55:24.426289 kubelet[2620]: I0625 18:55:24.426235 2620 scope.go:117] "RemoveContainer" containerID="95d34e52854cadfabd240d5e8fe1567780ab72701b65be463c4ac988635350d2" Jun 25 18:55:24.427049 containerd[1448]: time="2024-06-25T18:55:24.426896521Z" level=error msg="ContainerStatus for \"95d34e52854cadfabd240d5e8fe1567780ab72701b65be463c4ac988635350d2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"95d34e52854cadfabd240d5e8fe1567780ab72701b65be463c4ac988635350d2\": not found" Jun 25 18:55:24.429521 systemd[1]: Removed slice kubepods-besteffort-podc5a152c9_d3fc_48af_aec7_8affbd9e2cf3.slice - libcontainer container kubepods-besteffort-podc5a152c9_d3fc_48af_aec7_8affbd9e2cf3.slice. Jun 25 18:55:24.447218 kubelet[2620]: E0625 18:55:24.446775 2620 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"95d34e52854cadfabd240d5e8fe1567780ab72701b65be463c4ac988635350d2\": not found" containerID="95d34e52854cadfabd240d5e8fe1567780ab72701b65be463c4ac988635350d2" Jun 25 18:55:24.457230 kubelet[2620]: I0625 18:55:24.446995 2620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"95d34e52854cadfabd240d5e8fe1567780ab72701b65be463c4ac988635350d2"} err="failed to get container status \"95d34e52854cadfabd240d5e8fe1567780ab72701b65be463c4ac988635350d2\": rpc error: code = NotFound desc = an error occurred when try to find container \"95d34e52854cadfabd240d5e8fe1567780ab72701b65be463c4ac988635350d2\": not found" Jun 25 18:55:24.458730 kubelet[2620]: I0625 18:55:24.458080 2620 scope.go:117] "RemoveContainer" containerID="345af657b4d0db9b3b7aaf01855e575984fe4fa77d64a5b6bcb873c27d2a1d6d" Jun 25 18:55:24.466015 containerd[1448]: time="2024-06-25T18:55:24.465222858Z" level=info msg="RemoveContainer for \"345af657b4d0db9b3b7aaf01855e575984fe4fa77d64a5b6bcb873c27d2a1d6d\"" Jun 25 18:55:24.479493 containerd[1448]: time="2024-06-25T18:55:24.479269629Z" level=info msg="RemoveContainer for \"345af657b4d0db9b3b7aaf01855e575984fe4fa77d64a5b6bcb873c27d2a1d6d\" returns successfully" Jun 25 18:55:24.479957 kubelet[2620]: I0625 18:55:24.479674 2620 scope.go:117] "RemoveContainer" containerID="71112f956061a45a5c6f2ff53ec8f4c56d66446c6ce023fe827d767fea667e8f" Jun 25 18:55:24.481967 containerd[1448]: time="2024-06-25T18:55:24.481709389Z" level=info msg="RemoveContainer for \"71112f956061a45a5c6f2ff53ec8f4c56d66446c6ce023fe827d767fea667e8f\"" Jun 25 18:55:24.488846 containerd[1448]: time="2024-06-25T18:55:24.488543571Z" level=info msg="RemoveContainer for \"71112f956061a45a5c6f2ff53ec8f4c56d66446c6ce023fe827d767fea667e8f\" returns successfully" Jun 25 18:55:24.490426 kubelet[2620]: I0625 18:55:24.489791 2620 scope.go:117] "RemoveContainer" containerID="a25fd72813bd80f9e894a4a05b4084e3201ef3ac057b04255cf33463ee86ccf7" Jun 25 18:55:24.491468 containerd[1448]: time="2024-06-25T18:55:24.491439930Z" level=info msg="RemoveContainer for \"a25fd72813bd80f9e894a4a05b4084e3201ef3ac057b04255cf33463ee86ccf7\"" Jun 25 18:55:24.496712 containerd[1448]: time="2024-06-25T18:55:24.496672945Z" level=info msg="RemoveContainer for \"a25fd72813bd80f9e894a4a05b4084e3201ef3ac057b04255cf33463ee86ccf7\" returns successfully" Jun 25 18:55:24.496987 
kubelet[2620]: I0625 18:55:24.496907 2620 scope.go:117] "RemoveContainer" containerID="df601b5c4eb30ac63bfee5398fcae2642b91050041bf5b601d4a274540ef3e8a" Jun 25 18:55:24.498520 containerd[1448]: time="2024-06-25T18:55:24.498226731Z" level=info msg="RemoveContainer for \"df601b5c4eb30ac63bfee5398fcae2642b91050041bf5b601d4a274540ef3e8a\"" Jun 25 18:55:24.501970 containerd[1448]: time="2024-06-25T18:55:24.501883637Z" level=info msg="RemoveContainer for \"df601b5c4eb30ac63bfee5398fcae2642b91050041bf5b601d4a274540ef3e8a\" returns successfully" Jun 25 18:55:24.502137 kubelet[2620]: I0625 18:55:24.502117 2620 scope.go:117] "RemoveContainer" containerID="55c9c79d18fa0f34d96d70405d3819a087a7570280eea3ca8f00bf8f5eb58262" Jun 25 18:55:24.503156 containerd[1448]: time="2024-06-25T18:55:24.503113907Z" level=info msg="RemoveContainer for \"55c9c79d18fa0f34d96d70405d3819a087a7570280eea3ca8f00bf8f5eb58262\"" Jun 25 18:55:24.506329 containerd[1448]: time="2024-06-25T18:55:24.506264270Z" level=info msg="RemoveContainer for \"55c9c79d18fa0f34d96d70405d3819a087a7570280eea3ca8f00bf8f5eb58262\" returns successfully" Jun 25 18:55:24.506521 kubelet[2620]: I0625 18:55:24.506504 2620 scope.go:117] "RemoveContainer" containerID="345af657b4d0db9b3b7aaf01855e575984fe4fa77d64a5b6bcb873c27d2a1d6d" Jun 25 18:55:24.506958 containerd[1448]: time="2024-06-25T18:55:24.506906321Z" level=error msg="ContainerStatus for \"345af657b4d0db9b3b7aaf01855e575984fe4fa77d64a5b6bcb873c27d2a1d6d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"345af657b4d0db9b3b7aaf01855e575984fe4fa77d64a5b6bcb873c27d2a1d6d\": not found" Jun 25 18:55:24.507117 kubelet[2620]: E0625 18:55:24.507047 2620 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"345af657b4d0db9b3b7aaf01855e575984fe4fa77d64a5b6bcb873c27d2a1d6d\": not found" containerID="345af657b4d0db9b3b7aaf01855e575984fe4fa77d64a5b6bcb873c27d2a1d6d" Jun 25 18:55:24.507155 kubelet[2620]: I0625 18:55:24.507116 2620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"345af657b4d0db9b3b7aaf01855e575984fe4fa77d64a5b6bcb873c27d2a1d6d"} err="failed to get container status \"345af657b4d0db9b3b7aaf01855e575984fe4fa77d64a5b6bcb873c27d2a1d6d\": rpc error: code = NotFound desc = an error occurred when try to find container \"345af657b4d0db9b3b7aaf01855e575984fe4fa77d64a5b6bcb873c27d2a1d6d\": not found" Jun 25 18:55:24.507155 kubelet[2620]: I0625 18:55:24.507138 2620 scope.go:117] "RemoveContainer" containerID="71112f956061a45a5c6f2ff53ec8f4c56d66446c6ce023fe827d767fea667e8f" Jun 25 18:55:24.507320 containerd[1448]: time="2024-06-25T18:55:24.507290152Z" level=error msg="ContainerStatus for \"71112f956061a45a5c6f2ff53ec8f4c56d66446c6ce023fe827d767fea667e8f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"71112f956061a45a5c6f2ff53ec8f4c56d66446c6ce023fe827d767fea667e8f\": not found" Jun 25 18:55:24.507412 kubelet[2620]: E0625 18:55:24.507391 2620 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"71112f956061a45a5c6f2ff53ec8f4c56d66446c6ce023fe827d767fea667e8f\": not found" containerID="71112f956061a45a5c6f2ff53ec8f4c56d66446c6ce023fe827d767fea667e8f" Jun 25 18:55:24.507481 kubelet[2620]: I0625 18:55:24.507415 2620 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"containerd","ID":"71112f956061a45a5c6f2ff53ec8f4c56d66446c6ce023fe827d767fea667e8f"} err="failed to get container status \"71112f956061a45a5c6f2ff53ec8f4c56d66446c6ce023fe827d767fea667e8f\": rpc error: code = NotFound desc = an error occurred when try to find container \"71112f956061a45a5c6f2ff53ec8f4c56d66446c6ce023fe827d767fea667e8f\": not found" Jun 25 18:55:24.507516 kubelet[2620]: I0625 18:55:24.507483 2620 scope.go:117] "RemoveContainer" containerID="a25fd72813bd80f9e894a4a05b4084e3201ef3ac057b04255cf33463ee86ccf7" Jun 25 18:55:24.507735 containerd[1448]: time="2024-06-25T18:55:24.507704350Z" level=error msg="ContainerStatus for \"a25fd72813bd80f9e894a4a05b4084e3201ef3ac057b04255cf33463ee86ccf7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a25fd72813bd80f9e894a4a05b4084e3201ef3ac057b04255cf33463ee86ccf7\": not found" Jun 25 18:55:24.507831 kubelet[2620]: E0625 18:55:24.507802 2620 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a25fd72813bd80f9e894a4a05b4084e3201ef3ac057b04255cf33463ee86ccf7\": not found" containerID="a25fd72813bd80f9e894a4a05b4084e3201ef3ac057b04255cf33463ee86ccf7" Jun 25 18:55:24.507878 kubelet[2620]: I0625 18:55:24.507830 2620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a25fd72813bd80f9e894a4a05b4084e3201ef3ac057b04255cf33463ee86ccf7"} err="failed to get container status \"a25fd72813bd80f9e894a4a05b4084e3201ef3ac057b04255cf33463ee86ccf7\": rpc error: code = NotFound desc = an error occurred when try to find container \"a25fd72813bd80f9e894a4a05b4084e3201ef3ac057b04255cf33463ee86ccf7\": not found" Jun 25 18:55:24.507878 kubelet[2620]: I0625 18:55:24.507847 2620 scope.go:117] "RemoveContainer" containerID="df601b5c4eb30ac63bfee5398fcae2642b91050041bf5b601d4a274540ef3e8a" Jun 25 18:55:24.508365 containerd[1448]: time="2024-06-25T18:55:24.508099972Z" level=error msg="ContainerStatus for \"df601b5c4eb30ac63bfee5398fcae2642b91050041bf5b601d4a274540ef3e8a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"df601b5c4eb30ac63bfee5398fcae2642b91050041bf5b601d4a274540ef3e8a\": not found" Jun 25 18:55:24.508505 kubelet[2620]: E0625 18:55:24.508249 2620 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"df601b5c4eb30ac63bfee5398fcae2642b91050041bf5b601d4a274540ef3e8a\": not found" containerID="df601b5c4eb30ac63bfee5398fcae2642b91050041bf5b601d4a274540ef3e8a" Jun 25 18:55:24.508505 kubelet[2620]: I0625 18:55:24.508456 2620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"df601b5c4eb30ac63bfee5398fcae2642b91050041bf5b601d4a274540ef3e8a"} err="failed to get container status \"df601b5c4eb30ac63bfee5398fcae2642b91050041bf5b601d4a274540ef3e8a\": rpc error: code = NotFound desc = an error occurred when try to find container \"df601b5c4eb30ac63bfee5398fcae2642b91050041bf5b601d4a274540ef3e8a\": not found" Jun 25 18:55:24.508505 kubelet[2620]: I0625 18:55:24.508482 2620 scope.go:117] "RemoveContainer" containerID="55c9c79d18fa0f34d96d70405d3819a087a7570280eea3ca8f00bf8f5eb58262" Jun 25 18:55:24.508904 containerd[1448]: time="2024-06-25T18:55:24.508817087Z" level=error msg="ContainerStatus for 
\"55c9c79d18fa0f34d96d70405d3819a087a7570280eea3ca8f00bf8f5eb58262\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"55c9c79d18fa0f34d96d70405d3819a087a7570280eea3ca8f00bf8f5eb58262\": not found" Jun 25 18:55:24.508984 kubelet[2620]: E0625 18:55:24.508963 2620 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"55c9c79d18fa0f34d96d70405d3819a087a7570280eea3ca8f00bf8f5eb58262\": not found" containerID="55c9c79d18fa0f34d96d70405d3819a087a7570280eea3ca8f00bf8f5eb58262" Jun 25 18:55:24.509025 kubelet[2620]: I0625 18:55:24.508986 2620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"55c9c79d18fa0f34d96d70405d3819a087a7570280eea3ca8f00bf8f5eb58262"} err="failed to get container status \"55c9c79d18fa0f34d96d70405d3819a087a7570280eea3ca8f00bf8f5eb58262\": rpc error: code = NotFound desc = an error occurred when try to find container \"55c9c79d18fa0f34d96d70405d3819a087a7570280eea3ca8f00bf8f5eb58262\": not found" Jun 25 18:55:24.770426 kubelet[2620]: I0625 18:55:24.769067 2620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="225bfc45-5212-4b33-86be-ccbb0aca6df4" path="/var/lib/kubelet/pods/225bfc45-5212-4b33-86be-ccbb0aca6df4/volumes" Jun 25 18:55:24.770616 kubelet[2620]: I0625 18:55:24.770516 2620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5a152c9-d3fc-48af-aec7-8affbd9e2cf3" path="/var/lib/kubelet/pods/c5a152c9-d3fc-48af-aec7-8affbd9e2cf3/volumes" Jun 25 18:55:24.910854 kubelet[2620]: E0625 18:55:24.910755 2620 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jun 25 18:55:25.283010 sshd[4163]: pam_unix(sshd:session): session closed for user core Jun 25 18:55:25.294415 systemd[1]: sshd@22-172.24.4.127:22-172.24.4.1:46652.service: Deactivated successfully. Jun 25 18:55:25.299139 systemd[1]: session-25.scope: Deactivated successfully. Jun 25 18:55:25.299552 systemd[1]: session-25.scope: Consumed 1.183s CPU time. Jun 25 18:55:25.301535 systemd-logind[1430]: Session 25 logged out. Waiting for processes to exit. Jun 25 18:55:25.310489 systemd[1]: Started sshd@23-172.24.4.127:22-172.24.4.1:49362.service - OpenSSH per-connection server daemon (172.24.4.1:49362). Jun 25 18:55:25.313763 systemd-logind[1430]: Removed session 25. Jun 25 18:55:26.630378 sshd[4326]: Accepted publickey for core from 172.24.4.1 port 49362 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:55:26.633329 sshd[4326]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:55:26.645562 systemd-logind[1430]: New session 26 of user core. Jun 25 18:55:26.651379 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jun 25 18:55:28.007996 kubelet[2620]: I0625 18:55:28.006754 2620 setters.go:580] "Node became not ready" node="ci-4012-0-0-8-5dd8cf1e6e.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-06-25T18:55:28Z","lastTransitionTime":"2024-06-25T18:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jun 25 18:55:28.416603 kubelet[2620]: I0625 18:55:28.411566 2620 topology_manager.go:215] "Topology Admit Handler" podUID="519a96eb-0e70-44a6-b58a-57ae2b317264" podNamespace="kube-system" podName="cilium-cnlp7" Jun 25 18:55:28.422670 kubelet[2620]: E0625 18:55:28.422631 2620 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c5a152c9-d3fc-48af-aec7-8affbd9e2cf3" containerName="cilium-operator" Jun 25 18:55:28.422670 kubelet[2620]: E0625 18:55:28.422664 2620 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="225bfc45-5212-4b33-86be-ccbb0aca6df4" containerName="mount-bpf-fs" Jun 25 18:55:28.422670 kubelet[2620]: E0625 18:55:28.422675 2620 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="225bfc45-5212-4b33-86be-ccbb0aca6df4" containerName="mount-cgroup" Jun 25 18:55:28.422857 kubelet[2620]: E0625 18:55:28.422683 2620 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="225bfc45-5212-4b33-86be-ccbb0aca6df4" containerName="apply-sysctl-overwrites" Jun 25 18:55:28.422857 kubelet[2620]: E0625 18:55:28.422690 2620 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="225bfc45-5212-4b33-86be-ccbb0aca6df4" containerName="clean-cilium-state" Jun 25 18:55:28.422857 kubelet[2620]: E0625 18:55:28.422698 2620 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="225bfc45-5212-4b33-86be-ccbb0aca6df4" containerName="cilium-agent" Jun 25 18:55:28.422857 kubelet[2620]: I0625 18:55:28.422725 2620 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5a152c9-d3fc-48af-aec7-8affbd9e2cf3" containerName="cilium-operator" Jun 25 18:55:28.422857 kubelet[2620]: I0625 18:55:28.422732 2620 memory_manager.go:354] "RemoveStaleState removing state" podUID="225bfc45-5212-4b33-86be-ccbb0aca6df4" containerName="cilium-agent" Jun 25 18:55:28.476703 systemd[1]: Created slice kubepods-burstable-pod519a96eb_0e70_44a6_b58a_57ae2b317264.slice - libcontainer container kubepods-burstable-pod519a96eb_0e70_44a6_b58a_57ae2b317264.slice. 
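The node condition recorded above (Ready=False, reason KubeletNotReady, because the CNI plugin is not initialized once the Cilium agent has been removed) can be read back through the API. A minimal sketch with the `kubernetes` Python client, assuming kubeconfig access to this cluster:

```python
# Sketch only: read the Ready condition that the kubelet sets in the entry above.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

node = v1.read_node("ci-4012-0-0-8-5dd8cf1e6e.novalocal")
ready = next(c for c in node.status.conditions if c.type == "Ready")
# While the CNI plugin is not initialized this prints status "False" with
# reason KubeletNotReady, matching the journal entry above.
print(ready.status, ready.reason, ready.message)
```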
Jun 25 18:55:28.533926 kubelet[2620]: I0625 18:55:28.533722 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/519a96eb-0e70-44a6-b58a-57ae2b317264-cni-path\") pod \"cilium-cnlp7\" (UID: \"519a96eb-0e70-44a6-b58a-57ae2b317264\") " pod="kube-system/cilium-cnlp7" Jun 25 18:55:28.534419 kubelet[2620]: I0625 18:55:28.534359 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/519a96eb-0e70-44a6-b58a-57ae2b317264-xtables-lock\") pod \"cilium-cnlp7\" (UID: \"519a96eb-0e70-44a6-b58a-57ae2b317264\") " pod="kube-system/cilium-cnlp7" Jun 25 18:55:28.534743 kubelet[2620]: I0625 18:55:28.534663 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/519a96eb-0e70-44a6-b58a-57ae2b317264-hubble-tls\") pod \"cilium-cnlp7\" (UID: \"519a96eb-0e70-44a6-b58a-57ae2b317264\") " pod="kube-system/cilium-cnlp7" Jun 25 18:55:28.535011 kubelet[2620]: I0625 18:55:28.534982 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/519a96eb-0e70-44a6-b58a-57ae2b317264-hostproc\") pod \"cilium-cnlp7\" (UID: \"519a96eb-0e70-44a6-b58a-57ae2b317264\") " pod="kube-system/cilium-cnlp7" Jun 25 18:55:28.535302 kubelet[2620]: I0625 18:55:28.535261 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/519a96eb-0e70-44a6-b58a-57ae2b317264-cilium-config-path\") pod \"cilium-cnlp7\" (UID: \"519a96eb-0e70-44a6-b58a-57ae2b317264\") " pod="kube-system/cilium-cnlp7" Jun 25 18:55:28.535838 kubelet[2620]: I0625 18:55:28.535753 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/519a96eb-0e70-44a6-b58a-57ae2b317264-cilium-ipsec-secrets\") pod \"cilium-cnlp7\" (UID: \"519a96eb-0e70-44a6-b58a-57ae2b317264\") " pod="kube-system/cilium-cnlp7" Jun 25 18:55:28.536321 kubelet[2620]: I0625 18:55:28.536146 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/519a96eb-0e70-44a6-b58a-57ae2b317264-cilium-run\") pod \"cilium-cnlp7\" (UID: \"519a96eb-0e70-44a6-b58a-57ae2b317264\") " pod="kube-system/cilium-cnlp7" Jun 25 18:55:28.536321 kubelet[2620]: I0625 18:55:28.536253 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/519a96eb-0e70-44a6-b58a-57ae2b317264-clustermesh-secrets\") pod \"cilium-cnlp7\" (UID: \"519a96eb-0e70-44a6-b58a-57ae2b317264\") " pod="kube-system/cilium-cnlp7" Jun 25 18:55:28.536812 kubelet[2620]: I0625 18:55:28.536477 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/519a96eb-0e70-44a6-b58a-57ae2b317264-cilium-cgroup\") pod \"cilium-cnlp7\" (UID: \"519a96eb-0e70-44a6-b58a-57ae2b317264\") " pod="kube-system/cilium-cnlp7" Jun 25 18:55:28.537416 kubelet[2620]: I0625 18:55:28.537131 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/519a96eb-0e70-44a6-b58a-57ae2b317264-bpf-maps\") pod \"cilium-cnlp7\" (UID: \"519a96eb-0e70-44a6-b58a-57ae2b317264\") " pod="kube-system/cilium-cnlp7" Jun 25 18:55:28.537844 kubelet[2620]: I0625 18:55:28.537360 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/519a96eb-0e70-44a6-b58a-57ae2b317264-host-proc-sys-kernel\") pod \"cilium-cnlp7\" (UID: \"519a96eb-0e70-44a6-b58a-57ae2b317264\") " pod="kube-system/cilium-cnlp7" Jun 25 18:55:28.537844 kubelet[2620]: I0625 18:55:28.537777 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/519a96eb-0e70-44a6-b58a-57ae2b317264-host-proc-sys-net\") pod \"cilium-cnlp7\" (UID: \"519a96eb-0e70-44a6-b58a-57ae2b317264\") " pod="kube-system/cilium-cnlp7" Jun 25 18:55:28.538299 kubelet[2620]: I0625 18:55:28.538212 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtv9d\" (UniqueName: \"kubernetes.io/projected/519a96eb-0e70-44a6-b58a-57ae2b317264-kube-api-access-jtv9d\") pod \"cilium-cnlp7\" (UID: \"519a96eb-0e70-44a6-b58a-57ae2b317264\") " pod="kube-system/cilium-cnlp7" Jun 25 18:55:28.538391 kubelet[2620]: I0625 18:55:28.538342 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/519a96eb-0e70-44a6-b58a-57ae2b317264-etc-cni-netd\") pod \"cilium-cnlp7\" (UID: \"519a96eb-0e70-44a6-b58a-57ae2b317264\") " pod="kube-system/cilium-cnlp7" Jun 25 18:55:28.538607 kubelet[2620]: I0625 18:55:28.538458 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/519a96eb-0e70-44a6-b58a-57ae2b317264-lib-modules\") pod \"cilium-cnlp7\" (UID: \"519a96eb-0e70-44a6-b58a-57ae2b317264\") " pod="kube-system/cilium-cnlp7" Jun 25 18:55:28.567305 sshd[4326]: pam_unix(sshd:session): session closed for user core Jun 25 18:55:28.578572 systemd[1]: sshd@23-172.24.4.127:22-172.24.4.1:49362.service: Deactivated successfully. Jun 25 18:55:28.581659 systemd[1]: session-26.scope: Deactivated successfully. Jun 25 18:55:28.582037 systemd[1]: session-26.scope: Consumed 1.137s CPU time. Jun 25 18:55:28.585231 systemd-logind[1430]: Session 26 logged out. Waiting for processes to exit. Jun 25 18:55:28.592626 systemd[1]: Started sshd@24-172.24.4.127:22-172.24.4.1:49376.service - OpenSSH per-connection server daemon (172.24.4.1:49376). Jun 25 18:55:28.596555 systemd-logind[1430]: Removed session 26. Jun 25 18:55:28.783813 containerd[1448]: time="2024-06-25T18:55:28.782744751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cnlp7,Uid:519a96eb-0e70-44a6-b58a-57ae2b317264,Namespace:kube-system,Attempt:0,}" Jun 25 18:55:28.817683 containerd[1448]: time="2024-06-25T18:55:28.817015648Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:55:28.817683 containerd[1448]: time="2024-06-25T18:55:28.817086744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:55:28.817683 containerd[1448]: time="2024-06-25T18:55:28.817112633Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:55:28.817683 containerd[1448]: time="2024-06-25T18:55:28.817132081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:55:28.841949 systemd[1]: Started cri-containerd-970764bf73ccde09059031502d684dd1e273d7e78f1719036018641c6988f662.scope - libcontainer container 970764bf73ccde09059031502d684dd1e273d7e78f1719036018641c6988f662. Jun 25 18:55:28.871630 containerd[1448]: time="2024-06-25T18:55:28.871558212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cnlp7,Uid:519a96eb-0e70-44a6-b58a-57ae2b317264,Namespace:kube-system,Attempt:0,} returns sandbox id \"970764bf73ccde09059031502d684dd1e273d7e78f1719036018641c6988f662\"" Jun 25 18:55:28.886918 containerd[1448]: time="2024-06-25T18:55:28.886785400Z" level=info msg="CreateContainer within sandbox \"970764bf73ccde09059031502d684dd1e273d7e78f1719036018641c6988f662\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 25 18:55:28.923428 containerd[1448]: time="2024-06-25T18:55:28.923300678Z" level=info msg="CreateContainer within sandbox \"970764bf73ccde09059031502d684dd1e273d7e78f1719036018641c6988f662\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"dd7ee74fedbe5fc9c619f61c5e467ea731e2c9a18aa0f7324dc1d0230795d5c1\"" Jun 25 18:55:28.924270 containerd[1448]: time="2024-06-25T18:55:28.924051285Z" level=info msg="StartContainer for \"dd7ee74fedbe5fc9c619f61c5e467ea731e2c9a18aa0f7324dc1d0230795d5c1\"" Jun 25 18:55:28.954126 systemd[1]: Started cri-containerd-dd7ee74fedbe5fc9c619f61c5e467ea731e2c9a18aa0f7324dc1d0230795d5c1.scope - libcontainer container dd7ee74fedbe5fc9c619f61c5e467ea731e2c9a18aa0f7324dc1d0230795d5c1. Jun 25 18:55:29.005286 containerd[1448]: time="2024-06-25T18:55:29.005219212Z" level=info msg="StartContainer for \"dd7ee74fedbe5fc9c619f61c5e467ea731e2c9a18aa0f7324dc1d0230795d5c1\" returns successfully" Jun 25 18:55:29.015378 systemd[1]: cri-containerd-dd7ee74fedbe5fc9c619f61c5e467ea731e2c9a18aa0f7324dc1d0230795d5c1.scope: Deactivated successfully. 
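The create/start/exit sequence above is the first of the new pod's setup containers (mount-cgroup); the following entries repeat the same pattern for apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state before cilium-agent starts. A minimal sketch for checking how far that sequence has progressed, assuming these run as init containers and that kubeconfig access is available:

```python
# Sketch only: report the state of cilium-cnlp7's init containers, which is
# what the containerd create/start/exit entries above correspond to.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pod = v1.read_namespaced_pod("cilium-cnlp7", "kube-system")
for status in pod.status.init_container_statuses or []:
    state = status.state
    if state.terminated:
        print(f"{status.name}: exited {state.terminated.exit_code}")
    elif state.running:
        print(f"{status.name}: running")
    else:
        print(f"{status.name}: waiting ({state.waiting.reason})")
```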
Jun 25 18:55:29.103668 containerd[1448]: time="2024-06-25T18:55:29.103475890Z" level=info msg="shim disconnected" id=dd7ee74fedbe5fc9c619f61c5e467ea731e2c9a18aa0f7324dc1d0230795d5c1 namespace=k8s.io Jun 25 18:55:29.103668 containerd[1448]: time="2024-06-25T18:55:29.103570290Z" level=warning msg="cleaning up after shim disconnected" id=dd7ee74fedbe5fc9c619f61c5e467ea731e2c9a18aa0f7324dc1d0230795d5c1 namespace=k8s.io Jun 25 18:55:29.103668 containerd[1448]: time="2024-06-25T18:55:29.103591961Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:55:29.419854 containerd[1448]: time="2024-06-25T18:55:29.418867329Z" level=info msg="CreateContainer within sandbox \"970764bf73ccde09059031502d684dd1e273d7e78f1719036018641c6988f662\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 25 18:55:29.450089 containerd[1448]: time="2024-06-25T18:55:29.449562966Z" level=info msg="CreateContainer within sandbox \"970764bf73ccde09059031502d684dd1e273d7e78f1719036018641c6988f662\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6543a6f4ab9caafd5faa27bfa2fb398c367ae152cc88f02da08363a00c89a300\"" Jun 25 18:55:29.453041 containerd[1448]: time="2024-06-25T18:55:29.452261560Z" level=info msg="StartContainer for \"6543a6f4ab9caafd5faa27bfa2fb398c367ae152cc88f02da08363a00c89a300\"" Jun 25 18:55:29.529236 systemd[1]: Started cri-containerd-6543a6f4ab9caafd5faa27bfa2fb398c367ae152cc88f02da08363a00c89a300.scope - libcontainer container 6543a6f4ab9caafd5faa27bfa2fb398c367ae152cc88f02da08363a00c89a300. Jun 25 18:55:29.563186 systemd[1]: cri-containerd-6543a6f4ab9caafd5faa27bfa2fb398c367ae152cc88f02da08363a00c89a300.scope: Deactivated successfully. Jun 25 18:55:29.564889 containerd[1448]: time="2024-06-25T18:55:29.564852038Z" level=info msg="StartContainer for \"6543a6f4ab9caafd5faa27bfa2fb398c367ae152cc88f02da08363a00c89a300\" returns successfully" Jun 25 18:55:29.598327 containerd[1448]: time="2024-06-25T18:55:29.598244846Z" level=info msg="shim disconnected" id=6543a6f4ab9caafd5faa27bfa2fb398c367ae152cc88f02da08363a00c89a300 namespace=k8s.io Jun 25 18:55:29.598327 containerd[1448]: time="2024-06-25T18:55:29.598320089Z" level=warning msg="cleaning up after shim disconnected" id=6543a6f4ab9caafd5faa27bfa2fb398c367ae152cc88f02da08363a00c89a300 namespace=k8s.io Jun 25 18:55:29.598327 containerd[1448]: time="2024-06-25T18:55:29.598331200Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:55:29.912885 kubelet[2620]: E0625 18:55:29.912782 2620 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jun 25 18:55:30.184645 sshd[4337]: Accepted publickey for core from 172.24.4.1 port 49376 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:55:30.187429 sshd[4337]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:55:30.197495 systemd-logind[1430]: New session 27 of user core. Jun 25 18:55:30.207224 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jun 25 18:55:30.424180 containerd[1448]: time="2024-06-25T18:55:30.424089151Z" level=info msg="CreateContainer within sandbox \"970764bf73ccde09059031502d684dd1e273d7e78f1719036018641c6988f662\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 25 18:55:30.474315 containerd[1448]: time="2024-06-25T18:55:30.473765650Z" level=info msg="CreateContainer within sandbox \"970764bf73ccde09059031502d684dd1e273d7e78f1719036018641c6988f662\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fec5308ca9db98dc7f1282d3981198ef5d376af5f4883ea985ca3f40bf7acbfa\"" Jun 25 18:55:30.478401 containerd[1448]: time="2024-06-25T18:55:30.478288524Z" level=info msg="StartContainer for \"fec5308ca9db98dc7f1282d3981198ef5d376af5f4883ea985ca3f40bf7acbfa\"" Jun 25 18:55:30.533090 systemd[1]: Started cri-containerd-fec5308ca9db98dc7f1282d3981198ef5d376af5f4883ea985ca3f40bf7acbfa.scope - libcontainer container fec5308ca9db98dc7f1282d3981198ef5d376af5f4883ea985ca3f40bf7acbfa. Jun 25 18:55:30.571627 containerd[1448]: time="2024-06-25T18:55:30.571583196Z" level=info msg="StartContainer for \"fec5308ca9db98dc7f1282d3981198ef5d376af5f4883ea985ca3f40bf7acbfa\" returns successfully" Jun 25 18:55:30.578287 systemd[1]: cri-containerd-fec5308ca9db98dc7f1282d3981198ef5d376af5f4883ea985ca3f40bf7acbfa.scope: Deactivated successfully. Jun 25 18:55:30.607836 containerd[1448]: time="2024-06-25T18:55:30.607724675Z" level=info msg="shim disconnected" id=fec5308ca9db98dc7f1282d3981198ef5d376af5f4883ea985ca3f40bf7acbfa namespace=k8s.io Jun 25 18:55:30.607836 containerd[1448]: time="2024-06-25T18:55:30.607823132Z" level=warning msg="cleaning up after shim disconnected" id=fec5308ca9db98dc7f1282d3981198ef5d376af5f4883ea985ca3f40bf7acbfa namespace=k8s.io Jun 25 18:55:30.607836 containerd[1448]: time="2024-06-25T18:55:30.607835716Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:55:30.649301 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fec5308ca9db98dc7f1282d3981198ef5d376af5f4883ea985ca3f40bf7acbfa-rootfs.mount: Deactivated successfully. Jun 25 18:55:30.824368 sshd[4337]: pam_unix(sshd:session): session closed for user core Jun 25 18:55:30.843352 systemd[1]: sshd@24-172.24.4.127:22-172.24.4.1:49376.service: Deactivated successfully. Jun 25 18:55:30.850481 systemd[1]: session-27.scope: Deactivated successfully. Jun 25 18:55:30.855878 systemd-logind[1430]: Session 27 logged out. Waiting for processes to exit. Jun 25 18:55:30.876051 systemd[1]: Started sshd@25-172.24.4.127:22-172.24.4.1:49392.service - OpenSSH per-connection server daemon (172.24.4.1:49392). Jun 25 18:55:30.880229 systemd-logind[1430]: Removed session 27. 
Jun 25 18:55:31.440298 containerd[1448]: time="2024-06-25T18:55:31.439968670Z" level=info msg="CreateContainer within sandbox \"970764bf73ccde09059031502d684dd1e273d7e78f1719036018641c6988f662\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 25 18:55:31.493491 containerd[1448]: time="2024-06-25T18:55:31.493327066Z" level=info msg="CreateContainer within sandbox \"970764bf73ccde09059031502d684dd1e273d7e78f1719036018641c6988f662\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0c8f105cfff5d6e95850cd5aa022a0621218ec689f75b1ef6d9e319f3d22bb11\"" Jun 25 18:55:31.501025 containerd[1448]: time="2024-06-25T18:55:31.499265804Z" level=info msg="StartContainer for \"0c8f105cfff5d6e95850cd5aa022a0621218ec689f75b1ef6d9e319f3d22bb11\"" Jun 25 18:55:31.556100 systemd[1]: Started cri-containerd-0c8f105cfff5d6e95850cd5aa022a0621218ec689f75b1ef6d9e319f3d22bb11.scope - libcontainer container 0c8f105cfff5d6e95850cd5aa022a0621218ec689f75b1ef6d9e319f3d22bb11. Jun 25 18:55:31.583782 systemd[1]: cri-containerd-0c8f105cfff5d6e95850cd5aa022a0621218ec689f75b1ef6d9e319f3d22bb11.scope: Deactivated successfully. Jun 25 18:55:31.593254 containerd[1448]: time="2024-06-25T18:55:31.593132667Z" level=info msg="StartContainer for \"0c8f105cfff5d6e95850cd5aa022a0621218ec689f75b1ef6d9e319f3d22bb11\" returns successfully" Jun 25 18:55:31.623483 containerd[1448]: time="2024-06-25T18:55:31.623411013Z" level=info msg="shim disconnected" id=0c8f105cfff5d6e95850cd5aa022a0621218ec689f75b1ef6d9e319f3d22bb11 namespace=k8s.io Jun 25 18:55:31.623483 containerd[1448]: time="2024-06-25T18:55:31.623474874Z" level=warning msg="cleaning up after shim disconnected" id=0c8f105cfff5d6e95850cd5aa022a0621218ec689f75b1ef6d9e319f3d22bb11 namespace=k8s.io Jun 25 18:55:31.623483 containerd[1448]: time="2024-06-25T18:55:31.623485184Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:55:31.635793 containerd[1448]: time="2024-06-25T18:55:31.635704962Z" level=warning msg="cleanup warnings time=\"2024-06-25T18:55:31Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jun 25 18:55:31.649370 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c8f105cfff5d6e95850cd5aa022a0621218ec689f75b1ef6d9e319f3d22bb11-rootfs.mount: Deactivated successfully. Jun 25 18:55:32.272974 sshd[4567]: Accepted publickey for core from 172.24.4.1 port 49392 ssh2: RSA SHA256:GTMdf4BYrRkxlHDeNNmEHREHZ8wXAacYhogvVSC0ogs Jun 25 18:55:32.275194 sshd[4567]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:55:32.284292 systemd-logind[1430]: New session 28 of user core. Jun 25 18:55:32.289055 systemd[1]: Started session-28.scope - Session 28 of User core. Jun 25 18:55:32.451333 containerd[1448]: time="2024-06-25T18:55:32.451227821Z" level=info msg="CreateContainer within sandbox \"970764bf73ccde09059031502d684dd1e273d7e78f1719036018641c6988f662\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 25 18:55:32.657395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2599044957.mount: Deactivated successfully. 
Jun 25 18:55:32.680801 containerd[1448]: time="2024-06-25T18:55:32.680315521Z" level=info msg="CreateContainer within sandbox \"970764bf73ccde09059031502d684dd1e273d7e78f1719036018641c6988f662\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"43f0c1640430498006dbc0f3978a48d96888ddc598f7c96203c5d25ba6105c85\"" Jun 25 18:55:32.682613 containerd[1448]: time="2024-06-25T18:55:32.682020535Z" level=info msg="StartContainer for \"43f0c1640430498006dbc0f3978a48d96888ddc598f7c96203c5d25ba6105c85\"" Jun 25 18:55:32.753110 systemd[1]: Started cri-containerd-43f0c1640430498006dbc0f3978a48d96888ddc598f7c96203c5d25ba6105c85.scope - libcontainer container 43f0c1640430498006dbc0f3978a48d96888ddc598f7c96203c5d25ba6105c85. Jun 25 18:55:32.792856 containerd[1448]: time="2024-06-25T18:55:32.792764310Z" level=info msg="StartContainer for \"43f0c1640430498006dbc0f3978a48d96888ddc598f7c96203c5d25ba6105c85\" returns successfully" Jun 25 18:55:33.484459 kubelet[2620]: I0625 18:55:33.484306 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cnlp7" podStartSLOduration=5.484268372 podStartE2EDuration="5.484268372s" podCreationTimestamp="2024-06-25 18:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:55:33.483877308 +0000 UTC m=+148.952286511" watchObservedRunningTime="2024-06-25 18:55:33.484268372 +0000 UTC m=+148.952677524" Jun 25 18:55:34.266076 kernel: cryptd: max_cpu_qlen set to 1000 Jun 25 18:55:34.327016 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic)))) Jun 25 18:55:37.590071 systemd-networkd[1368]: lxc_health: Link UP Jun 25 18:55:37.599645 systemd-networkd[1368]: lxc_health: Gained carrier Jun 25 18:55:37.623925 systemd[1]: run-containerd-runc-k8s.io-43f0c1640430498006dbc0f3978a48d96888ddc598f7c96203c5d25ba6105c85-runc.NEcEQB.mount: Deactivated successfully. Jun 25 18:55:39.323172 systemd-networkd[1368]: lxc_health: Gained IPv6LL Jun 25 18:55:39.957072 systemd[1]: run-containerd-runc-k8s.io-43f0c1640430498006dbc0f3978a48d96888ddc598f7c96203c5d25ba6105c85-runc.woK5pe.mount: Deactivated successfully. Jun 25 18:55:44.825028 sshd[4567]: pam_unix(sshd:session): session closed for user core Jun 25 18:55:44.831230 systemd[1]: sshd@25-172.24.4.127:22-172.24.4.1:49392.service: Deactivated successfully. Jun 25 18:55:44.835632 systemd[1]: session-28.scope: Deactivated successfully. Jun 25 18:55:44.839444 systemd-logind[1430]: Session 28 logged out. Waiting for processes to exit. Jun 25 18:55:44.842999 systemd-logind[1430]: Removed session 28.
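The podStartSLOduration figure logged above (about 5.48 s between pod creation at 18:55:28 and the observed running time at 18:55:33) can be approximated from the pod's own timestamps. A rough sketch, assuming kubeconfig access, and noting that the kubelet's SLO metric is computed slightly differently (it excludes image-pull time):

```python
# Sketch only: approximate the "Observed pod startup duration" from the pod's
# creation time and the Ready condition's lastTransitionTime.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pod = v1.read_namespaced_pod("cilium-cnlp7", "kube-system")
ready = next(c for c in pod.status.conditions if c.type == "Ready")
startup = ready.last_transition_time - pod.metadata.creation_timestamp
print(f"pod startup took ~{startup.total_seconds():.0f}s")  # roughly the 5.48s logged above
```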